Evaluation of Sequencing Approaches for High-Throughput Transcriptomics - (BOSC)
Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. The generation of high-throughput global gene expression...
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
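The constraint checks described in this abstract (GC content, homopolymer length, pairwise Hamming distance, blacklisted subsequences) can be sketched as follows. The thresholds and the naïve quadratic accumulation loop are illustrative assumptions, not the paper's optimized framework; the paper's contribution is precisely avoiding this bottleneck at scale.

```python
import itertools

# Hypothetical constraint thresholds -- the abstract does not give exact values.
GC_MIN, GC_MAX = 0.4, 0.6      # allowed GC-content range
MAX_HOMOPOLYMER = 2            # longest allowed run of one base
MIN_HAMMING = 3                # minimum pairwise Hamming distance
BLACKLIST = {"GGATCC"}         # e.g. restriction sites to avoid (assumed example)

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_run(seq):
    # Length of the longest homopolymer run.
    return max(len(list(g)) for _, g in itertools.groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def passes_local(seq):
    """Constraints that depend on the barcode alone."""
    return (GC_MIN <= gc_content(seq) <= GC_MAX
            and max_run(seq) <= MAX_HOMOPOLYMER
            and not any(b in seq for b in BLACKLIST))

def build_library(candidates):
    """Naive O(n^2) greedy accumulation: keep a candidate only if it
    passes local checks and is far enough from every kept barcode."""
    library = []
    for seq in candidates:
        if passes_local(seq) and all(hamming(seq, kept) >= MIN_HAMMING
                                     for kept in library):
            library.append(seq)
    return library

print(build_library(["ACGTAC", "ACGTAG", "TGCATG", "GCATCG"]))
# -> ['ACGTAC', 'TGCATG']
```

Here "ACGTAG" is rejected for being Hamming distance 1 from "ACGTAC", and "GCATCG" for exceeding the assumed GC ceiling.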
Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)
High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...
Li, Dongfang; Lu, Zhaojun; Zou, Xuecheng; Liu, Zhenglin
2015-01-01
Random number generators (RNG) play an important role in many sensor network systems and applications, such as those requiring secure and robust communications. In this paper, we develop a high-security and high-throughput hardware true random number generator, called PUFKEY, which consists of two kinds of physical unclonable function (PUF) elements. Combined with a conditioning algorithm, true random seeds are extracted from the noise on the start-up pattern of SRAM memories. These true random seeds contain full entropy. Then, the true random seeds are used as the input for a non-deterministic hardware RNG to generate a stream of true random bits with a throughput as high as 803 Mbps. The experimental results show that the bitstream generated by the proposed PUFKEY can pass all standard national institute of standards and technology (NIST) randomness tests and is resilient to a wide range of security attacks. PMID:26501283
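The abstract mentions a conditioning algorithm without specifying it. As a stand-in, the classic von Neumann corrector illustrates how biased raw bits from a physical noise source (such as SRAM start-up patterns) can be conditioned toward unbiased output; this is not necessarily the conditioner PUFKEY uses.

```python
def von_neumann(bits):
    """Classic von Neumann corrector: consume non-overlapping bit pairs;
    (1,0) -> 1, (0,1) -> 0, equal pairs discarded.  Output is unbiased
    whenever input bits are independent with a fixed bias."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# A biased raw stream still yields balanced output bits:
raw = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0]
print(von_neumann(raw))  # -> [1, 0, 1]
```

The price of the unbiased output is throughput: at least half of the raw bits are discarded, which is why hardware designs like the one described pair a strong entropy source with a faster downstream RNG.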
Dawes, Timothy D; Turincio, Rebecca; Jones, Steven W; Rodriguez, Richard A; Gadiagellan, Dhireshan; Thana, Peter; Clark, Kevin R; Gustafson, Amy E; Orren, Linda; Liimatta, Marya; Gross, Daniel P; Maurer, Till; Beresini, Maureen H
2016-02-01
Acoustic droplet ejection (ADE) as a means of transferring library compounds has had a dramatic impact on the way in which high-throughput screening campaigns are conducted in many laboratories. Two Labcyte Echo ADE liquid handlers form the core of the compound transfer operation in our 1536-well based ultra-high-throughput screening (uHTS) system. Use of these instruments has promoted flexibility in compound formatting in addition to minimizing waste and eliminating compound carryover. We describe the use of ADE for the generation of assay-ready plates for primary screening as well as for follow-up dose-response evaluations. Custom software has enabled us to harness the information generated by the ADE instrumentation. Compound transfer via ADE also contributes to the screening process outside of the uHTS system. A second fully automated ADE-based system has been used to augment the capacity of the uHTS system as well as to permit efficient use of previously picked compound aliquots for secondary assay evaluations. Essential to the utility of ADE in the high-throughput screening process is the high quality of the resulting data. Examples of data generated at various stages of high-throughput screening campaigns are provided. Advantages and disadvantages of the use of ADE in high-throughput screening are discussed. © 2015 Society for Laboratory Automation and Screening.
Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao
2016-04-01
The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
Besaratinia, Ahmad; Li, Haiqing; Yoon, Jae-In; Zheng, Albert; Gao, Hanlin; Tommasi, Stella
2012-01-01
Many carcinogens leave a unique mutational fingerprint in the human genome. These mutational fingerprints manifest as specific types of mutations often clustering at certain genomic loci in tumor genomes from carcinogen-exposed individuals. To develop a high-throughput method for detecting the mutational fingerprint of carcinogens, we have devised a cost-, time- and labor-effective strategy, in which the widely used transgenic Big Blue® mouse mutation detection assay is made compatible with the Roche/454 Genome Sequencer FLX Titanium next-generation sequencing technology. As proof of principle, we have used this novel method to establish the mutational fingerprints of three prominent carcinogens with varying mutagenic potencies, including sunlight ultraviolet radiation, 4-aminobiphenyl and secondhand smoke that are known to be strong, moderate and weak mutagens, respectively. For verification purposes, we have compared the mutational fingerprints of these carcinogens obtained by our newly developed method with those obtained by parallel analyses using the conventional low-throughput approach, that is, standard mutation detection assay followed by direct DNA sequencing using a capillary DNA sequencer. We demonstrate that this high-throughput next-generation sequencing-based method is highly specific and sensitive to detect the mutational fingerprints of the tested carcinogens. The method is reproducible, and its accuracy is comparable with that of the currently available low-throughput method. In conclusion, this novel method has the potential to move the field of carcinogenesis forward by allowing high-throughput analysis of mutations induced by endogenous and/or exogenous genotoxic agents. PMID:22735701
Solar fuels photoanode materials discovery by integrating high-throughput theory and experiment
Yan, Qimin; Yu, Jie; Suram, Santosh K.; ...
2017-03-06
The limited number of known low-band-gap photoelectrocatalytic materials poses a significant challenge for the generation of chemical fuels from sunlight. Here, using high-throughput ab initio theory with experiments in an integrated workflow, we find eight ternary vanadate oxide photoanodes in the target band-gap range (1.2-2.8 eV). Detailed analysis of these vanadate compounds reveals the key role of VO4 structural motifs and electronic band-edge character in efficient photoanodes, initiating a genome for such materials and paving the way for a broadly applicable high-throughput-discovery and materials-by-design feedback loop. Considerably expanding the number of known photoelectrocatalysts for water oxidation, our study establishes ternary metal vanadates as a prolific class of photoanode materials for generation of chemical fuels from sunlight and demonstrates our high-throughput theory-experiment pipeline as a prolific approach to materials discovery.
CrossCheck: an open-source web tool for high-throughput screen data analysis.
Najafov, Jamil; Najafov, Ayaz
2017-07-19
Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
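The core cross-referencing step described here can be illustrated with plain set intersection: intersect a user gene list with each published dataset and rank datasets by overlap. The dataset names and gene symbols below are hypothetical, not drawn from the CrossCheck database.

```python
# Hypothetical mini-version of the cross-referencing step.
published = {
    "RNAi_screen_A":   {"TP53", "ATM", "CHEK2", "BRCA1"},
    "CRISPR_screen_B": {"RIPK1", "CASP8", "TP53"},
    "interactome_C":   {"EGFR", "GRB2", "SOS1"},
}

def cross_check(user_genes, datasets):
    # Intersect the user list with every dataset.
    hits = {name: sorted(genes & user_genes)
            for name, genes in datasets.items()}
    # Report datasets with at least one shared gene, most overlap first.
    return sorted(((name, h) for name, h in hits.items() if h),
                  key=lambda t: -len(t[1]))

print(cross_check({"TP53", "RIPK1", "MYC"}, published))
# -> [('CRISPR_screen_B', ['RIPK1', 'TP53']), ('RNAi_screen_A', ['TP53'])]
```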
I describe research on high throughput exposure and toxicokinetics. These tools provide context for data generated by high throughput toxicity screening to allow risk-based prioritization of thousands of chemicals.
Life in the fast lane: high-throughput chemistry for lead generation and optimisation.
Hunter, D
2001-01-01
The pharmaceutical industry has come under increasing pressure due to regulatory restrictions on the marketing and pricing of drugs, competition, and the escalating costs of developing new drugs. These forces can be addressed by the identification of novel targets, reductions in the development time of new drugs, and increased productivity. Emphasis has been placed on identifying and validating new targets and on lead generation: the response from industry has been very evident in genomics and high throughput screening, where new technologies have been applied, usually coupled with a high degree of automation. The combination of numerous new potential biological targets and the ability to screen large numbers of compounds against many of these targets has generated the need for large diverse compound collections. To address this requirement, high-throughput chemistry has become an integral part of the drug discovery process. Copyright 2002 Wiley-Liss, Inc.
Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich
2006-01-01
The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be quite comparable with other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have speeded up our drug discovery process greatly.
ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
High Throughput Assays and Exposure Science (ISES annual meeting)
High throughput screening (HTS) data characterizing chemical-induced biological activity has been generated for thousands of environmentally-relevant chemicals by the US inter-agency Tox21 and the US EPA ToxCast programs. For a limited set of chemicals, bioactive concentrations r...
Accounting For Uncertainty in The Application Of High Throughput Datasets
The use of high throughput screening (HTS) datasets will need to adequately account for uncertainties in the data generation process and propagate these uncertainties through to ultimate use. Uncertainty arises at multiple levels in the construction of predictors using in vitro ...
Inter-Individual Variability in High-Throughput Risk Prioritization of Environmental Chemicals (Sot)
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have...
We incorporate inter-individual variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which hav...
Evaluating and Refining High Throughput Tools for Toxicokinetics
This poster summarizes efforts of the Chemical Safety for Sustainability's Rapid Exposure and Dosimetry (RED) team to facilitate the development and refinement of toxicokinetics (TK) tools to be used in conjunction with the high throughput toxicity testing data generated as a par...
Next-Generation High-Throughput Functional Annotation of Microbial Genomes.
Baric, Ralph S; Crosson, Sean; Damania, Blossom; Miller, Samuel I; Rubin, Eric J
2016-10-04
Host infection by microbial pathogens cues global changes in microbial and host cell biology that facilitate microbial replication and disease. The complete maps of thousands of bacterial and viral genomes have recently been defined; however, the rate at which physiological or biochemical functions have been assigned to genes has greatly lagged. The National Institute of Allergy and Infectious Diseases (NIAID) addressed this gap by creating functional genomics centers dedicated to developing high-throughput approaches to assign gene function. These centers require broad-based and collaborative research programs to generate and integrate diverse data to achieve a comprehensive understanding of microbial pathogenesis. High-throughput functional genomics can lead to new therapeutics and better understanding of the next generation of emerging pathogens by rapidly defining new general mechanisms by which organisms cause disease and replicate in host tissues and by facilitating the rate at which functional data reach the scientific community. Copyright © 2016 Baric et al.
Fujimori, Shigeo; Hirai, Naoya; Ohashi, Hiroyuki; Masuoka, Kazuyo; Nishikimi, Akihiko; Fukui, Yoshinori; Washio, Takanori; Oshikubo, Tomohiro; Yamashita, Tatsuhiro; Miyamoto-Sato, Etsuko
2012-01-01
Next-generation sequencing (NGS) has been applied to various kinds of omics studies, resulting in many biological and medical discoveries. However, high-throughput protein-protein interactome datasets derived from detection by sequencing are scarce, because protein-protein interaction analysis requires many cell manipulations to examine the interactions. The low reliability of the high-throughput data is also a problem. Here, we describe a cell-free display technology combined with NGS that can improve both the coverage and reliability of interactome datasets. The completely cell-free method gives a high-throughput and a large detection space, testing the interactions without using clones. The quantitative information provided by NGS reduces the number of false positives. The method is suitable for the in vitro detection of proteins that interact not only with the bait protein, but also with DNA, RNA and chemical compounds. Thus, it could become a universal approach for exploring the large space of protein sequences and interactome networks. PMID:23056904
NCBI GEO: archive for high-throughput functional genomic data.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F; Soboleva, Alexandra; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Muertter, Rolf N; Edgar, Ron
2009-01-01
The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest public repository for high-throughput gene expression data. Additionally, GEO hosts other categories of high-throughput functional genomic data, including those that examine genome copy number variations, chromatin structure, methylation status and transcription factor binding. These data are generated by the research community using high-throughput technologies like microarrays and, more recently, next-generation sequencing. The database has a flexible infrastructure that can capture fully annotated raw and processed data, enabling compliance with major community-derived scientific reporting standards such as 'Minimum Information About a Microarray Experiment' (MIAME). In addition to serving as a centralized data storage hub, GEO offers many tools and features that allow users to effectively explore, analyze and download expression data from both gene-centric and experiment-centric perspectives. This article summarizes the GEO repository structure, content and operating procedures, as well as recently introduced data mining features. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/.
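Beyond the web interface, GEO records can also be queried programmatically through NCBI's E-utilities (esearch against the `gds` database). The sketch below only constructs the query URL, so it runs without network access; the search term is illustrative.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint; "gds" is the GEO DataSets database.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def geo_search_url(term, retmax=20):
    """Build an esearch URL returning up to `retmax` GEO record IDs as JSON."""
    params = {"db": "gds", "term": term, "retmax": retmax, "retmode": "json"}
    return EUTILS + "?" + urlencode(params)

url = geo_search_url('high throughput sequencing AND "Homo sapiens"[Organism]')
print(url)
```

Fetching the resulting URL with any HTTP client returns matching GEO accession IDs, which can then be retrieved individually.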
Environmental surveillance and monitoring. The next frontiers for high-throughput toxicology
High throughput toxicity testing (HTT) technologies along with the world-wide web are revolutionizing both generation and access to data regarding the bioactivities that chemicals can elicit when they interact with specific proteins, genes, or other targets in the body of an orga...
Increasing efficiency and declining cost of generating whole transcriptome profiles has made high-throughput transcriptomics a practical option for chemical bioactivity screening. The resulting data output provides information on the expression of thousands of genes and is amenab...
High Throughput Assays for Exposure Science (NIEHS OHAT Staff Meeting presentation)
High throughput screening (HTS) data that characterize chemically induced biological activity have been generated for thousands of chemicals by the US interagency Tox21 and the US EPA ToxCast programs. In many cases there are no data available for comparing bioactivity from HTS w...
The vast datasets generated by next generation gene sequencing and expression profiling have transformed biological and translational research. However, technologies to produce large-scale functional genomics datasets, such as high-throughput detection of protein-protein interactions (PPIs), are still in early development. While a number of powerful technologies have been employed to detect PPIs, a singular PPI biosensor platform featuring both high sensitivity and robustness in a mammalian cell environment remains to be established.
Taggart, David J.; Camerlengo, Terry L.; Harrison, Jason K.; Sherrer, Shanen M.; Kshetry, Ajay K.; Taylor, John-Stephen; Huang, Kun; Suo, Zucai
2013-01-01
Cellular genomes are constantly damaged by endogenous and exogenous agents that covalently and structurally modify DNA to produce DNA lesions. Although most lesions are mended by various DNA repair pathways in vivo, a significant number of damage sites persist during genomic replication. Our understanding of the mutagenic outcomes derived from these unrepaired DNA lesions has been hindered by the low throughput of existing sequencing methods. Therefore, we have developed a cost-effective high-throughput short oligonucleotide sequencing assay that uses next-generation DNA sequencing technology for the assessment of the mutagenic profiles of translesion DNA synthesis catalyzed by any error-prone DNA polymerase. The vast amount of sequencing data produced were aligned and quantified by using our novel software. As an example, the high-throughput short oligonucleotide sequencing assay was used to analyze the types and frequencies of mutations upstream, downstream and at a site-specifically placed cis–syn thymidine–thymidine dimer generated individually by three lesion-bypass human Y-family DNA polymerases. PMID:23470999
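The alignment-and-quantification step described in this abstract can be sketched as a per-position substitution tally over sequenced oligonucleotides. This toy version assumes equal-length reads and ignores insertions and deletions, unlike the authors' software.

```python
from collections import Counter

def mutation_profile(reads, reference):
    """Tally substitution types (e.g. 'C>T') at each position by
    comparing equal-length sequenced oligos to the reference."""
    profile = Counter()
    for read in reads:
        for pos, (r, q) in enumerate(zip(reference, read)):
            if q != r:
                profile[(pos, f"{r}>{q}")] += 1
    return profile

reference = "ACGTT"
reads = ["ACGTT", "ATGTT", "ATGTA"]   # two C>T events at position 1
print(mutation_profile(reads, reference))
# -> Counter({(1, 'C>T'): 2, (4, 'T>A'): 1})
```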
An industrial engineering approach to laboratory automation for high throughput screening
Menke, Karl C.
2000-01-01
Across the pharmaceutical industry, there are a variety of approaches to laboratory automation for high throughput screening. At Sphinx Pharmaceuticals, the principles of industrial engineering have been applied to systematically identify and develop those automated solutions that provide the greatest value to the scientists engaged in lead generation. PMID:18924701
USDA-ARS's Scientific Manuscript database
The rapid advancement in high-throughput SNP genotyping technologies along with next generation sequencing (NGS) platforms has decreased the cost, improved the quality of large-scale genome surveys, and allowed specialty crops with limited genomic resources such as carrot (Daucus carota) to access t...
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics (TK). While HTS generates in vitro bioactivity d...
USEPA’s ToxCast program has generated high-throughput bioactivity screening (HTS) data on thousands of chemicals. The ToxCast program has described and annotated the HTS assay battery with respect to assay design and target information (e.g., gene target). Recent stakeholder and ...
Wu, Han; Chen, Xinlian; Gao, Xinghua; Zhang, Mengying; Wu, Jinbo; Wen, Weijia
2018-04-03
High-throughput measurements can be achieved using droplet-based assays. In this study, we exploited the principles of wetting behavior and capillarity to guide liquids sliding along a solid surface with hybrid wettability. Oil-covered droplet arrays with uniformly sized and regularly shaped picoliter droplets were successfully generated on hydrophilic-in-hydrophobic patterned substrates. More than ten thousand 31-pL droplets were generated in 5 s without any sophisticated instruments. Covering the droplet arrays with oil during generation not only isolated the droplets from each other but also effectively prevented droplet evaporation. The oil-covered droplet arrays could be stored for more than 2 days with less than 35% volume loss. Single microspheres, microbial cells, or mammalian cells were successfully captured in the droplets. We demonstrate that Escherichia coli could be encapsulated at a certain number (1-4) and cultured for 3 days in droplets. Cell population and morphology were dynamically tracked within individual droplets. Our droplet array generation method enables high-throughput processing and is facile, efficient, and low-cost; in addition, the prepared droplet arrays have enormous potential for applications in chemical and biological assays.
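Cell encapsulation statistics in droplet arrays of this kind are commonly modeled as Poisson loading. The mean occupancy below is an assumed value, not a figure from the study; it shows how to estimate the fraction of droplets holding the 1-4 cells the authors report.

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k cells in a droplet under Poisson loading."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 2.0  # assumed mean cells per 31-pL droplet
p_1_to_4 = sum(poisson_pmf(k, lam) for k in range(1, 5))
print(f"P(1-4 cells per droplet) = {p_1_to_4:.3f}")  # -> 0.812
```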
Economic consequences of high throughput maskless lithography
NASA Astrophysics Data System (ADS)
Hartley, John G.; Govindaraju, Lakshmi
2005-11-01
Many people in the semiconductor industry bemoan the high costs of masks and view mask cost as one of the significant barriers to bringing new chip designs to market. All that is needed is a viable maskless technology and the problem will go away. Numerous sites around the world are working on maskless lithography but inevitably, the question asked is "Wouldn't a one wafer per hour maskless tool make a really good mask writer?" Of course, the answer is yes; the hesitation you hear in the answer isn't based on technology concerns, it's financial. The industry needs maskless lithography because mask costs are too high. Mask costs are too high because mask pattern generators (PG's) are slow and expensive. If mask PG's become much faster, mask costs go down, the maskless market goes away and the PG supplier is faced with an even smaller tool demand from the mask shops. Technical success becomes financial suicide - or does it? In this paper we will present the results of a model that examines some of the consequences of introducing high throughput maskless pattern generation. Specific features in the model include tool throughput for masks and wafers, market segmentation by node for masks and wafers and mask cost as an entry barrier to new chip designs. How does the availability of low cost masks and maskless tools affect the industry's tool makeup and what is the ultimate potential market for high throughput maskless pattern generators?
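The feedback loop this abstract describes can be framed as a toy model: faster pattern generators lower per-mask cost, which shrinks the population of designs priced into maskless alternatives. Every number below is an illustrative assumption, not data from the authors' model.

```python
def mask_cost(pg_throughput_masks_per_day, fixed_cost=1_000_000.0):
    """Assume cost per mask scales inversely with PG throughput (toy model)."""
    return fixed_cost / pg_throughput_masks_per_day

def maskless_demand(mask_price, price_threshold=250_000.0, base_designs=100):
    """Designs priced out of mask sets fall back to maskless tools."""
    if mask_price <= price_threshold:
        return 0
    # Fraction of designs that cannot afford the mask set.
    excess = min(1.0, (mask_price - price_threshold) / mask_price)
    return round(base_designs * excess)

# Doubling and octupling PG throughput collapses the maskless market:
for tput in (1, 2, 8):
    price = mask_cost(tput)
    print(tput, price, maskless_demand(price))
```

Under these assumptions an 8x throughput gain drives the per-mask price below the affordability threshold and maskless demand to zero, which is the "financial suicide" tension the paper examines.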
Liu, Gary W; Livesay, Brynn R; Kacherovsky, Nataly A; Cieslewicz, Maryelise; Lutz, Emi; Waalkes, Adam; Jensen, Michael C; Salipante, Stephen J; Pun, Suzie H
2015-08-19
Peptide ligands are used to increase the specificity of drug carriers to their target cells and to facilitate intracellular delivery. One method to identify such peptide ligands, phage display, enables high-throughput screening of peptide libraries for ligands binding to therapeutic targets of interest. However, conventional methods for identifying target binders in a library by Sanger sequencing are low-throughput, labor-intensive, and provide a limited perspective (<0.01%) of the complete sequence space. Moreover, the small sample space can be dominated by nonspecific, preferentially amplifying "parasitic sequences" and plastic-binding sequences, which may lead to the identification of false positives or exclude the identification of target-binding sequences. To overcome these challenges, we employed next-generation Illumina sequencing to couple high-throughput screening and high-throughput sequencing, enabling more comprehensive access to the phage display library sequence space. In this work, we define the hallmarks of binding sequences in next-generation sequencing data, and develop a method that identifies several target-binding phage clones for murine, alternatively activated M2 macrophages with a high (100%) success rate: sequences and binding motifs were reproducibly present across biological replicates; binding motifs were identified across multiple unique sequences; and an unselected, amplified library accurately filtered out parasitic sequences. In addition, we validate the Multiple Em for Motif Elicitation tool as an efficient and principled means of discovering binding sequences.
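The binder-identification strategy described above (sequences reproducibly present across biological replicates, filtered against an unselected, amplified control library to remove parasitic sequences) can be sketched in a few lines. This is an illustrative reconstruction with assumed read counts and thresholds, not the authors' actual pipeline:

```python
from collections import Counter

def candidate_binders(replicates, amplified_library, min_count=5, max_bg_frac=1e-4):
    """Return sequences reproducibly enriched across selected replicates,
    excluding parasitic (amplification-biased) sequences that dominate the
    unselected, amplified control library. Thresholds are illustrative."""
    # Count reads per unique sequence in each biological replicate.
    rep_counts = [Counter(r) for r in replicates]
    # Background frequency of each sequence in the amplified control.
    bg = Counter(amplified_library)
    bg_total = sum(bg.values()) or 1
    # Keep only sequences observed in every replicate.
    shared = set(rep_counts[0])
    for rc in rep_counts[1:]:
        shared &= set(rc)
    hits = []
    for seq in shared:
        well_supported = all(rc[seq] >= min_count for rc in rep_counts)
        not_parasitic = bg[seq] / bg_total <= max_bg_frac
        if well_supported and not_parasitic:
            hits.append(seq)
    return sorted(hits)
```

A sequence abundant in the amplified control is treated as an amplification artifact even if it appears in every selection round, mirroring the filtering logic the abstract describes.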
High-throughput analysis of yeast replicative aging using a microfluidic system
Jo, Myeong Chan; Liu, Wei; Gu, Liang; Dang, Weiwei; Qin, Lidong
2015-01-01
Saccharomyces cerevisiae has been an important model for studying the molecular mechanisms of aging in eukaryotic cells. However, the laborious and low-throughput methods of current yeast replicative lifespan assays limit their usefulness as a broad genetic screening platform for research on aging. We address this limitation by developing an efficient, high-throughput microfluidic single-cell analysis chip in combination with high-resolution time-lapse microscopy. This innovative design enables, to our knowledge for the first time, the determination of the yeast replicative lifespan in a high-throughput manner. Morphological and phenotypical changes during aging can also be monitored automatically with a much higher throughput than previous microfluidic designs. We demonstrate highly efficient trapping and retention of mother cells, determination of the replicative lifespan, and tracking of yeast cells throughout their entire lifespan. Using the high-resolution and large-scale data generated from the high-throughput yeast aging analysis (HYAA) chips, we investigated particular longevity-related changes in cell morphology and characteristics, including critical cell size, terminal morphology, and protein subcellular localization. In addition, because of the significantly improved retention rate of yeast mother cells, the HYAA-Chip was capable of demonstrating replicative lifespan extension by calorie restriction. PMID:26170317
Neto, A I; Correia, C R; Oliveira, M B; Rial-Hermida, M I; Alvarez-Lorenzo, C; Reis, R L; Mano, J F
2015-04-01
We propose a novel hanging spherical drop system for anchoring arrays of droplets of cell suspension based on the use of biomimetic superhydrophobic flat substrates, with controlled positional adhesion and minimum contact with a solid substrate. By inverting the platform, it was possible to generate independent spheroid bodies in a high-throughput manner, in order to mimic in vivo tumour models on the lab-on-chip scale. To validate this system for drug screening purposes, the toxicity of the anti-cancer drug doxorubicin in cell spheroids was tested and compared to cells in 2D culture. The advantages presented by this platform, such as the feasibility of the system and the ability to control the size uniformity of the spheroid, emphasize its potential to be used as a new low-cost toolbox for high-throughput drug screening and in cell or tissue engineering.
Novel Acoustic Loading of a Mass Spectrometer: Toward Next-Generation High-Throughput MS Screening.
Sinclair, Ian; Stearns, Rick; Pringle, Steven; Wingfield, Jonathan; Datwani, Sammy; Hall, Eric; Ghislain, Luke; Majlof, Lars; Bachman, Martin
2016-02-01
High-throughput, direct measurement of substrate-to-product conversion by label-free detection, without the need for engineered substrates or secondary assays, could be considered the "holy grail" of drug discovery screening. Mass spectrometry (MS) has the potential to be part of this ultimate screening solution, but is constrained by the limitations of existing MS sample introduction modes that cannot meet the throughput requirements of high-throughput screening (HTS). Here we report data from a prototype system (Echo-MS) that uses acoustic droplet ejection (ADE) to transfer femtoliter-scale droplets in a rapid, precise, and accurate fashion directly into the MS. The acoustic source can load samples into the MS from a microtiter plate at a rate of up to three samples per second. The resulting MS signal displays a very sharp attack profile and ions are detected within 50 ms of activation of the acoustic transducer. Additionally, we show that the system is capable of generating multiply charged ion species from simple peptides and large proteins. The combination of high speed and low sample volume has significant potential within not only drug discovery, but also other areas of the industry. © 2015 Society for Laboratory Automation and Screening.
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate considerable amounts of data which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).
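HTDP itself is a GUI program, but the kind of criteria-based filtering of character-delimited column data it performs can be illustrated with a minimal sketch. The data, column indices, and predicate below are hypothetical, not HTDP code:

```python
import csv
import io

def filter_rows(text, column, predicate, delimiter="\t"):
    """Keep rows of character-delimited column data (BED-like here)
    where predicate(row[column]) holds. Rows too short to contain the
    requested column are skipped rather than raising an error."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    return [row for row in reader if len(row) > column and predicate(row[column])]

# Hypothetical BED-style input: chromosome, start, end.
bed = "chr1\t100\t200\nchr2\t300\t400\nchr1\t500\t600\n"
chr1_rows = filter_rows(bed, 0, lambda v: v == "chr1")
```

The same shape of function covers the "external criteria file" case: load the allowed values from a second file into a set and pass `lambda v: v in allowed` as the predicate.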
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
High-performance single cell genetic analysis using microfluidic emulsion generator arrays.
Zeng, Yong; Novak, Richard; Shuga, Joe; Smith, Martyn T; Mathies, Richard A
2010-04-15
High-throughput genetic and phenotypic analysis at the single cell level is critical to advance our understanding of the molecular mechanisms underlying cellular function and dysfunction. Here we describe a high-performance single cell genetic analysis (SCGA) technique that combines high-throughput microfluidic emulsion generation with single cell multiplex polymerase chain reaction (PCR). Microfabricated emulsion generator array (MEGA) devices containing 4, 32, and 96 channels are developed to confer a flexible capability of generating up to 3.4 × 10^6 nanoliter-volume droplets per hour. Hybrid glass-polydimethylsiloxane diaphragm micropumps integrated into the MEGA chips afford uniform droplet formation, controlled generation frequency, and effective transportation and encapsulation of primer functionalized microbeads and cells. A multiplex single cell PCR method is developed to detect and quantify both wild type and mutant/pathogenic cells. In this method, microbeads functionalized with multiple forward primers targeting specific genes from different cell types are used for solid-phase PCR in droplets. Following PCR, the droplets are lysed and the beads are pooled and rapidly analyzed by multicolor flow cytometry. Using Escherichia coli bacterial cells as a model, we show that this technique enables digital detection of pathogenic E. coli O157 cells in a high background of normal K12 cells, with a detection limit on the order of 1/10^5. This result demonstrates that multiplex SCGA is a promising tool for high-throughput quantitative digital analysis of genetic variation in complex populations.
A Multidisciplinary Approach to High Throughput Nuclear Magnetic Resonance Spectroscopy
Pourmodheji, Hossein; Ghafar-Zadeh, Ebrahim; Magierowski, Sebastian
2016-01-01
Nuclear Magnetic Resonance (NMR) is a non-contact, powerful structure-elucidation technique for biochemical analysis. NMR spectroscopy is used extensively in a variety of life science applications including drug discovery. However, existing NMR technology is limited in that it cannot run a large number of experiments simultaneously in one unit. Recent advances in micro-fabrication technologies have attracted the attention of researchers to overcome these limitations and significantly accelerate the drug discovery process by developing the next generation of high-throughput NMR spectrometers using Complementary Metal Oxide Semiconductor (CMOS) technology. In this paper, we examine this paradigm shift and explore new design strategies for the development of the next generation of high-throughput NMR spectrometers using CMOS technology. A CMOS NMR system consists of an array of high sensitivity micro-coils integrated with interfacing radio-frequency circuits on the same chip. Herein, we first discuss the key challenges and recent advances in the field of CMOS NMR technology, and then a new design strategy is put forward for the design and implementation of highly sensitive and high-throughput CMOS NMR spectrometers. We thereafter discuss the functionality and applicability of the proposed techniques by demonstrating the results. For microelectronic researchers starting to work in the field of CMOS NMR technology, this paper serves as a tutorial with a comprehensive review of state-of-the-art technologies and their performance levels. Based on these levels, the CMOS NMR approach offers unique advantages for the high resolution, time-sensitive and high-throughput biomolecular analysis required in a variety of life science applications including drug discovery. PMID:27294925
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shihui; Franden, Mary A; Yang, Qing
The aim of this work was to identify inhibitors in pretreated lignocellulosic slurries, evaluate high-throughput screening strategies, and investigate the impact of inhibitors on potential hydrocarbon-producing microorganisms. Compounds present in slurries that could inhibit microbial growth were identified through a detailed analysis of saccharified slurries by applying a combination of high-performance liquid chromatography, GC-MS, LC-DAD-MS, and ICP-MS. Several high-throughput assays were then evaluated to generate toxicity profiles. Our results demonstrated that Bioscreen C was useful for analyzing bacterial toxicity but not for yeast. The AlamarBlue reduction assay can be a useful high-throughput assay for both bacterial and yeast strains as long as medium components do not interfere with fluorescence measurements. In addition, this work identified two major inhibitors (furfural and ammonium acetate) for three potential hydrocarbon-producing bacterial species, Escherichia coli, Cupriavidus necator, and Rhodococcus opacus PD630, which are also the primary inhibitors for ethanologens. This study strove to establish a pipeline to quantify inhibitory compounds in biomass slurries and high-throughput approaches to investigate the effect of inhibitors on microbial biocatalysts, which can be applied to various biomass slurries or hydrolyzates generated through different pretreatment and enzymatic hydrolysis processes, or to different microbial candidates.
Camilo, Cesar M; Lima, Gustavo M A; Maluf, Fernando V; Guido, Rafael V C; Polikarpov, Igor
2016-01-01
Following the burgeoning of genomic and transcriptomic sequencing data, biochemical and molecular biology groups worldwide are implementing high-throughput cloning and mutagenesis facilities in order to obtain a large number of soluble proteins for structural and functional characterization. Since manual primer design can be a time-consuming and error-prone step, particularly when working with hundreds of targets, automation of the primer design process becomes highly desirable. HTP-OligoDesigner was created to provide the scientific community with a simple and intuitive online primer design tool for both laboratory-scale and high-throughput projects of sequence-independent gene cloning and site-directed mutagenesis, and a Tm calculator for quick queries.
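A quick-query Tm estimate of the kind such a calculator might offer can be sketched with the classic Wallace rule for short oligos and a GC-fraction formula for longer primers. This is a generic textbook approximation, not HTP-OligoDesigner's actual method:

```python
def primer_tm(seq):
    """Rough melting-temperature estimate (degrees C) for a primer.
    Wallace rule Tm = 2(A+T) + 4(G+C) for oligos shorter than 14 nt;
    the GC-fraction formula 64.9 + 41*(GC - 16.4)/N otherwise.
    A quick screen, not a nearest-neighbor thermodynamic calculation."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T")
    n = len(seq)
    if n < 14:
        return 2 * at + 4 * gc
    return 64.9 + 41 * (gc - 16.4) / n
```

For production primer design, a nearest-neighbor model with salt and primer-concentration corrections would replace this, but the two-branch rule above is a common first-pass filter.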
Huber, Robert; Ritter, Daniel; Hering, Till; Hillmer, Anne-Kathrin; Kensy, Frank; Müller, Carsten; Wang, Le; Büchs, Jochen
2009-08-01
In industry and academic research, there is an increasing demand for flexible automated microfermentation platforms with advanced sensing technology. However, up to now, conventional platforms cannot generate continuous data in high-throughput cultivations, in particular for monitoring biomass and fluorescent proteins. Furthermore, microfermentation platforms are needed that can easily combine cost-effective, disposable microbioreactors with downstream processing and analytical assays. To meet this demand, a novel automated microfermentation platform consisting of a BioLector and a liquid-handling robot (Robo-Lector) was successfully built and tested. The BioLector provides a cultivation system that is able to permanently monitor microbial growth and the fluorescence of reporter proteins under defined conditions in microtiter plates. Three exemplary methods were programmed on the Robo-Lector platform to study high-throughput cultivation processes in detail, especially recombinant protein expression. The host/vector system E. coli BL21(DE3) pRhotHi-2-EcFbFP, expressing the fluorescence protein EcFbFP, was investigated. With the method 'induction profiling' it was possible to conduct 96 different induction experiments (varying inducer concentrations from 0 to 1.5 mM IPTG at 8 different induction times) simultaneously in an automated way. The method 'biomass-specific induction' allowed cultures with different growth kinetics in a microtiter plate to be induced automatically at the same biomass concentration, which resulted in a relative standard deviation of EcFbFP production of only +/- 7%. The third method, 'biomass-specific replication', enabled the generation of equal initial biomass concentrations in main cultures from precultures with different growth kinetics.
This was realized by automatically transferring an appropriate inoculum volume from the different preculture microtiter wells to respective wells of the main culture plate, where subsequently similar growth kinetics could be obtained. The Robo-Lector generates extensive kinetic data in high-throughput cultivations, particularly for biomass and fluorescent protein formation. Based on the non-invasive on-line monitoring signals, actions of the liquid-handling robot can easily be triggered. This interaction between the robot and the BioLector (Robo-Lector) combines high-content data generation with systematic high-throughput experimentation in an automated fashion, offering new possibilities to study biological production systems. The presented platform uses a standard liquid-handling workstation with widespread automation possibilities. Thus, high-throughput cultivations can now be combined with small-scale downstream processing techniques and analytical assays. Ultimately, this novel versatile platform can accelerate and intensify research and development in the field of systems biology as well as modelling and bioprocess optimization.
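The 'biomass-specific induction' idea, triggering a robot action in each well when its on-line biomass signal crosses a preset threshold, can be sketched as follows. The readings and threshold are made up for illustration; this is not the Robo-Lector control software:

```python
def induction_schedule(readings, target_biomass):
    """Given per-well biomass time series (e.g. scattered-light readings),
    return the first time index at which each well crosses the target
    biomass, i.e. when inducer should be added to that well.
    None means the well never reached the target."""
    schedule = {}
    for well, series in readings.items():
        schedule[well] = next(
            (t for t, value in enumerate(series) if value >= target_biomass),
            None)
    return schedule

# Hypothetical monitoring data: two wells with different growth kinetics.
readings = {"A1": [0.1, 0.3, 0.6], "A2": [0.1, 0.2, 0.3]}
plan = induction_schedule(readings, target_biomass=0.5)
```

In a live system this check would run on each monitoring cycle and immediately dispatch a pipetting command, rather than post-processing full time series, but the threshold-crossing logic is the same.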
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.
The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.
NASA Astrophysics Data System (ADS)
Pfeiffer, Hans
1999-12-01
Projection reduction exposure with variable axis immersion lenses (PREVAIL) represents the high throughput e-beam projection approach to next generation lithography (NGL), which IBM is pursuing in cooperation with Nikon Corporation as an alliance partner. This paper discusses the challenges and accomplishments of the PREVAIL project. The supreme challenge facing all e-beam lithography approaches has been and still is throughput. Since the throughput of e-beam projection systems is severely limited by the available optical field size, the key to success is the ability to overcome this limitation. The PREVAIL technique overcomes field-limiting off-axis aberrations through the use of variable axis lenses, which electronically shift the optical axis simultaneously with the deflected beam, so that the beam effectively remains on axis. The resist images obtained with the proof-of-concept (POC) system demonstrate that PREVAIL effectively eliminates off-axis aberrations affecting both the resolution and placement accuracy of pixels. As part of the POC system a high emittance gun has been developed to provide uniform illumination of the patterned subfield, and to fill the large numerical aperture projection optics designed to significantly reduce beam blur caused by Coulomb interaction.
NASA Astrophysics Data System (ADS)
Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela
2016-10-01
Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring multiple samples and conditions, so current automated cell counters that use single disposable counting slides are not practical for high-throughput screening assays. In recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing it directly to the Cellometer automated cell counter. Next, cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required compared with single-sample automated cell counters. In addition, this method can improve the efficiency of high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry or ELISA, or simply plating cells for cell culture.
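Converting live (AO-stained) and dead (PI-stained) counts from an imaged well into a concentration and percent viability is simple bookkeeping. A minimal sketch, with an assumed imaged volume and dilution factor rather than instrument-specific values:

```python
def concentration_and_viability(live, dead, imaged_volume_ul, dilution=1.0):
    """Convert AO (live) and PI (dead) cell counts from an imaged region
    into concentration (cells/mL) and percent viability.
    imaged_volume_ul is the volume actually captured in the image, in uL;
    dilution corrects for any sample dilution before imaging."""
    total = live + dead
    # cells per uL, scaled to cells per mL, corrected for dilution.
    conc_per_ml = total / imaged_volume_ul * 1000.0 * dilution
    viability = 100.0 * live / total if total else 0.0
    return conc_per_ml, viability

# Hypothetical well: 950 live, 50 dead cells counted in 0.5 uL imaged volume.
conc, viab = concentration_and_viability(950, 50, imaged_volume_ul=0.5)
```

The normalization step mentioned in the abstract then follows directly: assay readouts are divided by the live-cell concentration per well so that wells seeded or grown unevenly remain comparable.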
SMARTIV: combined sequence and structure de-novo motif discovery for in-vivo RNA binding data.
Polishchuk, Maya; Paz, Inbal; Yakhini, Zohar; Mandel-Gutfreund, Yael
2018-05-25
Gene expression regulation is highly dependent on binding of RNA-binding proteins (RBPs) to their RNA targets. Growing evidence supports the notion that both RNA primary sequence and its local secondary structure play a role in specific Protein-RNA recognition and binding. Despite the great advance in high-throughput experimental methods for identifying sequence targets of RBPs, predicting the specific sequence and structure binding preferences of RBPs remains a major challenge. We present a novel webserver, SMARTIV, designed for discovering and visualizing combined RNA sequence and structure motifs from high-throughput RNA-binding data, generated from in-vivo experiments. The uniqueness of SMARTIV is that it predicts motifs from enriched k-mers that combine information from ranked RNA sequences and their predicted secondary structure, obtained using various folding methods. Consequently, SMARTIV generates Position Weight Matrices (PWMs) in a combined sequence and structure alphabet with assigned P-values. SMARTIV concisely represents the sequence and structure motif content as a single graphical logo, which is informative and easy for visual perception. SMARTIV was examined extensively on a variety of high-throughput binding experiments for RBPs from different families, generated from different technologies, showing consistent and accurate results. Finally, SMARTIV is a user-friendly webserver, highly efficient in run-time and freely accessible via http://smartiv.technion.ac.il/.
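A position weight matrix over a combined sequence-and-structure alphabet, the core representation SMARTIV reports, can be illustrated with a toy computation. The joint-alphabet encoding below (base paired with a dot-bracket structure symbol) is invented for illustration and is not SMARTIV's algorithm:

```python
from collections import Counter

def combined_pwm(seqs, structs):
    """Build a position weight matrix over a combined sequence/structure
    alphabet. Each aligned position counts joint symbols such as 'A|('
    (paired A) or 'A|.' (unpaired A) and normalizes them to frequencies.
    seqs and structs are equal-length aligned strings (illustrative)."""
    length = len(seqs[0])
    pwm = []
    for i in range(length):
        col = Counter(f"{s[i]}|{st[i]}" for s, st in zip(seqs, structs))
        n = sum(col.values())
        pwm.append({sym: count / n for sym, count in col.items()})
    return pwm

# Two aligned 2-nt sequences with predicted structures ('.' unpaired, '(' paired).
pwm = combined_pwm(["AC", "AG"], ["..", "(."])
```

Each column of such a matrix can then be rendered as a logo letter whose identity encodes the base and whose annotation encodes the structural state, which is the kind of single-logo visualization the abstract describes.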
Role of APOE Isoforms in the Pathogenesis of TBI induced Alzheimer’s Disease
2016-10-01
deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel sequencing... demonstrate that the lack of Abca1 increases amyloid plaques and decreases APOE protein levels in AD-model mice. In this proposal we will test the hypothesis... injury, inflammatory reaction, transcriptome, high throughput massive parallel sequencing, mRNA-seq., behavioral testing, memory impairment, recovery
The Adverse Outcome Pathway (AOP) framework provides a systematic way to describe linkages between molecular and cellular processes and organism or population level effects. The current AOP assembly methods however, are inefficient. Our goal is to generate computationally-pr...
To discover novel PPI signaling hubs for lung cancer, the CTD2 Center at Emory utilized large-scale genomics datasets and the literature to compile a set of lung cancer-associated genes. A library of expression vectors was generated for these genes and used to detect pairwise PPIs with cell lysate-based TR-FRET assays in a high-throughput screening format.
Polonchuk, Liudmila
2014-01-01
Patch-clamping is a powerful technique for investigating ion channel function and regulation. However, its low throughput has hampered the profiling of large compound series in early drug development. Fortunately, automation has revolutionized the area of experimental electrophysiology over the past decade. Whereas the first automated patch-clamp instruments using planar patch-clamp technology demonstrated only moderate throughput, several second-generation automated platforms recently launched by various companies have significantly increased the ability to form a high number of high-resistance seals. Among them is SyncroPatch(®) 96 (Nanion Technologies GmbH, Munich, Germany), a fully automated giga-seal patch-clamp system with the highest throughput on the market. By recording from up to 96 cells simultaneously, the SyncroPatch(®) 96 makes it possible to substantially increase throughput without compromising data quality. This chapter describes features of this innovative automated electrophysiology system and the protocols used for a successful transfer of the established hERG assay to this high-throughput automated platform.
Remodeling Cildb, a popular database for cilia and links for ciliopathies
2014-01-01
Background New generation technologies in cell and molecular biology generate large amounts of data that are hard to exploit for individual proteins. This is particularly true for ciliary and centrosomal research. Cildb is a multi-species knowledgebase gathering high throughput studies, which allows advanced searches to identify proteins involved in centrosome, basal body or cilia biogenesis, composition and function. Combined with the localization of genetic diseases on human chromosomes given by OMIM links, candidate ciliopathy proteins can be compiled through Cildb searches. Methods Orthology between recent versions of the whole proteomes was computed using Inparanoid, and ciliary high throughput studies were remapped onto these recent versions. Results Due to the constant evolution of the ciliary and centrosomal field, Cildb has recently been upgraded twice, with new species whole proteomes and new ciliary studies, and the latest version displays a novel BioMart interface, much more intuitive than the previous ones. Conclusions This already popular database is now designed for easier use and is up to date with regard to high throughput ciliary studies. PMID:25422781
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
2016-05-26
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue is transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have on the generation of phase diagrams and beyond.
Identification of functional modules using network topology and high-throughput data.
Ulitsky, Igor; Shamir, Ron
2007-01-26
With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in the analysis of high-throughput data.
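The core idea (transform expression data into pairwise similarities, then seek connected sub-networks with high similarity) can be sketched as a greedy seed-and-extend heuristic. The toy network, similarity values, and threshold below are illustrative assumptions, not the authors' actual algorithm.

```python
# Sketch: grow a connected, high-similarity module from a seed gene.
# The toy network and similarity scores are invented for illustration.

def mean_sim(sim, module, v):
    """Average similarity between candidate v and current module members."""
    scores = [sim[frozenset((u, v))] for u in module if frozenset((u, v)) in sim]
    return sum(scores) / len(scores) if scores else 0.0

def grow_module(adj, sim, seed, threshold=0.5):
    """Greedily extend a connected sub-network while the mean similarity
    of each added gene to the module stays above `threshold`."""
    module = {seed}
    frontier = set(adj[seed])
    while frontier:
        best = max(frontier, key=lambda v: mean_sim(sim, module, v))
        if mean_sim(sim, module, best) < threshold:
            break
        module.add(best)
        frontier = (frontier | set(adj[best])) - module
    return module

# Toy example: genes A-B-C are co-expressed; D is connected but dissimilar.
adj = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B"], "D": ["A"]}
sim = {frozenset(p): s for p, s in
       [(("A", "B"), 0.9), (("B", "C"), 0.8), (("A", "C"), 0.7), (("A", "D"), 0.1)]}
print(sorted(grow_module(adj, sim, "A")))  # -> ['A', 'B', 'C']
```

The published algorithms are more sophisticated (e.g., with statistical scoring of modules), but the similarity-on-a-network formulation is the same.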
Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang
2017-04-01
Among high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions. This detection throughput cannot meet the demands of high-throughput applications such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, showing that the sensitivity of ME-qPCR is higher than that of the original qPCR. The absolute limit of detection for ME-qPCR was as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods. The amplification results, more stable than those of qPCR, showed that ME-qPCR was suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression analysis of food or agricultural samples. Graphical abstract: For the first-step amplification, four primers (A, B, C, and D) were added to the reaction volume, generating four kinds of amplicons. All four amplicons could serve as targets for the second-step PCR. For the second-step amplification, three parallel reactions were run for the final evaluation. After the second-step amplification, the final amplification curves and melting curves were obtained.
Caraus, Iurie; Alsuwailem, Abdulaziz A; Nadon, Robert; Makarenkov, Vladimir
2015-11-01
Significant efforts have been made recently to improve data throughput and data quality in screening technologies related to drug design. The modern pharmaceutical industry relies heavily on high-throughput screening (HTS) and high-content screening (HCS) technologies, which include small molecule, complementary DNA (cDNA) and RNA interference (RNAi) types of screening. Data generated by these screening technologies are subject to several environmental and procedural systematic biases, which introduce errors into the hit identification process. We first review systematic biases typical of HTS and HCS screens. We highlight that study design issues and the way in which data are generated are crucial for providing unbiased screening results. Considering various data sets, including the publicly available ChemBank data, we assess the rates of systematic bias in experimental HTS by using plate-specific and assay-specific error detection tests. We describe main data normalization and correction techniques and introduce a general data preprocessing protocol. This protocol can be recommended for academic and industrial researchers involved in the analysis of current or next-generation HTS data. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
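As one example of the plate-level correction techniques this kind of review surveys, a robust per-plate Z-score (median/MAD) normalizes well readouts while resisting distortion by the strong hits themselves. The well values below are invented, and this is a generic sketch rather than the authors' specific preprocessing protocol.

```python
# Sketch of a robust per-plate Z-score, one common HTS normalization.
# The plate readouts below are made up for illustration.
import statistics

def robust_zscores(plate):
    """Normalize raw well readouts by the plate median and MAD, so that
    a few strong hits do not skew the normalization itself."""
    med = statistics.median(plate)
    mad = statistics.median(abs(x - med) for x in plate)
    scale = 1.4826 * mad  # makes the MAD comparable to a standard deviation
    return [(x - med) / scale for x in plate]

plate = [100, 102, 98, 101, 99, 350]   # one well is a strong hit
z = robust_zscores(plate)
hits = [i for i, v in enumerate(z) if abs(v) > 3]
print(hits)  # -> [5]
```

Plate-specific corrections such as B-score additionally remove row and column trends; the median/MAD step above is the simplest member of that family.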
Automated crystallographic system for high-throughput protein structure determination.
Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F
2003-07-01
High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.
Huber, Robert; Ritter, Daniel; Hering, Till; Hillmer, Anne-Kathrin; Kensy, Frank; Müller, Carsten; Wang, Le; Büchs, Jochen
2009-01-01
Background In industry and academic research, there is an increasing demand for flexible automated microfermentation platforms with advanced sensing technology. However, up to now, conventional platforms have not been able to generate continuous data in high-throughput cultivations, in particular for monitoring biomass and fluorescent proteins. Furthermore, microfermentation platforms are needed that can easily combine cost-effective, disposable microbioreactors with downstream processing and analytical assays. Results To meet this demand, a novel automated microfermentation platform consisting of a BioLector and a liquid-handling robot (Robo-Lector) was successfully built and tested. The BioLector provides a cultivation system that is able to permanently monitor microbial growth and the fluorescence of reporter proteins under defined conditions in microtiter plates. Three exemplary methods were programmed on the Robo-Lector platform to study high-throughput cultivation processes in detail, especially recombinant protein expression. The host/vector system E. coli BL21(DE3) pRhotHi-2-EcFbFP, expressing the fluorescence protein EcFbFP, was investigated here. With the method 'induction profiling' it was possible to conduct 96 different induction experiments (varying inducer concentrations from 0 to 1.5 mM IPTG at 8 different induction times) simultaneously in an automated way. The method 'biomass-specific induction' made it possible to automatically induce cultures with different growth kinetics in a microtiter plate at the same biomass concentration, which resulted in a relative standard deviation of the EcFbFP production of only ± 7%. The third method, 'biomass-specific replication', enabled the generation of equal initial biomass concentrations in main cultures from precultures with different growth kinetics. This was realized by automatically transferring an appropriate inoculum volume from the different preculture microtiter wells to the respective wells of the main culture plate, where subsequently similar growth kinetics could be obtained. Conclusion The Robo-Lector generates extensive kinetic data in high-throughput cultivations, particularly for biomass and fluorescence protein formation. Based on the non-invasive on-line monitoring signals, actions of the liquid-handling robot can easily be triggered. This interaction between the robot and the BioLector (Robo-Lector) combines high-content data generation with systematic high-throughput experimentation in an automated fashion, offering new possibilities to study biological production systems. The presented platform uses a standard liquid-handling workstation with widespread automation possibilities. Thus, high-throughput cultivations can now be combined with small-scale downstream processing techniques and analytical assays. Ultimately, this novel versatile platform can accelerate and intensify research and development in the field of systems biology as well as modelling and bioprocess optimization. PMID:19646274
Raja, Kalpana; Patrick, Matthew; Gao, Yilin; Madu, Desmond; Yang, Yuyang
2017-01-01
In the past decade, the volume of “omics” data generated by different high-throughput technologies has expanded exponentially. Managing, storing, and analyzing this big data has been a great challenge for researchers, especially when moving towards the goal of generating testable data-driven hypotheses, which has been the promise of high-throughput experimental techniques. Different bioinformatics approaches have been developed to streamline downstream analyses by providing independent information for interpretation and biological inference. Text mining (also known as literature mining) is one of the commonly used approaches for the automated generation of biological knowledge from the huge number of published articles. In this review paper, we discuss recent advances in approaches that integrate results from omics data with information generated by text mining to uncover novel biomedical information. PMID:28331849
The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio).
Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi
2018-01-01
Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.
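Repeatability in the pilot experiment is summarized with an intra-class correlation. A minimal one-way ICC(1) computation is sketched below with invented swim-speed trials; the formula is the standard ANOVA-based estimator, not code taken from the paper's methods.

```python
# Sketch of a one-way intra-class correlation, ICC(1), for repeated
# swim-speed trials. The trial data below are invented for illustration.

def icc_oneway(groups):
    """ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), with k trials per
    subject, MSB the between-subject and MSW the within-subject mean square."""
    k = len(groups[0])
    n = len(groups)
    grand = sum(sum(g) for g in groups) / (n * k)
    ssb = k * sum((sum(g) / k - grand) ** 2 for g in groups)
    ssw = sum((x - sum(g) / k) ** 2 for g in groups for x in g)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two trials per fish; consistent fish yield a high ICC.
trials = [(40.0, 42.0), (55.0, 53.0), (47.0, 48.0), (60.0, 61.0)]
r = icc_oneway(trials)
print(round(r, 2))  # -> 0.98
```

An ICC of 0.34-0.41, as reported, would indicate that between-fish differences explain roughly a third of the variance across repeated trials.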
Overcoming bias and systematic errors in next generation sequencing data.
Taub, Margaret A; Corrada Bravo, Hector; Irizarry, Rafael A
2010-12-10
Considerable time and effort has been spent in developing analysis and quality assessment methods to allow the use of microarrays in a clinical setting. As is the case for microarrays and other high-throughput technologies, data from new high-throughput sequencing technologies are subject to technological and biological biases and systematic errors that can impact downstream analyses. Only when these issues can be readily identified and reliably adjusted for will clinical applications of these new technologies be feasible. Although much work remains to be done in this area, we describe consistently observed biases that should be taken into account when analyzing high-throughput sequencing data. In this article, we review current knowledge about these biases, discuss their impact on analysis results, and propose solutions.
Loeffler 4.0: Diagnostic Metagenomics.
Höper, Dirk; Wylezich, Claudia; Beer, Martin
2017-01-01
A new world of possibilities for "virus discovery" was opened up with high-throughput sequencing becoming available in the last decade. While scientifically metagenomic analysis was established before the start of the era of high-throughput sequencing, the availability of the first second-generation sequencers was the kick-off for diagnosticians to use sequencing for the detection of novel pathogens. Today, diagnostic metagenomics is becoming the standard procedure for the detection and genetic characterization of new viruses or novel virus variants. Here, we provide an overview about technical considerations of high-throughput sequencing-based diagnostic metagenomics together with selected examples of "virus discovery" for animal diseases or zoonoses and metagenomics for food safety or basic veterinary research. © 2017 Elsevier Inc. All rights reserved.
The next generation CdTe technology: substrate foil based solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferekides, Chris
The main objective of this project was the development of one of the most promising photovoltaic (PV) materials, CdTe, into a versatile, cost effective, and high throughput technology, by demonstrating substrate devices on foil substrates using high throughput fabrication conditions. The typical CdTe cell is of the superstrate configuration, where the solar cell is fabricated on a glass superstrate by the sequential deposition of a TCO, n-type heterojunction partner, p-CdTe absorber, and back contact. Large glass modules are heavy and present significant challenges during manufacturing (uniform heating, etc.). If a substrate CdTe cell could be developed (the main goal of this project), a roll-to-roll high throughput technology could be developed.
NASA Astrophysics Data System (ADS)
Lagus, Todd P.; Edd, Jon F.
2013-03-01
Most cell biology experiments are performed in bulk cell suspensions where cell secretions become diluted and mixed in a contiguous sample. Confinement of single cells to small, picoliter-sized droplets within a continuous phase of oil provides chemical isolation of each cell, creating individual microreactors where rare cell qualities are highlighted and otherwise undetectable signals can be concentrated to measurable levels. Recent work in microfluidics has yielded methods for the encapsulation of cells in aqueous droplets and hydrogels at kilohertz rates, creating the potential for millions of parallel single-cell experiments. However, commercial applications of high-throughput microdroplet generation and downstream sensing and actuation methods are still emerging for cells. Using fluorescence-activated cell sorting (FACS) as a benchmark for commercially available high-throughput screening, this focused review discusses the fluid physics of droplet formation, methods for cell encapsulation in liquids and hydrogels, sensors and actuators and notable biological applications of high-throughput single-cell droplet microfluidics.
The application of the high throughput sequencing technology in the transposable elements.
Liu, Zhen; Xu, Jian-hong
2015-09-01
High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing and greatly decreased its costs. It also typically offers better specificity, sensitivity and accuracy. Therefore, it has been applied to research on genetic variation, transcriptomics and epigenomics. Recently, this technology has been widely employed in studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the field of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfer, and transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we discuss the developing trends of high throughput sequencing technology, especially third generation sequencing, and its future application in transposon studies, hopefully providing a comprehensive understanding and reference for related researchers.
High Throughput Assays for Exposure Science (NIEHS OHAT ...
High throughput screening (HTS) data that characterize chemically induced biological activity have been generated for thousands of chemicals by the US interagency Tox21 and the US EPA ToxCast programs. In many cases there are no data available for comparing bioactivity from HTS with relevant human exposures. The EPA’s ExpoCast program is developing high-throughput approaches to generate the needed exposure estimates using existing databases and new, high-throughput measurements. The exposure pathway (i.e., the route of a chemical from manufacture to human intake) significantly impacts the level of exposure. The presence, concentration, and formulation of chemicals in consumer products and articles of commerce (e.g., clothing) can therefore provide critical information for estimating risk. We have found that only limited data are available on the chemical constituents (e.g., flame retardants, plasticizers) of most articles of commerce. Furthermore, the presence of some chemicals in otherwise well characterized products may be due to product packaging. We are analyzing sample consumer products using two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF/MS), which is suited for forensic investigation of chemicals in complex matrices (including toys, cleaners, and food). In parallel, we are working to create a reference library of retention times and spectral information for the entire Tox21 chemical library. In an examination of five p
Assembly and diploid architecture of an individual human genome via single-molecule technologies
Pendleton, Matthew; Sebra, Robert; Pang, Andy Wing Chun; Ummat, Ajay; Franzen, Oscar; Rausch, Tobias; Stütz, Adrian M; Stedman, William; Anantharaman, Thomas; Hastie, Alex; Dai, Heng; Fritz, Markus Hsi-Yang; Cao, Han; Cohain, Ariella; Deikus, Gintaras; Durrett, Russell E; Blanchard, Scott C; Altman, Roger; Chin, Chen-Shan; Guo, Yan; Paxinos, Ellen E; Korbel, Jan O; Darnell, Robert B; McCombie, W Richard; Kwok, Pui-Yan; Mason, Christopher E; Schadt, Eric E; Bashir, Ali
2015-01-01
We present the first comprehensive analysis of a diploid human genome that combines single-molecule sequencing with single-molecule genome maps. Our hybrid assembly markedly improves upon the contiguity observed from traditional shotgun sequencing approaches, with scaffold N50 values approaching 30 Mb, and we identified complex structural variants (SVs) missed by other high-throughput approaches. Furthermore, by combining Illumina short-read data with long reads, we phased both single-nucleotide variants and SVs, generating haplotypes with over 99% consistency with previous trio-based studies. Our work shows that it is now possible to integrate single-molecule and high-throughput sequence data to generate de novo assembled genomes that approach reference quality. PMID:26121404
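The scaffold N50 metric quoted above can be computed as follows; the scaffold lengths in the example are toy values, not the assembly's actual data.

```python
# Sketch: scaffold N50 is the length L such that scaffolds of length >= L
# together cover at least half of the total assembly (toy lengths below).

def n50(lengths):
    """Return the N50 of a list of scaffold or contig lengths."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

print(n50([80, 70, 50, 40, 30, 20]))  # -> 70
```

An N50 near 30 Mb, as reported, means half the assembled bases sit in scaffolds of at least that length.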
Patel, Rajesh; Tsan, Alison; Sumiyoshi, Teiko; Fu, Ling; Desai, Rupal; Schoenbrunner, Nancy; Myers, Thomas W.; Bauer, Keith; Smith, Edward; Raja, Rajiv
2014-01-01
Molecular profiling of tumor tissue to detect alterations, such as oncogenic mutations, plays a vital role in determining treatment options in oncology. Hence, there is an increasing need for a robust and high-throughput technology to detect oncogenic hotspot mutations. Although commercial assays are available to detect genetic alterations in single genes, only a limited amount of tissue is often available from patients, requiring multiplexing to allow for simultaneous detection of mutations in many genes using low DNA input. Even though next-generation sequencing (NGS) platforms provide powerful tools for this purpose, they face challenges such as high cost, large DNA input requirements, complex data analysis, and long turnaround times, limiting their use in clinical settings. We report the development of the next-generation mutation multi-analyte panel (MUT-MAP), a high-throughput microfluidic panel for detecting 120 somatic mutations across eleven genes of therapeutic interest (AKT1, BRAF, EGFR, FGFR3, FLT3, HRAS, KIT, KRAS, MET, NRAS, and PIK3CA) using allele-specific PCR (AS-PCR) and TaqMan technology. This mutation panel requires as little as 2 ng of high-quality DNA from fresh-frozen or 100 ng of DNA from formalin-fixed paraffin-embedded (FFPE) tissues. Mutation calling, including an automated data-analysis process, has been implemented to run 88 samples per day. Validation of this platform using plasmids showed robust signal and low cross-reactivity in all of the newly added assays, and mutation calls in cell line samples were found to be consistent with the Catalogue of Somatic Mutations in Cancer (COSMIC) database, allowing for direct comparison of our platform to Sanger sequencing. High correlation with NGS when compared to the SuraSeq500 panel run on the Ion Torrent platform in an FFPE dilution experiment showed assay sensitivity down to 0.45%.
This multiplexed mutation panel is a valuable tool for high-throughput biomarker discovery in personalized medicine and cancer drug development. PMID:24658394
A multilayer microdevice for cell-based high-throughput drug screening
NASA Astrophysics Data System (ADS)
Liu, Chong; Wang, Lei; Xu, Zheng; Li, Jingmin; Ding, Xiping; Wang, Qi; Chunyu, Li
2012-06-01
A multilayer polydimethylsiloxane microdevice for cell-based high-throughput drug screening is described in this paper. The microdevice was based on a modularization method and integrated a drug/medium concentration gradient generator (CGG), pneumatic microvalves and a cell culture microchamber array. The CGG was able to generate five steps of linear concentrations with the same outlet flow rate. The medium/drug flowed through the CGG and then vertically into the pear-shaped cell culture microchambers. This vertical perfusion mode was used to reduce the impact on cell physiology of the shear stress induced by fluid flow in the microchambers. Pear-shaped microchambers with two arrays of micropillars at each outlet were adopted in this microdevice, which were beneficial to cell distribution. Apoptosis experiments in which the chemotherapeutic cisplatin (DDP) was applied to the cisplatin-resistant cell line A549/DDP were performed successfully on this platform. The results showed that this novel microdevice could not only provide well-defined and stable conditions for cell culture, but was also useful for cell-based high-throughput drug screening with less reagent and time consumption.
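The five linear concentration steps the CGG delivers can be sketched as simple linear interpolation between the two inlet concentrations; the endpoint values below are illustrative, not taken from the paper.

```python
# Sketch: five linearly spaced concentrations between the two CGG inlets,
# as delivered to the microchamber array (endpoint values illustrative).

def linear_steps(c_low, c_high, n=5):
    """Return n evenly spaced concentrations from c_low to c_high."""
    step = (c_high - c_low) / (n - 1)
    return [c_low + i * step for i in range(n)]

print(linear_steps(0.0, 100.0))  # -> [0.0, 25.0, 50.0, 75.0, 100.0]
```

In the device this interpolation is realized physically, by repeated splitting and mixing of the two inlet streams across the gradient-generator network.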
Grandjean, Geoffrey; Graham, Ryan; Bartholomeusz, Geoffrey
2011-11-01
In recent years, high-throughput screening operations have become critical in functional and translational research. Although a seemingly unmanageable amount of data is generated by these high-throughput, large-scale techniques, with careful planning an effective Laboratory Information Management System (LIMS) can be developed and implemented to streamline all phases of a workflow. Just as important as the data mining and analysis procedures at the end of complex processes is the tracking of the individual steps of the applications that generate such data. Ultimately, the use of a customized LIMS enables users to extract meaningful results from large datasets while trusting the robustness of their assays. To illustrate the design of a custom LIMS, this practical example is provided to highlight the important aspects of designing a LIMS to effectively manage all aspects of an siRNA screening service. The system incorporates inventory management, control of workflow, data handling and interaction with investigators, statisticians and administrators. All these modules are regulated in a synchronous manner within the LIMS. © 2011 Bentham Science Publishers
A versatile toolkit for high throughput functional genomics with Trichoderma reesei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuster, Andre; Bruno, Kenneth S.; Collett, James R.
2012-01-02
The ascomycete fungus Trichoderma reesei (anamorph of Hypocrea jecorina) represents a biotechnological workhorse and is currently one of the most proficient cellulase producers. While strain improvement was traditionally accomplished by random mutagenesis, a detailed understanding of cellulase regulation can only be gained using recombinant technologies. RESULTS: Aiming at high efficiency and high throughput methods, we present here a construction kit for gene knock out in T. reesei. We provide a primer database for gene deletion using the pyr4, amdS and hph selection markers. For high throughput generation of gene knock outs, we constructed vectors using yeast mediated recombination and then transformed a T. reesei strain deficient in non-homologous end joining (NHEJ) by spore electroporation. This NHEJ-defect was subsequently removed by crossing of mutants with a sexually competent strain derived from the parental strain, QM9414. CONCLUSIONS: Using this strategy and the materials provided, high throughput gene deletion in T. reesei becomes feasible. Moreover, with the application of sexual development, the NHEJ-defect can be removed efficiently and without the need for additional selection markers. The same advantages apply for the construction of multiple mutants by crossing of strains with different gene deletions, which is now possible with considerably less hands-on time and minimal screening effort compared to a transformation approach. Consequently this toolkit can considerably boost research towards efficient exploitation of the resources of T. reesei for cellulase expression and hence second generation biofuel production.
USDA-ARS?s Scientific Manuscript database
High-throughput next-generation sequencing was used to scan the genome and generate reliable sequence of high copy number regions. Using this method, we examined whole plastid genomes as well as nearly 6000 bases of nuclear ribosomal DNA sequences for nine genotypes of Theobroma cacao and an indivi...
Assessing the health risks of the thousands of chemicals in use requires both toxicology and pharmacokinetic (PK) data that can be generated more quickly. For PK, in vitro clearance assays with hepatocytes and serum protein binding assays provide a means to generate high throughp...
High-throughput annotation of full-length long noncoding RNAs with capture long-read sequencing.
Lagarde, Julien; Uszczynska-Ratajczak, Barbara; Carbonell, Silvia; Pérez-Lluch, Sílvia; Abad, Amaya; Davis, Carrie; Gingeras, Thomas R; Frankish, Adam; Harrow, Jennifer; Guigo, Roderic; Johnson, Rory
2017-12-01
Accurate annotation of genes and their transcripts is a foundation of genomics, but currently no annotation technique combines throughput and accuracy. As a result, reference gene collections remain incomplete: many gene models are fragmentary, and thousands more remain uncataloged, particularly for long noncoding RNAs (lncRNAs). To accelerate lncRNA annotation, the GENCODE consortium has developed RNA Capture Long Seq (CLS), which combines targeted RNA capture with third-generation long-read sequencing. Here we present an experimental reannotation of the GENCODE intergenic lncRNA populations in matched human and mouse tissues that resulted in novel transcript models for 3,574 and 561 gene loci, respectively. CLS approximately doubled the annotated complexity of targeted loci, outperforming existing short-read techniques. Full-length transcript models produced by CLS enabled us to definitively characterize the genomic features of lncRNAs, including promoter and gene structure, and protein-coding potential. Thus, CLS removes a long-standing bottleneck in transcriptome annotation and generates manual-quality full-length transcript models at high-throughput scales.
Hu, Jiazhi; Meyers, Robin M; Dong, Junchao; Panchakshari, Rohit A; Alt, Frederick W; Frock, Richard L
2016-05-01
Unbiased, high-throughput assays for detecting and quantifying DNA double-stranded breaks (DSBs) across the genome in mammalian cells will facilitate basic studies of the mechanisms that generate and repair endogenous DSBs. They will also enable more applied studies, such as those to evaluate the on- and off-target activities of engineered nucleases. Here we describe a linear amplification-mediated high-throughput genome-wide sequencing (LAM-HTGTS) method for the detection of genome-wide 'prey' DSBs via their translocation in cultured mammalian cells to a fixed 'bait' DSB. Bait-prey junctions are cloned directly from isolated genomic DNA using LAM-PCR and unidirectionally ligated to bridge adapters; subsequent PCR steps amplify the single-stranded DNA junction library in preparation for Illumina MiSeq paired-end sequencing. A custom bioinformatics pipeline identifies prey sequences that contribute to junctions and maps them across the genome. LAM-HTGTS differs from related approaches because it detects a wide range of broken end structures with nucleotide-level resolution. Familiarity with nucleic acid methods and next-generation sequencing analysis is necessary for library generation and data interpretation. LAM-HTGTS assays are sensitive, reproducible, relatively inexpensive, scalable and straightforward to implement with a turnaround time of <1 week.
Rapid high-throughput cloning and stable expression of antibodies in HEK293 cells.
Spidel, Jared L; Vaessen, Benjamin; Chan, Yin Yin; Grasso, Luigi; Kline, J Bradford
2016-12-01
Single-cell-based amplification of immunoglobulin variable regions is a rapid and powerful technique for cloning antigen-specific monoclonal antibodies (mAbs) for purposes ranging from general laboratory reagents to therapeutic drugs. From the initial screening process involving small quantities of hundreds or thousands of mAbs, through in vitro characterization, to subsequent in vivo experiments requiring large quantities of only a few, having a robust system for generating mAbs from cloning through stable cell line generation is essential. A protocol was developed to decrease the time, cost, and effort required by traditional cloning and expression methods by eliminating bottlenecks in these processes. Removing the clonal selection steps from the cloning process using a highly efficient ligation-independent protocol, and from the stable cell line process by utilizing bicistronic plasmids to generate stable semi-clonal cell pools, facilitated an increased throughput of the entire process from plasmid assembly through transient transfections and selection of stable semi-clonal cell pools. Furthermore, the time required by a single individual to clone, express, and select stable cell pools in a high-throughput format was reduced from 4 to 6 months to only 4 to 6 weeks.
Automated recycling of chemistry for virtual screening and library design.
Vainio, Mikko J; Kogej, Thierry; Raubacher, Florian
2012-07-23
An early stage drug discovery project needs to identify a number of chemically diverse and attractive compounds. These hit compounds are typically found through high-throughput screening campaigns. The diversity of the chemical libraries used in screening is therefore important. In this study, we describe a virtual high-throughput screening system called Virtual Library. The system automatically "recycles" validated synthetic protocols and available starting materials to generate a large number of virtual compound libraries, and allows for fast searches in the generated libraries using a 2D fingerprint based screening method. Virtual Library links the returned virtual hit compounds back to experimental protocols to quickly assess the synthetic accessibility of the hits. The system can be used as an idea generator for library design to enrich the screening collection and to explore the structure-activity landscape around a specific active compound.
'PACLIMS': a component LIM system for high-throughput functional genomic analysis.
Donofrio, Nicole; Rajagopalon, Ravi; Brown, Douglas; Diener, Stephen; Windham, Donald; Nolin, Shelly; Floyd, Anna; Mitchell, Thomas; Galadima, Natalia; Tucker, Sara; Orbach, Marc J; Patel, Gayatri; Farman, Mark; Pampanwar, Vishal; Soderlund, Cari; Lee, Yong-Hwan; Dean, Ralph A
2005-04-12
Recent advances in sequencing techniques leading to cost reduction have resulted in the generation of a growing number of sequenced eukaryotic genomes. Computational tools greatly assist in defining open reading frames and assigning tentative annotations. However, gene functions cannot be asserted without biological support through, among other things, mutational analysis. In taking a genome-wide approach to functionally annotate an entire organism (in this application, the approximately 11,000 predicted genes in the rice blast fungus Magnaporthe grisea), an effective platform for tracking and storing both the biological materials created and the data produced across several participating institutions was required. The platform designed, named PACLIMS, was built to support our high-throughput pipeline for generating 50,000 random insertion mutants of Magnaporthe grisea. To be a useful tool for materials and data tracking and storage, PACLIMS was designed to be simple to use, modifiable to accommodate refinement of research protocols, and cost-efficient. Data entry into PACLIMS was simplified through the use of barcodes and scanners, thus reducing the potential human error, time constraints, and labor. This platform was designed in concert with our experimental protocol so that it leads the researchers through each step of the process from mutant generation through phenotypic assays, thus ensuring that every mutant produced is handled in an identical manner and all necessary data is captured. Many sequenced eukaryotes have reached the point where computational analyses are no longer sufficient and require biological support for their predicted genes. Consequently, there is an increasing need for platforms that support high-throughput genome-wide mutational analyses. While PACLIMS was designed specifically for this project, the source and ideas present in its implementation can be used as a model for other high-throughput mutational endeavors.
PMID:15826298
REDItools: high-throughput RNA editing detection made easy.
Picardi, Ernesto; Pesole, Graziano
2013-07-15
The reliable detection of RNA editing sites from massive sequencing data remains challenging and, although several methodologies have been proposed, no computational tools have been released to date. Here, we introduce REDItools, a suite of Python scripts to perform high-throughput investigation of RNA editing using next-generation sequencing data. REDItools are written in Python and freely available at http://code.google.com/p/reditools/. Contact: ernesto.picardi@uniba.it or graziano.pesole@uniba.it. Supplementary data are available at Bioinformatics online.
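The core decision such a tool automates can be illustrated with a toy filter: given per-position base counts from aligned RNA-seq reads, report reference-A positions whose G-read fraction clears coverage and frequency thresholds. This is a hedged sketch of the general idea only; the function name, thresholds, and pileup values are invented for illustration and do not reflect REDItools' actual interface or algorithm.

```python
# Toy illustration (NOT the REDItools algorithm): nominate candidate
# A-to-G RNA-editing sites from per-position base counts, requiring a
# minimum read coverage and a minimum editing frequency.

def candidate_editing_sites(positions, min_cov=10, min_freq=0.1):
    """positions: iterable of (position, ref_base, {base: count})."""
    hits = []
    for pos, ref, counts in positions:
        cov = sum(counts.values())
        if ref != "A" or cov < min_cov:
            continue  # only adequately covered reference-A sites qualify
        freq = counts.get("G", 0) / cov
        if freq >= min_freq:
            hits.append((pos, round(freq, 3)))
    return hits

# Invented pileup counts for illustration.
pileup = [
    (101, "A", {"A": 18, "G": 2}),  # 10% G reads: at threshold
    (102, "A", {"A": 5, "G": 1}),   # coverage too low
    (103, "C", {"C": 30}),          # not a reference-A site
    (104, "A", {"A": 12, "G": 8}),  # 40% G reads: strong candidate
]
print(candidate_editing_sites(pileup))
```

Production tools additionally handle strand inference, quality filtering and known-SNP exclusion, which this sketch omits.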
High-Throughput Sequencing: A Roadmap Toward Community Ecology
Poisot, Timothée; Péquin, Bérangère; Gravel, Dominique
2013-01-01
High-throughput sequencing is becoming increasingly important in microbial ecology, yet it is surprisingly under-used to generate or test biogeographic hypotheses. In this contribution, we highlight how adding these methods to the ecologist toolbox will allow the detection of new patterns, and will help our understanding of the structure and dynamics of diversity. Starting with a review of ecological questions that can be addressed, we move on to the technical and analytical issues that will benefit from an increased collaboration between different disciplines. PMID:23610649
Microfluidic guillotine for single-cell wound repair studies
NASA Astrophysics Data System (ADS)
Blauch, Lucas R.; Gai, Ya; Khor, Jian Wei; Sood, Pranidhi; Marshall, Wallace F.; Tang, Sindy K. Y.
2017-07-01
Wound repair is a key feature distinguishing living from nonliving matter. Single cells are increasingly recognized to be capable of healing wounds. The lack of reproducible, high-throughput wounding methods has hindered single-cell wound repair studies. This work describes a microfluidic guillotine for bisecting single Stentor coeruleus cells in a continuous-flow manner. Stentor is used as a model due to its robust repair capacity and the ability to perform gene knockdown in a high-throughput manner. Local cutting dynamics reveals two regimes under which cells are bisected, one at low viscous stress where cells are cut with small membrane ruptures and high viability and one at high viscous stress where cells are cut with extended membrane ruptures and decreased viability. A cutting throughput up to 64 cells per minute—more than 200 times faster than current methods—is achieved. The method allows the generation of more than 100 cells in a synchronized stage of their repair process. This capacity, combined with high-throughput gene knockdown in Stentor, enables time-course mechanistic studies impossible with current wounding methods.
Microreactor Cells for High-Throughput X-ray Absorption Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beesley, Angela; Tsapatsaris, Nikolaos; Weiher, Norbert
2007-01-19
High-throughput experimentation has been applied to X-ray Absorption spectroscopy as a novel route for increasing research productivity in the catalysis community. Suitable instrumentation has been developed for the rapid determination of the local structure in the metal component of precursors for supported catalysts. An automated analytical workflow was implemented that is much faster than traditional individual spectrum analysis. It allows the generation of structural data in quasi-real time. We describe initial results obtained from the automated high throughput (HT) data reduction and analysis of a sample library implemented through the 96 well-plate industrial standard. The results show that a fully automated HT-XAS technology based on existing industry standards is feasible and useful for the rapid elucidation of geometric and electronic structure of materials.
Photon-Counting H33D Detector for Biological Fluorescence Imaging
Michalet, X.; Siegmund, O.H.W.; Vallerga, J.V.; Jelinsky, P.; Millaud, J.E.; Weiss, S.
2010-01-01
We have developed a photon-counting High-temporal and High-spatial resolution, High-throughput 3-Dimensional detector (H33D) for biological imaging of fluorescent samples. The design is based on a 25 mm diameter S20 photocathode followed by a 3-microchannel plate stack, and a cross delay line anode. We describe the bench performance of the H33D detector, as well as preliminary imaging results obtained with fluorescent beads, quantum dots and live cells and discuss applications of future generation detectors for single-molecule imaging and high-throughput study of biomolecular interactions. PMID:20151021
Protocols and programs for high-throughput growth and aging phenotyping in yeast.
Jung, Paul P; Christian, Nils; Kay, Daniel P; Skupin, Alexander; Linster, Carole L
2015-01-01
In microorganisms, and more particularly in yeasts, a standard phenotyping approach consists in the analysis of fitness by growth rate determination in different conditions. One growth assay that combines high throughput with high resolution involves the generation of growth curves from 96-well plate microcultivations in thermostated and shaking plate readers. To push the throughput of this method to the next level, we have adapted it in this study to the use of 384-well plates. The values of the extracted growth parameters (lag time, doubling time and yield of biomass) correlated well between experiments carried out in 384-well plates as compared to 96-well plates or batch cultures, validating the higher-throughput approach for phenotypic screens. The method is not restricted to the use of the budding yeast Saccharomyces cerevisiae, as shown by consistent results for other species selected from the Hemiascomycete class. Furthermore, we used the 384-well plate microcultivations to develop and validate a higher-throughput assay for yeast Chronological Life Span (CLS), a parameter that is still commonly determined by a cumbersome method based on counting "Colony Forming Units". To accelerate analysis of the large datasets generated by the described growth and aging assays, we developed the freely available software tools GATHODE and CATHODE. These tools allow for semi-automatic determination of growth parameters and CLS behavior from typical plate reader output files. The described protocols and programs will increase the time- and cost-efficiency of a number of yeast-based systems genetics experiments as well as various types of screens.
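As a rough illustration of the growth parameters named above (lag time, doubling time and yield of biomass), the following sketch extracts them from a plate-reader OD time series. It is a minimal, assumption-laden example (simple threshold-based lag detection, least-squares fit of log2 OD over the middle third of the curve); it does not reproduce the fitting procedure implemented in GATHODE, and the function name and synthetic data are invented.

```python
# Hedged sketch: extract lag time, doubling time and biomass yield from
# a growth curve. Threshold and window choices are illustrative only.
import math

def growth_parameters(times, ods, lag_factor=1.5):
    """times in hours; ods = optical density readings (same length)."""
    od0 = ods[0]
    # Lag time: first time point where OD exceeds lag_factor * initial OD.
    lag = next((t for t, od in zip(times, ods) if od >= lag_factor * od0),
               times[-1])
    # Doubling time: least-squares slope of log2(OD) vs time over the
    # middle third of the curve, taken here as the exponential window.
    n = len(times)
    window = slice(n // 3, 2 * n // 3)
    xs = times[window]
    ys = [math.log2(od) for od in ods[window]]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return {"lag_h": lag, "doubling_h": 1.0 / slope, "yield": max(ods)}

# Synthetic culture: OD doubles every 2 h after a 2 h lag, saturates at 1.0.
times = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
ods = [min(1.0, 0.05 * 2 ** (max(0, t - 2) / 2.0)) for t in times]
print(growth_parameters(times, ods))
```

On this synthetic curve the fitted doubling time recovers the 2 h used to generate the data; real plate-reader curves would need smoothing and automatic selection of the exponential window.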
NASA Astrophysics Data System (ADS)
Mughal, A.; Newman, H.
2017-10-01
We review and demonstrate the design of efficient data transfer nodes (DTNs), from the perspective of the highest throughput over both local and wide area networks, as well as the highest performance per unit cost. A careful system-level design is required for the hardware, firmware, OS and software components. Furthermore, additional tuning of these components, and the identification and elimination of any remaining bottlenecks is needed once the system is assembled and commissioned, in order to obtain optimal performance. For high throughput data transfers, specialized software is used to overcome the traditional limits in performance caused by the OS, file system, file structures used, etc. Concretely, we will discuss and present the latest results using Fast Data Transfer (FDT), developed by Caltech. We present and discuss the design choices for three generations of Caltech DTNs. Their transfer capabilities range from 40 Gbps to 400 Gbps. Disk throughput is still the biggest challenge in the current generation of available hardware. However, new NVMe drives combined with RDMA and a new NVMe network fabric are expected to improve the overall data-transfer throughput and simultaneously reduce the CPU load on the end nodes.
High-throughput transformation of Saccharomyces cerevisiae using liquid handling robots.
Liu, Guangbo; Lanham, Clayton; Buchan, J Ross; Kaplan, Matthew E
2017-01-01
Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium Acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful and long established method. However, a reliable and optimized high throughput transformation protocol that runs almost no risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most liquid handling high-throughput robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high throughput molecular and/or genetic analysis of yeast.
Next generation platforms for high-throughput biodosimetry
Repin, Mikhail; Turner, Helen C.; Garty, Guy; Brenner, David J.
2014-01-01
Here we describe the general concept of the combined use of plates and tubes in racks compatible with the American National Standards Institute/Society for Laboratory Automation and Screening (ANSI/SLAS) microplate formats as next-generation platforms for increasing the throughput of biodosimetry assays. These platforms can be used at different stages of biodosimetry assays, starting from blood collection into microtubes organised in standardised racks and ending with the cytogenetic analysis of samples in standardised multiwell and multichannel plates. Robotically friendly platforms can be used for different biodosimetry assays in minimally equipped laboratories and on cost-effective automated universal biotech systems. PMID:24837249
3D pulsed laser-triggered high-speed microfluidic fluorescence-activated cell sorter
Chen, Yue; Wu, Ting-Hsiang; Kung, Yu-Chun; Teitell, Michael A.; Chiou, Pei-Yu
2014-01-01
We report a 3D microfluidic pulsed laser-triggered fluorescence-activated cell sorter capable of sorting at a throughput of 23,000 cells sec⁻¹ with 90% purity in high-purity mode and at a throughput of 45,000 cells sec⁻¹ with 45% purity in enrichment mode in one stage and in a single channel. This performance is realized by exciting laser-induced cavitation bubbles in a 3D PDMS microfluidic channel to generate high-speed liquid jets that deflect detected fluorescent cells and particles focused by 3D sheath flows. The ultrafast switching mechanism (20 μsec complete on-off cycle), small liquid jet perturbation volume, and three-dimensional sheath flow focusing for accurate timing control of fast (1.5 m sec⁻¹) passing cells and particles are three critical factors enabling high-purity sorting at high-throughput in this sorter. PMID:23844418
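One way to appreciate the purity/throughput trade-off reported above is a generic Poisson coincidence estimate: if cells arrive at random at rate r and each jet actuation perturbs a time window w (here taken as the 20 μs on-off cycle), the fraction of sort events unaccompanied by a bystander cell is exp(-r·w). This is a back-of-envelope model of our own, not the authors' analysis; the actual purity figures depend on sorter-specific factors the model ignores.

```python
# Generic Poisson coincidence estimate for any flow sorter: assuming
# random (Poisson) cell arrivals, the probability that no second cell
# falls inside the actuation window is exp(-rate * window).
import math

def coincidence_free_fraction(rate_hz, window_s):
    return math.exp(-rate_hz * window_s)

for rate in (23_000, 45_000):
    f = coincidence_free_fraction(rate, 20e-6)  # 20 us actuation window
    print(f"{rate} cells/s -> {f:.1%} of sort events coincidence-free")
```

At the enrichment-mode rate of 45,000 cells sec⁻¹ the estimate is close to the reported 45% purity, which is consistent with coincidences being the dominant purity cost at that throughput.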
NASA Technical Reports Server (NTRS)
Lee, Jonathan A.
2005-01-01
High-throughput measurement techniques are reviewed for solid phase transformation in materials produced by combinatorial methods, which are highly efficient approaches for fabricating a large variety of material libraries with different compositional gradients on a single wafer. Combinatorial methods hold high potential for reducing the time and costs associated with the development of new materials, as compared to time-consuming and labor-intensive conventional methods that test large batches of material, one composition at a time. These high-throughput techniques can be automated to rapidly capture and analyze data, using the entire material library on a single wafer, thereby accelerating the pace of materials discovery and knowledge generation for solid phase transformations. The review covers experimental techniques that are applicable to inorganic materials such as shape memory alloys, graded materials, metal hydrides, ferric materials, semiconductors and industrial alloys.
Kennedy, Richard; Pankratz, V. Shane; Swanson, Eric; Watson, David; Golding, Hana; Poland, Gregory A.
2009-01-01
Because of the bioterrorism threat posed by agents such as variola virus, considerable time, resources, and effort have been devoted to biodefense preparation. One avenue of this research has been the development of rapid, sensitive, high-throughput assays to validate immune responses to poxviruses. Here we describe the adaptation of a β-galactosidase reporter-based vaccinia virus neutralization assay to large-scale use in a study that included over 1,000 subjects. We also describe the statistical methods involved in analyzing the large quantity of data generated. The assay and its associated methods should prove useful tools in monitoring immune responses to next-generation smallpox vaccines, studying poxvirus immunity, and evaluating therapeutic agents such as vaccinia virus immune globulin. PMID:19535540
Integrative Systems Biology for Data Driven Knowledge Discovery
Greene, Casey S.; Troyanskaya, Olga G.
2015-01-01
Integrative systems biology is an approach that brings together diverse high throughput experiments and databases to gain new insights into biological processes or systems at molecular through physiological levels. These approaches rely on diverse high-throughput experimental techniques that generate heterogeneous data by assaying varying aspects of complex biological processes. Computational approaches are necessary to provide an integrative view of these experimental results and enable data-driven knowledge discovery. Hypotheses generated from these approaches can direct definitive molecular experiments in a cost effective manner. Using integrative systems biology approaches, we can leverage existing biological knowledge and large-scale data to improve our understanding of yet unknown components of a system of interest and how its malfunction leads to disease. PMID:21044756
Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E
2007-12-01
We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
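The approach evaluated above can be sketched in miniature: score every metabolite pair by mutual information, then apply ARACNE's data-processing-inequality step, which removes the weakest edge of each fully connected triangle. The crude equal-width binning, the metabolite names, and the profile values below are simplifications for illustration; real ARACNE uses kernel-based MI estimation and significance thresholds.

```python
# Miniature sketch of the ARACNE idea applied to metabolite profiles:
# pairwise mutual information (equal-width binning) followed by
# data-processing-inequality (DPI) pruning of triangle edges.
import math
from itertools import combinations

def mutual_info(x, y, bins=3):
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0  # guard against constant profiles
        return [min(int((u - lo) / w), bins - 1) for u in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pxy = sum(1 for a, b in zip(bx, by) if (a, b) == (i, j)) / n
            px, py = bx.count(i) / n, by.count(j) / n
            if pxy > 0:
                mi += pxy * math.log(pxy / (px * py))
    return mi

def aracne_edges(profiles):
    names = list(profiles)
    mi = {frozenset(p): mutual_info(profiles[p[0]], profiles[p[1]])
          for p in combinations(names, 2)}
    edges = set(mi)
    for a, b, c in combinations(names, 3):
        tri = [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]
        edges.discard(min(tri, key=lambda e: mi[e]))  # DPI pruning
    return sorted(tuple(sorted(e)) for e in edges)

# Three hypothetical metabolite profiles across six samples.
print(aracne_edges({"m1": [1, 2, 3, 4, 5, 6],
                    "m2": [1, 2, 3, 4, 6, 5],
                    "m3": [2, 1, 3, 4, 5, 6]}))
```

With three variables the single triangle always loses exactly one edge, leaving two; on realistic data the pruning is applied to every triangle in the MI graph, which is what makes indirect metabolic correlations drop out.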
Schönberg, Anna; Theunert, Christoph; Li, Mingkun; Stoneking, Mark; Nasidze, Ivan
2011-09-01
To investigate the demographic history of human populations from the Caucasus and surrounding regions, we used high-throughput sequencing to generate 147 complete mtDNA genome sequences from random samples of individuals from three groups from the Caucasus (Armenians, Azeri and Georgians), and one group each from Iran and Turkey. Overall diversity is very high, with 144 different sequences that fall into 97 different haplogroups found among the 147 individuals. Bayesian skyline plots (BSPs) of population size change through time show a population expansion around 40-50 kya, followed by a constant population size, and then another expansion around 15-18 kya for the groups from the Caucasus and Iran. The BSP for Turkey differs the most from the others, with an increase from 35 to 50 kya followed by a prolonged period of constant population size, and no indication of a second period of growth. An approximate Bayesian computation approach was used to estimate divergence times between each pair of populations; the oldest divergence times were between Turkey and the other four groups from the South Caucasus and Iran (~400-600 generations), while the divergence time of the three Caucasus groups from each other was comparable to their divergence time from Iran (average of ~360 generations). These results illustrate the value of random sampling of complete mtDNA genome sequences that can be obtained with high-throughput sequencing platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davenport, Karen
Karen Davenport of Los Alamos National Laboratory discusses a high-throughput next generation genome finishing pipeline on June 3, 2010 at the "Sequencing, Finishing, Analysis in the Future" meeting in Santa Fe, NM.
The promise and challenge of high-throughput sequencing of the antibody repertoire
Georgiou, George; Ippolito, Gregory C; Beausang, John; Busse, Christian E; Wardemann, Hedda; Quake, Stephen R
2014-01-01
Efforts to determine the antibody repertoire encoded by B cells in the blood or lymphoid organs using high-throughput DNA sequencing technologies have been advancing at an extremely rapid pace and are transforming our understanding of humoral immune responses. Information gained from high-throughput DNA sequencing of immunoglobulin genes (Ig-seq) can be applied to detect B-cell malignancies with high sensitivity, to discover antibodies specific for antigens of interest, to guide vaccine development and to understand autoimmunity. Rapid progress in the development of experimental protocols and informatics analysis tools is helping to reduce sequencing artifacts, to achieve more precise quantification of clonal diversity and to extract the most pertinent biological information. That said, broader application of Ig-seq, especially in clinical settings, will require the development of a standardized experimental design framework that will enable the sharing and meta-analysis of sequencing data generated by different laboratories. PMID:24441474
Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu
2013-08-01
High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge to molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for a high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing by NIG supercomputers currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components in the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads of the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline through the input of only an accession number. This proposed pipeline will facilitate research by utilizing unified analytical workflows applied to the NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.
PMID:23657089
Putt, Karson S.; Pugh, Randall B.
2013-01-01
Peracetic acid is gaining use in numerous industries that have found a myriad of applications for its antimicrobial activity. However, rapid, high-throughput quantitation methods for peracetic acid and hydrogen peroxide are lacking. Herein, we describe the development of a high-throughput, microtiter-plate-based assay built upon well-known and trusted titration chemistries. The adaptation of these titration chemistries to rapid, plate-based absorbance methods for the sequential determination of hydrogen peroxide specifically and of the total amount of peroxides present in solution is described. The results of these methods were compared with those of a standard titration and found to be in good agreement. Additionally, the utility of the developed method is demonstrated through the generation of degradation curves of both peracetic acid and hydrogen peroxide in a mixed solution. PMID:24260173
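A plate-based absorbance readout like the one described ultimately reduces to standard-curve arithmetic. As a hedged illustration (not the authors' actual chemistry or code; the linear response, concentrations, and absorbance values below are invented), a minimal Python sketch of converting well absorbances to peroxide concentrations:

```python
# Hypothetical sketch: quantify peroxide from plate-reader absorbance using a
# linear standard curve (Beer-Lambert region). All values are illustrative.

def fit_standard_curve(concentrations, absorbances):
    """Least-squares fit of A = m*C + b; returns (m, b)."""
    n = len(concentrations)
    mean_c = sum(concentrations) / n
    mean_a = sum(absorbances) / n
    num = sum((c - mean_c) * (a - mean_a)
              for c, a in zip(concentrations, absorbances))
    den = sum((c - mean_c) ** 2 for c in concentrations)
    m = num / den
    b = mean_a - m * mean_c
    return m, b

def absorbance_to_concentration(absorbance, m, b):
    """Invert the standard curve: C = (A - b) / m."""
    return (absorbance - b) / m

# Standards: 0-100 uM H2O2 giving a (perfectly) linear absorbance response.
standards_c = [0, 25, 50, 75, 100]            # uM
standards_a = [0.05, 0.30, 0.55, 0.80, 1.05]  # absorbance units
m, b = fit_standard_curve(standards_c, standards_a)
unknown = absorbance_to_concentration(0.55, m, b)  # -> 50.0 uM
```

The same two functions would be mapped over every well of the plate to recover a full degradation curve over time.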
Ion channel drug discovery and research: the automated Nano-Patch-Clamp technology.
Brueggemann, A; George, M; Klau, M; Beckler, M; Steindl, J; Behrends, J C; Fertig, N
2004-01-01
Unlike the genomics revolution, which was largely enabled by a single technological advance (high-throughput sequencing), rapid advancement in proteomics will require a broader effort to increase the throughput of a number of key tools for the functional analysis of different types of proteins. In the case of ion channels, a class of membrane proteins of great physiological importance and potential as drug targets, the lack of adequate assay technologies is felt particularly strongly. The available, indirect, high-throughput screening methods for ion channels clearly generate insufficient information. The best technology to study ion channel function and screen for compound interaction is the patch clamp technique, but patch clamping suffers from low throughput, which is not acceptable for drug screening. A first step towards a solution is presented here. The nano-patch-clamp technology, which is based on a planar, microstructured glass chip, enables automatic whole-cell patch clamp measurements. The Port-a-Patch is an automated electrophysiology workstation that uses planar patch clamp chips. This approach enables high-quality and high-content ion channel and compound evaluation on a one-cell-at-a-time basis. The presented automation of the patch process and its scalability to an array format are the prerequisites for any higher-throughput electrophysiology instruments.
ToxCast Data Generation: Chemical Workflow
This page describes the process EPA follows to select chemicals, procure chemicals, register chemicals, conduct a quality review of the chemicals, and prepare the chemicals for high-throughput screening.
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.
Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong
2012-01-25
Multiplexing has become a major limitation of next-generation sequencing (NGS) when applied to low-complexity samples. Physical space segregation allows only limited multiplexing, while the existing barcode approach permits simultaneous analysis of at most several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously analyzed through the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated, and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, distinct miRNA expression patterns were captured from the two clinical groups. The strong correlation across different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be analyzed in parallel, so that high-throughput sequencing economically meets the needs of samples with low sequencing-throughput demands. PMID:22276739
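The 4 x 8 pair-barcode scheme can be sketched in code. This is an illustrative reconstruction, not the authors' pipeline: the barcode sequences are invented, and a real demultiplexer would also tolerate sequencing errors rather than require exact matches:

```python
# Illustrative pair-barcode demultiplexing: 4 forward x 8 reverse barcodes
# address 4 * 8 = 32 libraries, as in the PBS pilot runs.

from itertools import product

FORWARD = ["ACGT", "TGCA", "GATC", "CTAG"]             # 4 forward barcodes
REVERSE = ["AACC", "GGTT", "ACAC", "TGTG",
           "CAGT", "GTCA", "TCGA", "AGCT"]             # 8 reverse barcodes

# Each (forward, reverse) pair maps to one library ID.
PAIR_TO_LIB = {pair: i for i, pair in enumerate(product(FORWARD, REVERSE))}

def assign_read(read):
    """Return the library ID for a read, or None if either barcode is unknown."""
    fwd, rev = read[:4], read[-4:]
    return PAIR_TO_LIB.get((fwd, rev))

lib = assign_read("ACGT" + "TTTTINSERTTTTT" + "GGTT")  # forward 0, reverse 1
assert lib == 1  # (FORWARD[0], REVERSE[1]) is the second pair enumerated
```

Reads whose barcode pair cannot be resolved fall out as `None`, which matches the abstract's observation that only a fraction of reads (about 64% in the pilot runs) are assigned to both barcodes.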
Dreyer, Florian S; Cantone, Martina; Eberhardt, Martin; Jaitly, Tanushree; Walter, Lisa; Wittmann, Jürgen; Gupta, Shailendra K; Khan, Faiz M; Wolkenhauer, Olaf; Pützer, Brigitte M; Jäck, Hans-Martin; Heinzerling, Lucie; Vera, Julio
2018-06-01
Cellular phenotypes are established and controlled by complex and precisely orchestrated molecular networks. In cancer, mutations and dysregulations of multiple molecular factors perturb the regulation of these networks and lead to malignant transformation. High-throughput technologies are a valuable source of information to establish the complex molecular relationships behind the emergence of malignancy, but full exploitation of this massive amount of data requires bioinformatics tools that rely on network-based analyses. In this report we present the Virtual Melanoma Cell, an online tool developed to facilitate the mining and interpretation of high-throughput data on melanoma by biomedical researchers. The platform is based on a comprehensive, manually generated and expert-validated regulatory map composed of signaling pathways important in malignant melanoma. The Virtual Melanoma Cell is a tool designed to accept, visualize and analyze user-generated datasets. It is available at: https://www.vcells.net/melanoma. To illustrate the utilization of the web platform and the regulatory map, we have analyzed a large publicly available dataset accounting for anti-PD1 immunotherapy treatment of malignant melanoma patients. Copyright © 2018 Elsevier B.V. All rights reserved.
Dotsey, Emmanuel Y.; Gorlani, Andrea; Ingale, Sampat; Achenbach, Chad J.; Forthal, Donald N.; Felgner, Philip L.; Gach, Johannes S.
2015-01-01
In recent years, high-throughput discovery of human recombinant monoclonal antibodies (mAbs) has been applied to greatly advance our understanding of the specificity and functional activity of antibodies against HIV. Thousands of antibodies have been generated and screened in functional neutralization assays, and antibodies associated with cross-strain neutralization and passive protection in primates have been identified. To facilitate this type of discovery, a high-throughput screening tool is needed to accurately classify mAbs and their antigen targets. In this study, we analyzed and evaluated a prototype microarray chip composed of the HIV-1 recombinant proteins gp140, gp120, gp41, and several membrane proximal external region peptides. The protein microarray analysis of 11 HIV-1 envelope-specific mAbs revealed diverse binding affinities and specificities across clades. Half-maximal effective concentrations generated by our chip analysis correlated significantly (P<0.0001) with concentrations from ELISA binding measurements. Polyclonal immune responses in plasma samples from HIV-1 infected subjects exhibited different binding patterns and reactivity against printed proteins. Examining the totality of the specificity of the humoral response in this way reveals the exquisite diversity and specificity of the humoral response to HIV. PMID:25938510
Zhang, Bing; Schmoyer, Denise; Kirov, Stefan; Snoddy, Jay
2004-01-01
Background Microarray and other high-throughput technologies are producing large sets of interesting genes that are difficult to analyze directly. Bioinformatics tools are needed to interpret the functional information in the gene sets. Results We have created a web-based tool for data analysis and data visualization for sets of genes called GOTree Machine (GOTM). This tool was originally intended to analyze sets of co-regulated genes identified from microarray analysis but is adaptable for use with other gene sets from other high-throughput analyses. GOTree Machine generates a GOTree, a tree-like structure for navigating the Gene Ontology Directed Acyclic Graph for input gene sets. This system provides user-friendly data navigation and visualization. Statistical analysis helps users to identify the most important Gene Ontology categories for the input gene sets and suggests biological areas that warrant further study. GOTree Machine is available online. Conclusion GOTree Machine has broad application in functional genomics, proteomics and other high-throughput methods that generate large sets of interesting genes; its primary purpose is to help users sort for interesting patterns in gene sets. PMID:14975175
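The enrichment statistic behind tools of this kind is commonly a hypergeometric tail probability. Assuming that model (the abstract does not name the exact test GOTM uses), a minimal sketch with invented numbers:

```python
# Sketch of a GO-category over-representation test: the probability of seeing
# at least k category members in an n-gene input set drawn from a genome of N
# genes of which K are annotated to the category. Numbers are illustrative.

from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) when drawing n genes from N, of which K are in the category."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Genome of 10,000 genes, 100 annotated to a category; our 50-gene input set
# contains 5 of them. Expected overlap by chance is only 50*100/10000 = 0.5.
p = hypergeom_enrichment_p(N=10_000, K=100, n=50, k=5)
assert p < 0.001  # strongly over-represented category
```

Ranking categories by this p-value (with multiple-testing correction in practice) yields the "most important Gene Ontology categories" the abstract refers to.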
Kebschull, Moritz; Fittler, Melanie Julia; Demmer, Ryan T; Papapanou, Panos N
2017-01-01
Today, -omics analyses, including the systematic cataloging of messenger RNA and microRNA sequences or DNA methylation patterns in a cell population, organ, or tissue sample, allow for an unbiased, comprehensive genome-level analysis of complex diseases, offering a large advantage over earlier "candidate" gene or pathway analyses. A primary goal in the analysis of these high-throughput assays is the detection of those features among several thousand that differ between different groups of samples. In the context of oral biology, our group has successfully utilized -omics technology to identify key molecules and pathways in different diagnostic entities of periodontal disease. A major issue when inferring biological information from high-throughput -omics studies is the fact that the sheer volume of high-dimensional data generated by contemporary technology is not appropriately analyzed using common statistical methods employed in the biomedical sciences. In this chapter, we outline a robust and well-accepted bioinformatics workflow for the initial analysis of -omics data generated using microarrays or next-generation sequencing technology using open-source tools. Starting with quality control measures and necessary preprocessing steps for data originating from different -omics technologies, we next outline a differential expression analysis pipeline that can be used for data from both microarray and sequencing experiments and offers the possibility to account for random or fixed effects. Finally, we present an overview of the possibilities for a functional analysis of the obtained data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderogba, S.; Meacham, J.M.; Degertekin, F.L.
2005-05-16
Ultrasonic electrospray ionization (ESI) for high-throughput mass spectrometry is demonstrated using a silicon micromachined microarray. The device uses a micromachined ultrasonic atomizer operating in the 900 kHz-2.5 MHz range for droplet generation and a metal electrode in the fluid cavity for ionization. Since the atomization and ionization processes are separated, the ultrasonic ESI source shows the potential for operation at low voltages with a wide range of solvents, in contrast with conventional capillary ESI technology. This is demonstrated by using the ultrasonic ESI microarray to obtain the mass spectrum of a 10 μM reserpine sample on a time-of-flight mass spectrometer with a 197:1 signal-to-noise ratio at an ionization potential of 200 V.
Optimization and high-throughput screening of antimicrobial peptides.
Blondelle, Sylvie E; Lohner, Karl
2010-01-01
While a well-established process for lead-compound discovery in for-profit companies, high-throughput screening is becoming more popular in basic and applied research settings in academia. The development of combinatorial libraries, combined with easy and less expensive access to new technologies, has greatly contributed to the implementation of high-throughput screening in academic laboratories. While such techniques were earlier applied to simple assays involving single targets or based on binding affinity, they have now been extended to more complex systems such as whole-cell-based assays. In particular, the urgent need for new antimicrobial compounds that would overcome the rapid rise of drug-resistant microorganisms, where multiple-target assays or cell-based assays are often required, has forced scientists to focus on high-throughput technologies. Based on their existence in natural host defense systems and their different mode of action relative to commercial antibiotics, antimicrobial peptides represent a new hope for discovering novel antibiotics against multi-resistant bacteria. The ease of generating peptide libraries in different formats has allowed a rapid adaptation of high-throughput assays to the search for novel antimicrobial peptides. Similarly, the availability nowadays of high-quantity and high-quality antimicrobial peptide data has permitted the development of predictive algorithms to facilitate the optimization process. This review summarizes the various library formats that lead to de novo antimicrobial peptide sequences, as well as the latest structural knowledge and optimization processes aimed at improving the peptides' selectivity.
Choi, Gihoon; Hassett, Daniel J; Choi, Seokheun
2015-06-21
There is a large global effort to improve microbial fuel cell (MFC) techniques and advance their translational potential toward practical, real-world applications. Significant boosts in MFC performance can be achieved with the development of new techniques in synthetic biology that can regulate microbial metabolic pathways or control their gene expression. For these new directions, a high-throughput and rapid screening tool for microbial biopower production is needed. In this work, a 48-well, paper-based sensing platform was developed for the high-throughput and rapid characterization of the electricity-producing capability of microbes. 48 spatially distinct wells of a sensor array were prepared by patterning 48 hydrophilic reservoirs on paper with hydrophobic wax boundaries. This paper-based platform exploited the ability of paper to quickly wick fluid and promoted bacterial attachment to the anode pads, resulting in instant current generation upon loading of the bacterial inoculum. We validated the utility of our MFC array by studying how strategic genetic modifications impacted the electrochemical activity of various Pseudomonas aeruginosa mutant strains. Within just 20 minutes, we successfully determined the electricity generation capacity of eight isogenic mutants of P. aeruginosa. These efforts demonstrate that our MFC array displays highly comparable performance characteristics and identifies genes in P. aeruginosa that can trigger a higher power density.
ac electroosmotic pumping induced by noncontact external electrodes.
Wang, Shau-Chun; Chen, Hsiao-Ping; Chang, Hsueh-Chia
2007-09-21
Electroosmotic (EO) pumps based on dc electroosmosis are plagued by bubble generation and other electrochemical reactions at the electrodes at voltages beyond 1 V for electrolytes. These disadvantages limit their throughput and offset their portability advantage over mechanical syringe or pneumatic pumps. ac electroosmotic pumps operating at high frequency (>100 kHz) circumvent the bubble problem by inducing polarization and slip velocity on embedded electrodes, but they require complex electrode designs to produce a net flow. We report a new high-throughput ac EO pump design based on induced polarization over the entire channel surface instead of just the electrodes. Like dc EO pumps, our pump electrodes are outside of the load section and form a cm-long pump unit consisting of three circular reservoirs (3 mm in diameter) connected by a 1x1 mm channel. The field-induced polarization can produce an effective zeta potential exceeding 1 V and an ac slip velocity estimated at 1 mm/s or higher, both one order of magnitude higher than in earlier dc and ac pumps, giving rise to a maximum throughput of 1 μl/s. Polarization over the entire channel surface, quadratic scaling with respect to the field, and high voltage at high frequency without electrode bubble generation are the reasons why the current pump is superior to earlier dc and ac EO pumps.
Analysis of Ingredient Lists to Quantitatively Characterize Chemicals in Consumer Products
The EPA’s ExpoCast program is developing high throughput (HT) approaches to generate the needed exposure estimates to compare against HT bioactivity data generated from the US inter-agency Tox21 and the US EPA ToxCast programs. Assessing such exposures for the thousands of...
USDA-ARS's Scientific Manuscript database
Next-generation sequencing (NGS) technologies are revolutionizing both medical and biological research through generation of massive SNP data sets for identifying heritable genome variation underlying key traits, from rare human diseases to important agronomic phenotypes in crop species. We evaluate...
USDA-ARS's Scientific Manuscript database
Next-generation sequencing technologies are able to produce high-throughput short sequence reads in a cost-effective fashion. The emergence of these technologies has not only facilitated genome sequencing but also changed the landscape of life sciences. Here I survey their major applications ranging...
Next generation sequencers: methods and applications in food-borne pathogens
USDA-ARS's Scientific Manuscript database
Next generation sequencers are able to produce millions of short sequence reads in a high-throughput, low-cost way. The emergence of these technologies has not only facilitated genome sequencing but also started to change the landscape of life sciences. This chapter will survey their methods and app...
Xu, Chun-Xiu; Yin, Xue-Feng
2011-02-04
A chip-based microfluidic system for high-throughput single-cell analysis is described. The system was integrated with continuous introduction of individual cells, rapid dynamic lysis, capillary electrophoretic (CE) separation and laser-induced fluorescence (LIF) detection. A cross microfluidic chip with one sheath-flow channel located on each side of the sampling channel was designed. The labeled cells were hydrodynamically focused by sheath-flow streams and sequentially introduced into the cross section of the microchip under hydrostatic pressure generated by adjusting liquid levels in the reservoirs. Combined with the electric field applied on the separation channel, the aligned cells were driven into the separation channel and rapidly lysed within 33 ms at the entry of the separation channel by Triton X-100 added in the sheath-flow solution. The maximum rate for introducing individual cells into the separation channel was about 150 cells/min. The introduction of sheath-flow streams also significantly reduced the concentration of phosphate-buffered saline (PBS) injected into the separation channel along with single cells, thus reducing Joule heating during electrophoretic separation. The performance of this microfluidic system was evaluated by analysis of reduced glutathione (GSH) and reactive oxygen species (ROS) in single erythrocytes. A throughput of 38 cells/min was obtained. The proposed method is simple and robust for high-throughput single-cell analysis, allowing for analysis of cell populations of considerable size to generate results with statistical significance. Copyright © 2010 Elsevier B.V. All rights reserved.
Genecentric: a package to uncover graph-theoretic structure in high-throughput epistasis data.
Gallant, Andrew; Leiserson, Mark D M; Kachalov, Maxim; Cowen, Lenore J; Hescott, Benjamin J
2013-01-18
New technology has resulted in high-throughput screens for pairwise genetic interactions in yeast and other model organisms. For each pair in a collection of non-essential genes, an epistasis score is obtained, representing how much sicker (or healthier) the double-knockout organism will be compared to what would be expected from the sickness of the component single knockouts. Recent algorithmic work has identified graph-theoretic patterns in this data that can indicate functional modules, and even sets of genes that may occur in compensatory pathways, such as the BPM-type schema first introduced by Kelley and Ideker. However, to date, any algorithms for finding such patterns in the data were implemented internally, with no software being made publicly available. Genecentric is a new package that implements a parallelized version of the Leiserson et al. algorithm (J Comput Biol 18:1399-1409, 2011) for generating generalized BPMs from high-throughput genetic interaction data. Given a matrix of weighted epistasis values for a set of double knockouts, Genecentric returns a list of generalized BPMs that may represent compensatory pathways. Genecentric also has an extension, GenecentricGO, to query FuncAssociate (Bioinformatics 25:3043-3044, 2009) to retrieve GO enrichment statistics on generated BPMs. Python is the only dependency, and our web site provides working examples and documentation. We find that Genecentric can be used to find coherent functional and perhaps compensatory gene sets from high-throughput genetic interaction data. Genecentric is made freely available for download under the GPLv2 from http://bcb.cs.tufts.edu/genecentric.
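The epistasis scores such screens feed into tools like Genecentric are often defined against a multiplicative fitness model, where the score for a gene pair is the deviation of the double-knockout fitness from the product of the single-knockout fitnesses. Assuming that convention (the underlying screen may score differently), a small sketch with invented fitness values:

```python
# Sketch of multiplicative-model epistasis scoring:
#   eps = W_ab - W_a * W_b
# Negative eps: double knockout is sicker than expected (aggravating /
# synthetic sick); positive eps: healthier than expected (alleviating).
# All fitness values below are invented for illustration.

single = {"geneA": 0.9, "geneB": 0.8, "geneC": 1.0}  # single-knockout fitness
double = {("geneA", "geneB"): 0.5,                    # double-knockout fitness
          ("geneA", "geneC"): 0.9,
          ("geneB", "geneC"): 0.6}

def epistasis(pair):
    a, b = pair
    return double[pair] - single[a] * single[b]

scores = {pair: round(epistasis(pair), 3) for pair in double}
# geneA-geneB: 0.5 - 0.72 = -0.22  (aggravating)
# geneA-geneC: 0.9 - 0.90 =  0.0   (no interaction)
# geneB-geneC: 0.6 - 0.80 = -0.2   (aggravating)
```

A matrix of such scores over all gene pairs is exactly the input Genecentric expects when mining for generalized BPM structure.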
Miller, C.; Waddell, K.; Tang, N.
2010-01-01
RP-122 Peptide quantitation using Multiple Reaction Monitoring (MRM) has been established as an important methodology for biomarker verification and validation. This requires high throughput combined with high sensitivity to analyze potentially thousands of target peptides in each sample. Dynamic MRM allows the system to acquire only the required MRMs of each peptide during a retention window corresponding to when that peptide is eluting. This reduces the number of concurrent MRMs and therefore improves quantitation and sensitivity. MRM Selector allows the user to generate an MRM transition list with retention-time information from discovery data obtained on a QTOF MS system. This list can be directly imported into the triple quadrupole acquisition software. However, situations can exist where (a) the list contains more MRM transitions than allowable under the ideal acquisition conditions chosen (allowing for cycle time and chromatography conditions), or (b) too many transitions fall in a certain retention-time region, which would result in an unacceptably low dwell time and cycle time. A new tool, MRM Viewer, has been developed to help users automatically generate multiple dynamic MRM methods from a single MRM list. In this study, a list of 3293 MRM transitions from a human plasma sample was compiled. A single dynamic MRM method with 3293 transitions results in a minimum dwell time of 2.18 ms. Using MRM Viewer, we can generate three dynamic MRM methods with a minimum dwell time of 20 ms, which can give better-quality MRM quantitation. This tool facilitates both high throughput and high sensitivity for MRM quantitation.
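The dwell-time arithmetic motivating dynamic MRM and method splitting can be sketched as follows. Within a cycle, the dwell time available per transition is roughly the cycle time divided by the number of transitions whose retention windows are simultaneously open, so splitting a crowded list across methods raises the minimum dwell time. The retention windows and cycle time below are invented, not taken from the study:

```python
# Sketch: minimum dwell time for a dynamic MRM method is governed by the
# worst-case number of overlapping retention-time windows (sweep-line count).
# Windows (start, end) in minutes and cycle time are illustrative values.

def max_concurrency(windows):
    """Largest number of retention-time windows open at once."""
    events = []
    for start, end in windows:
        events.append((start, 1))   # window opens
        events.append((end, -1))    # window closes
    events.sort()                   # close before open at equal times
    best = cur = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best

def min_dwell_ms(windows, cycle_time_ms=500.0):
    return cycle_time_ms / max_concurrency(windows)

# 6 transitions; at most 3 windows overlap (around t = 10.5-11 min).
windows = [(2, 4), (3, 6), (9, 11), (10, 12), (10.5, 13), (14, 16)]
assert max_concurrency(windows) == 3
assert min_dwell_ms(windows) == 500.0 / 3
```

Partitioning the transition list so that no single method's worst-case concurrency is too high is precisely what raises the minimum dwell time from 2.18 ms to 20 ms in the study's three-method split.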
High-Throughput Next-Generation Sequencing of Polioviruses
Montmayeur, Anna M.; Schmidt, Alexander; Zhao, Kun; Magaña, Laura; Iber, Jane; Castro, Christina J.; Chen, Qi; Henderson, Elizabeth; Ramos, Edward; Shaw, Jing; Tatusov, Roman L.; Dybdahl-Sissoko, Naomi; Endegue-Zanga, Marie Claire; Adeniji, Johnson A.; Oberste, M. Steven; Burns, Cara C.
2016-01-01
The poliovirus (PV) is currently targeted for worldwide eradication and containment. Sanger-based sequencing of the viral protein 1 (VP1) capsid region is currently the standard method for PV surveillance. However, the whole-genome sequence is sometimes needed for higher-resolution global surveillance. In this study, we optimized whole-genome sequencing protocols for poliovirus isolates and FTA cards using next-generation sequencing (NGS), aiming for high sequence coverage, efficiency, and throughput. We found that DNase treatment of poliovirus RNA followed by random reverse transcription (RT), amplification, and the use of the Nextera XT DNA library preparation kit produced significantly better results than other preparations. The average viral reads per total reads, a measurement of efficiency, was as high as 84.2% ± 15.6%. PV genomes covering >99 to 100% of the reference length were obtained and validated with Sanger sequencing. A total of 52 PV genomes were generated, multiplexing as many as 64 samples in a single Illumina MiSeq run. This high-throughput, sequence-independent NGS approach facilitated the detection of a diverse range of PVs, especially those among vaccine-derived polioviruses (VDPV), circulating VDPV, or immunodeficiency-related VDPV. In contrast to results from previous studies on other viruses, our results showed that filtration and nuclease treatment did not discernibly increase the sequencing efficiency of PV isolates. However, DNase treatment after nucleic acid extraction to remove host DNA significantly improved the sequencing results. This NGS method has been successfully implemented to generate PV genomes for molecular epidemiology of the most recent PV isolates. Additionally, the ability to obtain full PV genomes from FTA cards will aid in facilitating global poliovirus surveillance. PMID:27927929
High-Throughput Screening of Na(V)1.7 Modulators Using a Giga-Seal Automated Patch Clamp Instrument.
Chambers, Chris; Witton, Ian; Adams, Cathryn; Marrington, Luke; Kammonen, Juha
2016-03-01
Voltage-gated sodium (Na(V)) channels have an essential role in the initiation and propagation of action potentials in excitable cells, such as neurons. Of these channels, Na(V)1.7 has been indicated as a key channel for pain sensation. While extensive efforts have gone into discovering novel Na(V)1.7 modulating compounds for the treatment of pain, none has reached the market yet. In the last two years, new compound screening technologies have been introduced, which may speed up the discovery of such compounds. The Sophion Qube(®) is a next-generation 384-well giga-seal automated patch clamp (APC) screening instrument, capable of testing thousands of compounds per day. By combining high-throughput screening and follow-up compound testing on the same APC platform, it should be possible to accelerate the hit-to-lead stage of ion channel drug discovery and help identify the most interesting compounds faster. Following a period of instrument beta-testing, a Na(V)1.7 high-throughput screen was run with two Pfizer plate-based compound subsets. In total, data were generated for 158,000 compounds at a median success rate of 83%, which can be considered high in APC screening. In parallel, IC50 assay validation and protocol optimization was completed with a set of reference compounds to understand how the IC50 potencies generated on the Qube correlate with data generated on the more established Sophion QPatch(®) APC platform. In summary, the results presented here demonstrate that the Qube provides a comparable but much faster approach to study Na(V)1.7 in a robust and reliable APC assay for compound screening.
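Reducing screening dose-response data to an IC50 can be done several ways; one simple, commonly used reduction (not necessarily what the Qube or QPatch software does) is log-linear interpolation between the two tested concentrations that bracket 50% inhibition. The dose-response values below are invented:

```python
# Sketch: estimate IC50 by interpolating, on a log-concentration axis,
# between the two tested doses that bracket 50% inhibition.

from math import log10

def ic50_interpolated(concs, inhibition):
    """concs ascending (same units); inhibition as fractions in [0, 1]."""
    points = list(zip(concs, inhibition))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(points, points[1:]):
        if i_lo <= 0.5 <= i_hi:
            frac = (0.5 - i_lo) / (i_hi - i_lo)
            return 10 ** (log10(c_lo) + frac * (log10(c_hi) - log10(c_lo)))
    raise ValueError("50% inhibition not bracketed by the tested range")

concs = [0.01, 0.1, 1.0, 10.0, 100.0]   # uM, ascending
inhib = [0.02, 0.10, 0.35, 0.65, 0.95]  # fraction of current blocked
ic50 = ic50_interpolated(concs, inhib)  # falls between 1 and 10 uM
assert 1.0 < ic50 < 10.0
```

A full Hill-equation fit would use all points rather than just the bracketing pair, but this interpolation is a robust first pass when ranking thousands of screening hits.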
Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B
2009-03-01
The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.
Combinatorial and high-throughput screening of materials libraries: review of state of the art.
Potyrailo, Radislav; Rajan, Krishna; Stoewe, Klaus; Takeuchi, Ichiro; Chisholm, Bret; Lam, Hubert
2011-11-14
Rational materials design based on prior knowledge is attractive because it promises to avoid time-consuming synthesis and testing of numerous materials candidates. However, as materials grow more complex, the ability to design them rationally becomes progressively limited. As a result of this complexity, combinatorial and high-throughput (CHT) experimentation in materials science has been recognized as a new scientific approach to generate new knowledge. This review demonstrates the broad applicability of CHT experimentation technologies in discovery and optimization of new materials. We discuss general principles of CHT materials screening, followed by detailed discussion of high-throughput materials characterization approaches, advances in data analysis/mining, and new materials developments facilitated by CHT experimentation. We critically analyze results of materials development in the areas most impacted by the CHT approaches, such as catalysis, electronic and functional materials, polymer-based industrial coatings, sensing materials, and biomaterials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, J., E-mail: jkat@lle.rochester.edu; Boni, R.; Rivlis, R.
A high-throughput, broadband optical spectrometer coupled to the Rochester optical streak system equipped with a Photonis P820 streak tube was designed to record time-resolved spectra with 1-ps time resolution. Spectral resolution of 0.8 nm is achieved over a wavelength coverage range of 480 to 580 nm, using a 300-groove/mm diffraction grating in conjunction with a pair of 225-mm-focal-length doublets operating at an f/2.9 aperture. Overall pulse-front tilt across the beam diameter generated by the diffraction grating is reduced by preferentially delaying discrete segments of the collimated input beam using a 34-element reflective echelon optic. The introduced delay temporally aligns the beam segments, and the net pulse-front tilt is limited to the accumulation across an individual sub-element. The resulting spectrometer design balances resolving power and pulse-front tilt while maintaining high throughput.
Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.
2015-01-01
Optical lithography, the enabling process for defining features, has been widely used in semiconductor industry and many other nanotechnology applications. Advances of nanotechnology require developments of high-throughput optical lithography capabilities to overcome the optical diffraction limit and meet the ever-decreasing device dimensions. We report our recent experimental advancements to scale up diffraction unlimited optical lithography in a massive scale using the near field nanolithography capabilities of bowtie apertures. A record number of near-field optical elements, an array of 1,024 bowtie antenna apertures, are simultaneously employed to generate a large number of patterns by carefully controlling their working distances over the entire array using an optical gap metrology system. Our experimental results reiterated the ability of using massively-parallel near-field devices to achieve high-throughput optical nanolithography, which can be promising for many important nanotechnology applications such as computation, data storage, communication, and energy. PMID:26525906
Development of New Sensing Materials Using Combinatorial and High-Throughput Experimentation
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Mirsky, Vladimir M.
New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical requirements in sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. Design of new sensing materials is an important cornerstone in the effort to develop new sensors. Often, sensing materials are too complex to predict their performance quantitatively at the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate the new data required to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote additional fruitful interactions between computational scientists and experimentalists.
A high-throughput exploration of magnetic materials by using structure predicting methods
NASA Astrophysics Data System (ADS)
Arapan, S.; Nieves, P.; Cuesta-López, S.
2018-02-01
We study the capability of a structure predicting method based on a genetic/evolutionary algorithm for high-throughput exploration of magnetic materials. We use the USPEX and VASP codes to predict stable structures and generate low-energy metastable structures for a set of representative magnetic structures comprising intermetallic alloys, oxides, interstitial compounds, and systems containing rare-earth elements, and for both ferromagnetic and antiferromagnetic ordering. We have modified the interface between the USPEX and VASP codes to improve the performance of structural optimization as well as to perform calculations in a high-throughput manner. We show that exploring the structure phase space with a structure predicting technique reveals large sets of low-energy metastable structures, which not only improve currently existing databases, but also may provide understanding and solutions to stabilize and synthesize magnetic materials suitable for permanent magnet applications.
High-throughput sequencing: a failure mode analysis.
Yang, George S; Stott, Jeffery M; Smailus, Duane; Barber, Sarah A; Balasundaram, Miruna; Marra, Marco A; Holt, Robert A
2005-01-04
Basic manufacturing principles are becoming increasingly important in high-throughput sequencing facilities where there is a constant drive to increase quality, increase efficiency, and decrease operating costs. While high-throughput centres report failure rates typically on the order of 10%, the causes of sporadic sequencing failures are seldom analyzed in detail and have not, in the past, been formally reported. Here we report the results of a failure mode analysis of our production sequencing facility based on detailed evaluation of 9,216 ESTs generated from two cDNA libraries. Two categories of failures are described; process-related failures (failures due to equipment or sample handling) and template-related failures (failures that are revealed by close inspection of electropherograms and are likely due to properties of the template DNA sequence itself). Preventative action based on a detailed understanding of failure modes is likely to improve the performance of other production sequencing pipelines.
Keenan, Martine; Alexander, Paul W; Chaplin, Jason H; Abbott, Michael J; Diao, Hugo; Wang, Zhisen; Best, Wayne M; Perez, Catherine J; Cornwall, Scott M J; Keatley, Sarah K; Thompson, R C Andrew; Charman, Susan A; White, Karen L; Ryan, Eileen; Chen, Gong; Ioset, Jean-Robert; von Geldern, Thomas W; Chatelain, Eric
2013-10-01
Inhibitors of Trypanosoma cruzi with novel mechanisms of action are urgently required to diversify the current clinical and preclinical pipelines. Increasing the number and diversity of hits available for assessment at the beginning of the discovery process will help to achieve this aim. We report the evaluation of multiple hits generated from a high-throughput screen to identify inhibitors of T. cruzi and from these studies the discovery of two novel series currently in lead optimization. Lead compounds from these series potently and selectively inhibit growth of T. cruzi in vitro and the most advanced compound is orally active in a subchronic mouse model of T. cruzi infection. High-throughput screening of novel compound collections has an important role to play in diversifying the trypanosomatid drug discovery portfolio. A new T. cruzi inhibitor series with good drug-like properties and promising in vivo efficacy has been identified through this process.
FPGA cluster for high-performance AO real-time control system
NASA Astrophysics Data System (ADS)
Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.
2006-06-01
Whilst the high throughput and low latency requirements for the next generation AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long term solution with high performance on throughput and excellent predictability on latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, need to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.
Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco
2016-02-09
Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher throughput devices for small scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliably measuring large sets of samples within short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye based assay was established using a liquid handling robot to provide reproducible high throughput quantification of lipids with minimized hands-on time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96 well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids, improving precision from ±8 to ±2 % on average. Implementation into an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on time to a third compared to manual operation. Moreover, it was shown that automation enhances accuracy and precision compared to manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes of the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure for biomass concentration so that errors from morphological changes can be excluded.
The newly established assay proved to be applicable for absolute quantification of algal lipids avoiding limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability, as well as experimental throughput simultaneously minimizing the needed hands-on-time to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher throughput phototrophic cultivation and thereby contributes to boosting the time efficiency for setting up algae lipid production processes.
Assaying gene function by growth competition experiment.
Merritt, Joshua; Edwards, Jeremy S
2004-07-01
High-throughput screening and analysis is one of the emerging paradigms in biotechnology. In particular, high-throughput methods are essential in the field of functional genomics because of the vast amount of data generated in recent and ongoing genome sequencing efforts. In this report we discuss integrated functional analysis methodologies which incorporate both a growth competition component and a highly parallel assay used to quantify results of the growth competition. Several applications of the two most widely used technologies in the field, i.e., transposon mutagenesis and deletion strain library growth competition, and individual applications of several developing or less widely reported technologies are presented.
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution, and; 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high throughput, flexible bioinformatics pipelines. PMID:18269742
Heiger-Bernays, Wendy J; Wegner, Susanna; Dix, David J
2018-01-16
The presence of industrial chemicals, consumer product chemicals, and pharmaceuticals is well documented in waters in the U.S. and globally. Most of these chemicals lack health-protective guidelines and many have been shown to have endocrine bioactivity. There is currently no systematic or national prioritization for monitoring waters for chemicals with endocrine disrupting activity. We propose ambient water bioactivity concentrations (AWBCs) generated from high throughput data as a health-based screen for endocrine bioactivity of chemicals in water. The U.S. EPA ToxCast program has screened over 1800 chemicals for estrogen receptor (ER) and androgen receptor (AR) pathway bioactivity. AWBCs are calculated for 110 ER and 212 AR bioactive chemicals using high throughput ToxCast data from in vitro screening assays and predictive pathway models, high-throughput toxicokinetic data, and data-driven assumptions about consumption of water. Chemical-specific AWBCs are compared with measured water concentrations in data sets from the greater Denver area, Minnesota lakes, and Oregon waters, demonstrating a framework for identifying endocrine bioactive chemicals. This approach can be used to screen potential cumulative endocrine activity in drinking water and to inform prioritization of future monitoring, chemical testing and pollution prevention efforts.
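At its core, an AWBC converts a bioactive oral equivalent dose back into a water concentration via an assumed drinking-water intake. The sketch below shows that arithmetic in minimal form; the function name, body weight, and intake values are illustrative assumptions for the example, not the parameters actually used by the authors.

```python
# Minimal sketch of an ambient water bioactivity concentration (AWBC)
# calculation. Body weight and daily water intake are illustrative
# placeholder values, not the study's data-driven assumptions.

def awbc_mg_per_l(oed_mg_per_kg_day, body_weight_kg=80.0, water_l_per_day=2.5):
    """Water concentration that delivers the oral equivalent dose
    at the assumed body weight and daily consumption."""
    return oed_mg_per_kg_day * body_weight_kg / water_l_per_day

def exceeds_awbc(measured_mg_per_l, awbc):
    """Flag a measured concentration at or above the bioactivity screen."""
    return measured_mg_per_l >= awbc

awbc = awbc_mg_per_l(0.05)        # hypothetical OED of 0.05 mg/kg/day
print(awbc)                       # -> 1.6 (mg/L)
print(exceeds_awbc(0.2, awbc))    # -> False: measured level is below screen
```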
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daum, Christopher; Zane, Matthew; Han, James
2011-01-31
The U.S. Department of Energy (DOE) Joint Genome Institute's (JGI) Production Sequencing group is committed to the generation of high-quality genomic DNA sequence to support the mission areas of renewable energy generation, global carbon management, and environmental characterization and clean-up. Within the JGI's Production Sequencing group, a robust Illumina Genome Analyzer and HiSeq pipeline has been established. Optimization of the sequencer pipelines has been ongoing with the aim of continual process improvement of the laboratory workflow, reducing operational costs and project cycle times to increase sample throughput, and improving the overall quality of the sequence generated. A sequence QC analysis pipeline has been implemented to automatically generate read and assembly level quality metrics. The foremost of these optimization projects, along with sequencing and operational strategies, throughput numbers, and sequencing quality results will be presented.
NASA Astrophysics Data System (ADS)
Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.
2017-05-01
A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications where a stable bit is needed to store information, in this work, the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily change the MTJ state into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64 × 64 MTJ array into CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very low energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of the voltage-controlled MTJ switching.
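The reported figures invite a quick consistency check: if each junction in the array yields one random bit per read cycle, the quoted aggregate throughput implies a particular cycle rate. The sketch below is a back-of-envelope calculation under that assumption; the cycle rate is inferred here, not stated in the abstract.

```python
# Back-of-envelope check of the quoted MRNG throughput, assuming the
# 64 x 64 array produces one random bit per junction per read cycle.
bits_per_cycle = 64 * 64          # 4096 MTJs read in parallel
throughput_bps = 32e9             # 32 Gbps, as reported
cycle_rate_hz = throughput_bps / bits_per_cycle

print(cycle_rate_hz / 1e6)        # implied cycle rate in MHz (~7.8)

# Energy check: 31.5 fJ/bit at 32 Gbps implies about 1 mW of core power.
power_w = 31.5e-15 * throughput_bps
print(power_w * 1e3)              # ~1.0 mW
```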
Quantifying collagen orientation in breast tissue biopsies using SLIM (Conference Presentation)
NASA Astrophysics Data System (ADS)
Majeed, Hassaan; Okoro, Chukwuemeka; Balla, Andre; Toussaint, Kimani C.; Popescu, Gabriel
2017-02-01
Breast cancer is a major public health problem worldwide, being the most common type of cancer among women according to the World Health Organization (WHO). The WHO has further stressed the importance of an early determination of the disease course through prognostic markers. Recent studies have shown that the alignment of collagen fibers in tumor-adjacent stroma correlates with poorer health outcomes in patients. Such studies have typically been carried out using Second-Harmonic Generation (SHG) microscopy. SHG images are very useful for quantifying collagen fiber orientation due to their specificity to non-centrosymmetric structures in tissue, leading to high contrast in collagen-rich areas. However, the imaging throughput in SHG microscopy is limited by its point scanning geometry. In this work, we show that SLIM, a wide-field high-throughput QPI technique, can be used to obtain the same information on collagen fiber orientation as is obtainable through SHG microscopy. We imaged a tissue microarray containing both benign and malignant cores using both SHG microscopy and SLIM. The cellular (non-collagenous) structures in the SLIM images were next segmented out using an algorithm developed in-house. Using the previously published Fourier Transform Second Harmonic Generation (FT-SHG) tool, the fiber orientations in SHG and segmented SLIM images were then quantified. The resulting histograms of fiber orientation angles showed that both SHG and SLIM generate similar measurements of collagen fiber orientation. The SLIM modality, however, can generate these results at much higher throughput due to its wide-field, whole-slide scanning capabilities.
Sun, Changhong; Fan, Yu; Li, Juan; Wang, Gancheng; Zhang, Hanshuo; Xi, Jianzhong Jeff
2015-02-01
Transcription activator-like effectors (TALEs) are becoming powerful DNA-targeting tools in a variety of mammalian cells and model organisms. However, generating a stable cell line with specific gene mutations in a simple and rapid manner remains a challenging task. Here, we report a new method to efficiently produce monoclonal cells using integrated TALE nuclease technology and a series of high-throughput cell cloning approaches. Following this method, we obtained three mTOR mutant 293T cell lines within 2 months, which included one homozygous mutant line. © 2014 Society for Laboratory Automation and Screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad
With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have in the generation of phase diagrams and beyond.
De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A
2002-06-01
Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources, and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequency of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based, 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.
Next Generation Sequencing at the University of Chicago Genomics Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faber, Pieter
2013-04-24
The University of Chicago Genomics Core provides University of Chicago investigators (and external clients) access to state-of-the-art genomics capabilities: next generation sequencing, Sanger sequencing / genotyping, and micro-arrays (gene expression, genotyping, and methylation). The current presentation will highlight our capabilities in the area of ultra-high throughput sequencing analysis.
Liu, Ju; Li, Ruihua; Liu, Kun; Li, Liangliang; Zai, Xiaodong; Chi, Xiangyang; Fu, Ling; Xu, Junjie; Chen, Wei
2016-04-22
High-throughput sequencing of the antibody repertoire provides a large number of antibody variable region sequences that can be used to generate human monoclonal antibodies. However, current screening methods for identifying antigen-specific antibodies are inefficient. In the present study, we developed an antibody clone screening strategy based on clone dynamics and relative frequency, and used it to identify antigen-specific human monoclonal antibodies. Enzyme-linked immunosorbent assay showed that at least 52% of putative positive immunoglobulin heavy chains composed antigen-specific antibodies. Combining information on dynamics and relative frequency improved identification of positive clones and elimination of negative clones, increasing the credibility of putative positive clones. Therefore the screening strategy could simplify the subsequent experimental screening and may facilitate the generation of antigen-specific antibodies. Copyright © 2016 Elsevier Inc. All rights reserved.
"First generation" automated DNA sequencing technology.
Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M
2011-10-01
Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.
The draft genome sequence of cork oak
Ramos, António Marcos; Usié, Ana; Barbosa, Pedro; Barros, Pedro M.; Capote, Tiago; Chaves, Inês; Simões, Fernanda; Abreu, Isabl; Carrasquinho, Isabel; Faro, Carlos; Guimarães, Joana B.; Mendonça, Diogo; Nóbrega, Filomena; Rodrigues, Leandra; Saibo, Nelson J. M.; Varela, Maria Carolina; Egas, Conceição; Matos, José; Miguel, Célia M.; Oliveira, M. Margarida; Ricardo, Cândido P.; Gonçalves, Sónia
2018-01-01
Cork oak (Quercus suber) is native to southwest Europe and northwest Africa where it plays a crucial environmental and economical role. To tackle the cork oak production and industrial challenges, advanced research is imperative but dependent on the availability of a sequenced genome. To address this, we produced the first draft version of the cork oak genome. We followed a de novo assembly strategy based on high-throughput sequence data, which generated a draft genome comprising 23,347 scaffolds and 953.3 Mb in size. A total of 79,752 genes and 83,814 transcripts were predicted, including 33,658 high-confidence genes. An InterPro signature assignment was detected for 69,218 transcripts, which represented 82.6% of the total. Validation studies demonstrated the genome assembly and annotation completeness and highlighted the usefulness of the draft genome for read mapping of high-throughput sequence data generated using different protocols. All data generated is available through the public databases where it was deposited, being therefore ready to use by the academic and industry communities working on cork oak and/or related species. PMID:29786699
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
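The abstract's key point is that aligning each query read against an indexed reference is an independent computation, which is what makes the workload map well onto thousands of GPU threads. The toy sketch below illustrates that structure on a CPU, substituting a simple k-mer dictionary for MUMmer's suffix tree; it is not MUMmerGPU's algorithm, and all names and sequences are illustrative.

```python
# Toy analogue of the exact-match step MUMmerGPU parallelizes: build an
# index over the reference once, then look up each read independently.
# A k-mer dictionary stands in for the suffix tree used by MUMmer.
from collections import defaultdict

def index_reference(ref, k):
    """Map every length-k substring of the reference to its start positions."""
    idx = defaultdict(list)
    for i in range(len(ref) - k + 1):
        idx[ref[i:i + k]].append(i)
    return idx

def match_reads(reads, idx):
    # Each lookup is independent of the others -- the unit of parallelism
    # that the GPU version distributes across threads.
    return {read: idx.get(read, []) for read in reads}

ref = "ACGTACGTGACG"
idx = index_reference(ref, 4)
print(match_reads(["ACGT", "GACG", "TTTT"], idx))
# -> {'ACGT': [0, 4], 'GACG': [8], 'TTTT': []}
```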
Incorporating High-Throughput Exposure Predictions with ...
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast™ efforts expand (i.e., Phase II) beyond food-use pesticides towards a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated using probabilistic modeling and Bayesian approaches generated by the U.S. EPA ExpoCast™ program. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. This HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study, whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, three and 13 chemicals possessed AERs <1 and <100, respectively. Diverse bioactivity across a range of assays and concentrations was also noted across the wider chemical space.
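The AER described above is a simple ratio: the oral equivalent dose at which bioactivity appears, divided by the predicted exposure, with small values flagging chemicals where exposure approaches bioactive levels. The sketch below illustrates that ranking logic only; the chemical names and numbers are invented for the example and do not come from the study.

```python
# Illustrative activity-to-exposure ratio (AER) ranking. Values are
# (OED, predicted exposure) in mg/kg/day; all entries are hypothetical.
chemicals = {
    "chemA": (0.1, 0.5),    # bioactive near predicted exposure -> low AER
    "chemB": (10.0, 0.01),  # large margin -> high AER
    "chemC": (1.0, 0.05),
}

aers = {name: oed / exposure for name, (oed, exposure) in chemicals.items()}

# Smallest AER = smallest margin between bioactivity and exposure,
# so it ranks highest for follow-up.
ranked = sorted(aers, key=aers.get)
print(ranked)                                   # -> ['chemA', 'chemC', 'chemB']
print(sum(1 for a in aers.values() if a < 100)) # count with AER < 100
```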
Microfabrication of a platform to measure and manipulate the mechanics of engineered microtissues.
Ramade, Alexandre; Legant, Wesley R; Picart, Catherine; Chen, Christopher S; Boudou, Thomas
2014-01-01
Engineered tissues can be used to understand fundamental features of biology, develop organotypic in vitro model systems, and as engineered tissue constructs for replacing damaged tissue in vivo. However, a key limitation is an inability to test the wide range of parameters that might impact the engineered tissue in a high-throughput manner and in an environment that mimics the three-dimensional (3D) native architecture. We developed a microfabricated platform to generate arrays of microtissues embedded within 3D micropatterned matrices. Microcantilevers simultaneously constrain microtissue formation and report forces generated by the microtissues in real time, opening the possibility to use high-throughput, low-volume screening for studies on engineered tissues. Thanks to the micrometer scale of the microtissues, this platform is also suitable for high-throughput monitoring of drug-induced effect on architecture and contractility in engineered tissues. Moreover, independent variations of the mechanical stiffness of the cantilevers and collagen matrix allow the measurement and manipulation of the mechanics of the microtissues. Thus, our approach will likely provide valuable opportunities to elucidate how biomechanical, electrical, biochemical, and genetic/epigenetic cues modulate the formation and maturation of 3D engineered tissues. In this chapter, we describe the microfabrication, preparation, and experimental use of such microfabricated tissue gauges. Copyright © 2014 Elsevier Inc. All rights reserved.
A high-throughput, multi-channel photon-counting detector with picosecond timing
NASA Astrophysics Data System (ADS)
Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.
2009-06-01
High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies; small pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.
ac electroosmotic pumping induced by noncontact external electrodes
Wang, Shau-Chun; Chen, Hsiao-Ping; Chang, Hsueh-Chia
2007-01-01
Electroosmotic (EO) pumps based on dc electroosmosis are plagued by bubble generation and other electrochemical reactions at the electrodes at voltages beyond 1 V for electrolytes. These disadvantages limit their throughput and offset their portability advantage over mechanical syringe or pneumatic pumps. ac electroosmotic pumps at high frequency (>100 kHz) circumvent the bubble problem by inducing polarization and slip velocity on embedded electrodes, but they require complex electrode designs to produce a net flow. We report a new high-throughput ac EO pump design based on induced polarization on the entire channel surface instead of just on the electrodes. Like dc EO pumps, our pump electrodes are outside of the load section and form a cm-long pump unit consisting of three circular reservoirs (3 mm in diameter) connected by a 1×1 mm channel. The field-induced polarization can produce an effective zeta potential exceeding 1 V and an ac slip velocity estimated at 1 mm∕sec or higher, both one order of magnitude higher than in earlier dc and ac pumps, giving rise to a maximum throughput of 1 μl∕sec. Polarization over the entire channel surface, quadratic scaling with respect to the field, and high voltage at high frequency without electrode bubble generation are the reasons why the current pump is superior to earlier dc and ac EO pumps. PMID:19693362
Inertial-ordering-assisted droplet microfluidics for high-throughput single-cell RNA-sequencing.
Moon, Hui-Sung; Je, Kwanghwi; Min, Jae-Woong; Park, Donghyun; Han, Kyung-Yeon; Shin, Seung-Ho; Park, Woong-Yang; Yoo, Chang Eun; Kim, Shin-Hyun
2018-02-27
Single-cell RNA-seq reveals the cellular heterogeneity inherent in a population of cells, which is very important in many clinical and research applications. Recent advances in droplet microfluidics have achieved the automatic isolation, lysis, and labeling of single cells in droplet compartments without complex instrumentation. However, barcoding errors caused by multiple beads in a droplet during cell encapsulation, and the insufficient throughput that results from keeping bead concentrations low to avoid them, remain important challenges for precise and efficient expression profiling of single cells. In this study, we developed a new droplet-based microfluidic platform that significantly improved throughput while reducing barcoding errors through deterministic encapsulation of inertially ordered beads. Highly concentrated beads containing oligonucleotide barcodes were spontaneously ordered in a spiral channel by an inertial effect and were in turn encapsulated in droplets one by one, while cells were simultaneously encapsulated in the same droplets. The deterministic encapsulation of beads resulted in a high fraction of single-bead droplets and rare multiple-bead droplets even though the bead concentration increased to 1000 μl⁻¹, which diminished barcoding errors and enabled accurate high-throughput barcoding. We successfully validated our device with single-cell RNA-seq. In addition, we found that multiple beads in a droplet, generated using a normal Drop-Seq device with a high concentration of beads, underestimated transcript numbers and overestimated cell numbers. This accurate high-throughput platform can expand the capability and practicality of Drop-Seq in single-cell analysis.
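The trade-off the authors escape is set by random (Poisson) bead loading: raising the mean occupancy raises throughput but also the multiple-bead fraction. A minimal sketch of that arithmetic, with a hypothetical mean occupancy and not the authors' code:

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k beads in a droplet) under random Poisson loading."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Hypothetical mean bead occupancy per droplet for a dilute suspension.
lam = 0.1
p_empty = poisson_pmf(0, lam)          # droplet wasted, no barcode
p_single = poisson_pmf(1, lam)         # the useful case
p_multi = 1.0 - p_empty - p_single     # source of barcoding errors

print(f"empty: {p_empty:.3f}, single: {p_single:.3f}, multi: {p_multi:.4f}")
```

At this dilution roughly 90% of droplets carry no bead at all, which is exactly the throughput penalty that deterministic, inertially ordered encapsulation avoids.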
Robotic Patterning a Superhydrophobic Surface for Collective Cell Migration Screening.
Pang, Yonggang; Yang, Jing; Hui, Zhixin; Grottkau, Brian E
2018-04-01
Collective cell migration, in which cells migrate as a group, is fundamental in many biological and pathological processes. There is increasing interest in studying collective cell migration in high throughput. Cell scratching, insertion blocker, and gel-dissolving techniques are some methodologies used previously. However, these methods have the drawbacks of cell damage, substrate surface alteration, limited medium exchange, and solvent interference. The superhydrophobic surface, on which the water contact angle is greater than 150 degrees, has recently been utilized to generate patterned arrays. Independent cell culture areas can be generated on a substrate that functions the same as a conventional multiple-well plate. However, so far there has been no report on superhydrophobic patterning for the study of cell migration. In this study, we report on the successful development of a robotically patterned superhydrophobic array for studying collective cell migration in high throughput. The array was developed on a rectangular single-well cell culture plate consisting of hydrophilic flat microwells separated by the superhydrophobic surface. The manufacturing process is robotic and includes patterning discrete protective masks onto the substrate using 3D printing, robotic spray coating of silica nanoparticles, robotic mask removal, robotic mini silicone blocker patterning, automatic cell seeding, and liquid handling. Compared with a standard 96-well plate, our system increases the throughput by 2.25-fold and generates a cell-free area in each well non-destructively. Our system also demonstrates higher efficiency than the conventional way of liquid handling using microwell plates, and a shorter processing time than manual operation in migration assays. The superhydrophobic surface had no negative impact on cell viability.
Using our system, we studied the collective migration of human umbilical vein endothelial cells and cancer cells using assays of endpoint quantification, dynamic cell tracking, and migration quantification following varied drug treatments. This system provides a versatile platform to study collective cell migration in high throughput for a broad range of applications.
Wright, Imogen A.; Travers, Simon A.
2014-01-01
The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618
fluff: exploratory analysis and visualization of high-throughput sequencing data
Georgiou, Georgios
2016-01-01
Summary. In this article we describe fluff, a software package that allows for simple exploration, clustering and visualization of high-throughput sequencing data mapped to a reference genome. The package contains three command-line tools to generate publication-quality figures in an uncomplicated manner using sensible defaults. Genome-wide data can be aggregated, clustered and visualized in a heatmap, according to different clustering methods. This includes a predefined setting to identify dynamic clusters between different conditions or developmental stages. Alternatively, clustered data can be visualized in a bandplot. Finally, fluff includes a tool to generate genomic profiles. As command-line tools, the fluff programs can easily be integrated into standard analysis pipelines. The installation is straightforward and documentation is available at http://fluff.readthedocs.org. Availability. fluff is implemented in Python and runs on Linux. The source code is freely available for download at https://github.com/simonvh/fluff. PMID:27547532
On-chip polarimetry for high-throughput screening of nanoliter and smaller sample volumes
NASA Technical Reports Server (NTRS)
Bachmann, Brian O. (Inventor); Bornhop, Darryl J. (Inventor); Dotson, Stephen (Inventor)
2012-01-01
A polarimetry technique for measuring optical activity that is particularly suited for high throughput screening employs a chip or substrate (22) having one or more microfluidic channels (26) formed therein. A polarized laser beam (14) is directed onto optically active samples that are disposed in the channels. The incident laser beam interacts with the optically active molecules in the sample, which slightly alter the polarization of the laser beam as it passes multiple times through the sample. Interference fringe patterns (28) are generated by the interaction of the laser beam with the sample and the channel walls. A photodetector (34) is positioned to receive the interference fringe patterns and generate an output signal that is input to a computer or other analyzer (38) for analyzing the signal and determining the rotation of plane polarized light by optically active material in the channel from polarization rotation calculations.
Re-engineering adenovirus vector systems to enable high-throughput analyses of gene function.
Stanton, Richard J; McSharry, Brian P; Armstrong, Melanie; Tomasec, Peter; Wilkinson, Gavin W G
2008-12-01
With the enhanced capacity of bioinformatics to interrogate extensive banks of sequence data, more efficient technologies are needed to test gene function predictions. Replication-deficient recombinant adenovirus (Ad) vectors are widely used in expression analysis since they provide for extremely efficient expression of transgenes in a wide range of cell types. To facilitate rapid, high-throughput generation of recombinant viruses, we have re-engineered an adenovirus vector (designated AdZ) to allow single-step, directional gene insertion using recombineering technology. Recombineering allows for direct insertion into the Ad vector of PCR products, synthesized sequences, or oligonucleotides encoding shRNAs, without requirement for a transfer vector. Vectors were optimized for high-throughput applications by making them "self-excising" through incorporation of the I-SceI homing endonuclease into the vector, removing the need to linearize vectors prior to transfection into packaging cells. AdZ vectors allow genes to be expressed in their native form or with strep, V5, or GFP tags. Insertion of tetracycline operators downstream of the human cytomegalovirus major immediate early (HCMV MIE) promoter permits silencing of transgenes in helper cells expressing the tet repressor, thus making the vector compatible with the cloning of toxic gene products. The AdZ vector system is robust, straightforward, and suited to both sporadic and high-throughput applications.
GlycoExtractor: a web-based interface for high throughput processing of HPLC-glycan data.
Artemenko, Natalia V; Campbell, Matthew P; Rudd, Pauline M
2010-04-05
Recently, an automated high-throughput HPLC platform has been developed that can be used to fully sequence and quantify low concentrations of N-linked sugars released from glycoproteins, supported by an experimental database (GlycoBase) and analytical tools (autoGU). However, commercial packages that support the operation of HPLC instruments and data storage lack platforms for the extraction of large volumes of data. The lack of resources and agreed formats in glycomics is now a major limiting factor that restricts the development of bioinformatic tools and automated workflows for high-throughput HPLC data analysis. GlycoExtractor is a web-based tool that interfaces with a commercial HPLC database/software solution to facilitate the extraction of large volumes of processed glycan profile data (peak number, peak areas, and glucose unit values). The tool allows the user to export a series of sample sets to a set of file formats (XML, JSON, and CSV) rather than a collection of disconnected files. This approach not only reduces the amount of manual refinement required to export data into a suitable format for data analysis but also opens the field to new approaches for high-throughput data interpretation and storage, including biomarker discovery and validation and monitoring of online bioprocessing conditions for next generation biotherapeutics.
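The export step described above, turning processed peak data (peak number, peak area, glucose unit value) into structured JSON or CSV rather than disconnected files, can be sketched as follows. The field names and values are hypothetical, not GlycoExtractor's actual schema:

```python
import csv
import io
import json

# Hypothetical processed glycan profile for one sample:
# peak number, integrated peak area, glucose units (GU).
peaks = [
    {"peak": 1, "area": 12.4, "gu": 5.2},
    {"peak": 2, "area": 33.1, "gu": 6.8},
]

# Single structured JSON document instead of disconnected files.
json_out = json.dumps({"sample": "S1", "peaks": peaks}, indent=2)

# Equivalent flat CSV export of the same records.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["peak", "area", "gu"])
writer.writeheader()
writer.writerows(peaks)
csv_out = buf.getvalue()

print(json_out)
print(csv_out)
```

Emitting one machine-readable document per sample set is what makes downstream analyses such as biomarker discovery scriptable, which is the point the abstract makes about agreed formats.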
High Throughput, Polymeric Aqueous Two-Phase Printing of Tumor Spheroids
Atefi, Ehsan; Lemmo, Stephanie; Fyffe, Darcy; Luker, Gary D.; Tavana, Hossein
2014-01-01
This paper presents a new 3D culture microtechnology for high throughput production of tumor spheroids and validates its utility for screening anti-cancer drugs. We use two immiscible polymeric aqueous solutions and microprint a submicroliter drop of the “patterning” phase containing cells into a bath of the “immersion” phase. Selecting proper formulations of biphasic systems using a panel of biocompatible polymers results in the formation of a round drop that confines cells to facilitate spontaneous formation of a spheroid without any external stimuli. Adapting this approach to robotic tools enables straightforward generation and maintenance of spheroids of well-defined size in standard microwell plates and biochemical analysis of spheroids in situ, which is not possible with existing techniques for spheroid culture. To enable high throughput screening, we establish a phase diagram to identify minimum cell densities within specific volumes of the patterning drop to result in a single spheroid. Spheroids show normal growth over long-term incubation and dose-dependent decrease in cellular viability when treated with drug compounds, but present significant resistance compared to monolayer cultures. The unprecedented ease of implementing this microtechnology and its robust performance will benefit high throughput studies of drug screening against cancer cells with physiologically-relevant 3D tumor models. PMID:25411577
Species-Specific Predictive Signatures of Developmental Toxicity Using the ToxCast Chemical Library
EPA’s ToxCast™ project is profiling the in vitro bioactivity of chemicals to generate predictive signatures that correlate with observed in vivo toxicity. In vitro profiling methods from ToxCast data consist of over 600 high-throughput screening (HTS) and high-content screening ...
Howard, Dougal P; Marchand, Peter; McCafferty, Liam; Carmalt, Claire J; Parkin, Ivan P; Darr, Jawwad A
2017-04-10
High-throughput continuous hydrothermal flow synthesis was used to generate a library of aluminum- and gallium-codoped zinc oxide nanoparticles of specific atomic ratios. Resistivities of the materials were determined by Hall effect measurements on heat-treated pressed discs and the results collated into a conductivity-composition map. Optimal resistivities of ∼9 × 10⁻³ Ω cm were reproducibly achieved for several samples, for example, codoped ZnO with 2 at% Ga and 1 at% Al. The optimum sample on balance of performance and cost was deemed to be ZnO codoped with 3 at% Al and 1 at% Ga.
High-throughput ab-initio dilute solute diffusion database.
Wu, Henry; Mayeshiba, Tam; Morgan, Dane
2016-07-19
We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
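A weighted RMS error like the 0.176 eV figure quoted above can be computed as follows; the barrier values and weights below are hypothetical, for illustration only:

```python
import math

def weighted_rmse(predicted, observed, weights):
    """Weighted root-mean-square error between predicted and
    observed activation barriers (eV)."""
    num = sum(w * (p - o) ** 2
              for p, o, w in zip(predicted, observed, weights))
    return math.sqrt(num / sum(weights))

# Hypothetical DFT-predicted vs experimental barriers (eV).
pred = [1.30, 0.95, 2.10]
obs  = [1.25, 1.05, 2.00]
w    = [1.0, 2.0, 1.0]   # e.g. weight by measurement confidence

print(f"{weighted_rmse(pred, obs, w):.3f} eV")
```

Weighting lets better-characterized experimental systems count for more in the error metric, which matters when comparing a computed database against experimental diffusion data of uneven quality.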
Next generation platforms for high-throughput biodosimetry.
Repin, Mikhail; Turner, Helen C; Garty, Guy; Brenner, David J
2014-06-01
Here we describe the general concept of combining plates and tubes in racks compatible with the American National Standards Institute/Society for Laboratory Automation and Screening microplate formats as next-generation platforms for increasing the throughput of biodosimetry assays. These platforms can be used at different stages of biodosimetry assays, starting from blood collection into microtubes organised in standardised racks and ending with the cytogenetic analysis of samples in standardised multiwell and multichannel plates. Robotically friendly platforms can be used for different biodosimetry assays in minimally equipped laboratories and on cost-effective automated universal biotech systems. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A gas trapping method for high-throughput metabolic experiments.
Krycer, James R; Diskin, Ciana; Nelson, Marin E; Zeng, Xiao-Yi; Fazakerley, Daniel J; James, David E
2018-01-01
Research into cellular metabolism has become more high-throughput, with typical cell-culture experiments being performed in multiwell plates (microplates). This format presents a challenge when trying to collect gaseous products, such as carbon dioxide (CO2), which requires a sealed environment and a vessel separate from the biological sample. To address this limitation, we developed a gas trapping protocol using perforated plastic lids in sealed cell-culture multiwell plates. We used this trap design to measure CO2 production from glucose and fatty acid metabolism, as well as hydrogen sulfide production from cysteine-treated cells. Our data clearly show that this gas trap can be applied to liquid and solid gas-collection media and can be used to study gaseous product generation by both adherent cells and cells in suspension. Since our gas traps can be adapted to multiwell plates of various sizes, they present a convenient, cost-effective solution that can accommodate the trend toward high-throughput measurements in metabolic research.
A high-throughput screening approach for the optoelectronic properties of conjugated polymers.
Wilbraham, Liam; Berardo, Enrico; Turcani, Lukas; Jelfs, Kim E; Zwijnenburg, Martijn A
2018-06-25
We propose a general high-throughput virtual screening approach for the optical and electronic properties of conjugated polymers. This approach makes use of the recently developed xTB family of low-computational-cost density functional tight-binding methods from Grimme and co-workers, calibrated here to (TD-)DFT data computed for a representative diverse set of (co-)polymers. Parameters drawn from the resulting calibration using a linear model can then be applied to the xTB derived results for new polymers, thus generating near DFT-quality data with orders of magnitude reduction in computational cost. As a result, after an initial computational investment for calibration, this approach can be used to quickly and accurately screen on the order of thousands of polymers for target applications. We also demonstrate that the (opto)electronic properties of the conjugated polymers show only a very minor variation when considering different conformers and that the results of high-throughput screening are therefore expected to be relatively insensitive with respect to the conformer search methodology applied.
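The calibration step described above amounts to fitting a linear map from cheap xTB results to (TD-)DFT reference values on a training set, then applying it to new polymers. A minimal sketch with hypothetical gap energies (not the paper's data or parameters):

```python
# Fit y ≈ a*x + b from low-cost xTB optical gaps (x) to TD-DFT
# reference gaps (y), then apply the fit to xTB-only predictions.
# All numbers below are hypothetical.
def fit_linear(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

xtb_train = [2.1, 2.6, 3.0, 3.4]   # xTB gaps for calibration set (eV)
dft_train = [1.9, 2.5, 3.1, 3.6]   # TD-DFT reference gaps (eV)

a, b = fit_linear(xtb_train, dft_train)
# New polymers screened with xTB only, corrected to near-DFT quality.
calibrated = [a * x + b for x in [2.8, 3.2]]
print(a, b, calibrated)
```

The one-off cost is the DFT calculations for the calibration set; every subsequent polymer needs only the tight-binding calculation plus this linear correction, which is where the orders-of-magnitude speedup comes from.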
High-throughput technology for novel SO2 oxidation catalysts
Loskyll, Jonas; Stoewe, Klaus; Maier, Wilhelm F
2011-01-01
We review the state of the art and explain the need for better SO2 oxidation catalysts for the production of sulfuric acid. A high-throughput technology has been developed for the study of potential catalysts in the oxidation of SO2 to SO3. High-throughput methods are reviewed and the problems encountered with their adaptation to the corrosive conditions of SO2 oxidation are described. We show that while emissivity-corrected infrared thermography (ecIRT) can be used for primary screening, it is prone to errors because of the large variations in the emissivity of the catalyst surface. UV-visible (UV-Vis) spectrometry was selected instead as a reliable analysis method of monitoring the SO2 conversion. Installing plain sugar absorbents at reactor outlets proved valuable for the detection and quantitative removal of SO3 from the product gas before the UV-Vis analysis. We also give an overview of the elements used for prescreening and of those remaining after screening of the first catalyst generations. PMID:27877427
Ngo, Tony; Coleman, James L J; Smith, Nicola J
2015-01-01
Orphan G protein-coupled receptors represent an underexploited resource for drug discovery but pose a considerable challenge for assay development because their cognate G protein signaling pathways are often unknown. In this methodological chapter, we describe the use of constitutive activity, that is, the inherent ability of receptors to couple to their cognate G proteins in the absence of ligand, to inform the development of high-throughput screening assays for a particular orphan receptor. We specifically focus on a two-step process, whereby constitutive G protein coupling is first determined using yeast Gpa1/human G protein chimeras linked to growth and β-galactosidase generation. Coupling selectivity is then confirmed in mammalian cells expressing endogenous G proteins and driving accumulation of transcription factor-fused luciferase reporters specific to each of the classes of G protein. Based on these findings, high-throughput screening campaigns can be performed on the already miniaturized mammalian reporter system.
High-throughput characterization for solar fuels materials discovery
NASA Astrophysics Data System (ADS)
Mitrovic, Slobodan; Becerra, Natalie; Cornell, Earl; Guevarra, Dan; Haber, Joel; Jin, Jian; Jones, Ryan; Kan, Kevin; Marcin, Martin; Newhouse, Paul; Soedarmadji, Edwin; Suram, Santosh; Xiang, Chengxiang; Gregoire, John; High-Throughput Experimentation Team
2014-03-01
In this talk I will present the status of the High-Throughput Experimentation (HTE) project of the Joint Center for Artificial Photosynthesis (JCAP). JCAP is an Energy Innovation Hub of the U.S. Department of Energy with a mandate to deliver a solar fuel generator based on an integrated photoelectrochemical cell (PEC). However, efficient and commercially viable catalysts or light absorbers for the PEC do not exist. The mission of HTE is to provide accelerated discovery through combinatorial synthesis and rapid screening of material properties. The HTE pipeline also features high-throughput material characterization using x-ray diffraction and x-ray photoemission spectroscopy (XPS). I describe the currently operating pipeline and focus on our combinatorial XPS efforts to build the largest free database of spectra from mixed-metal oxides, nitrides, sulfides, and alloys. This work was performed at Joint Center for Artificial Photosynthesis, a DOE Energy Innovation Hub, supported through the Office of Science of the U.S. Department of Energy under Award No. DE-SC0004993.
Structuring intuition with theory: The high-throughput way
NASA Astrophysics Data System (ADS)
Fornari, Marco
2015-03-01
First principles methodologies have grown in accuracy and applicability to the point where large databases can be built, shared, and analyzed with the goal of predicting novel compositions, optimizing functional properties, and discovering unexpected relationships between the data. In order to be useful to a large community of users, data should be standardized, validated, and distributed. In addition, tools to easily manage large datasets should be made available to effectively lead to materials development. Within the AFLOW consortium we have developed a simple frame to expand, validate, and mine data repositories: the MTFrame. Our minimalistic approach complements AFLOW and other existing high-throughput infrastructures and aims to integrate data generation with data analysis. We present a few examples from our work on materials for energy conversion. Our intent is to pinpoint the usefulness of high-throughput methodologies to guide the discovery process by quantitatively structuring the scientific intuition. This work was supported by ONR-MURI under Contract N00014-13-1-0635 and the Duke University Center for Materials Genomics.
An automated high throughput tribometer for adhesion, wear, and friction measurements
NASA Astrophysics Data System (ADS)
Kalihari, Vivek; Timpe, Shannon J.; McCarty, Lyle; Ninke, Matthew; Whitehead, Jim
2013-03-01
Understanding the origin and correlation of different surface properties under a multitude of operating conditions is critical in tribology. Diverse tribological properties and a lack of a single instrument to measure all make it difficult to compare and correlate properties, particularly in light of the wide range of interfaces commonly investigated. In the current work, a novel automated tribometer has been designed and validated, providing a unique experimental platform capable of high throughput adhesion, wear, kinetic friction, and static friction measurements. The innovative design aspects are discussed that allow for a variety of probes, sample surfaces, and testing conditions. Critical components of the instrument and their design criteria are described along with examples of data collection schemes. A case study is presented with multiple surface measurements performed on a set of characteristic substrates. Adhesion, wear, kinetic friction, and static friction are analyzed and compared across surfaces, highlighting the comprehensive nature of the surface data that can be generated using the automated high throughput tribometer.
Even-Desrumeaux, Klervi; Baty, Daniel; Chames, Patrick
2010-01-01
Antibody microarrays are among a novel class of rapidly emerging proteomic technologies that will allow us to efficiently perform specific diagnosis and proteome analysis. Recombinant antibody fragments are especially suited for this approach, but their stability is often a limiting factor. Camelids produce functional antibodies devoid of light chains (HCAbs), of which the single N-terminal domain is fully capable of antigen binding. When produced as an independent domain, these so-called single-domain antibody fragments (sdAbs) have several advantages for biotechnological applications thanks to their unique properties of size (15 kDa), stability, solubility, and expression yield. These features should allow sdAbs to outperform other antibody formats in a number of applications, notably as capture molecules for antibody arrays. In this study, we produced antibody microarrays using direct and oriented immobilization of sdAbs produced in crude bacterial lysates to provide proof of principle of a high-throughput-compatible array design. Several sdAb immobilization strategies were explored. Immobilization of in vivo biotinylated sdAbs by direct spotting of bacterial lysate on streptavidin with sandwich detection was developed to achieve high sensitivity and specificity, whereas immobilization of “multi-tagged” sdAbs via anti-tag antibodies with a direct labeled-sample detection strategy was optimized for the design of high-density antibody arrays for high-throughput proteomics and identification of potential biomarkers. PMID:20859568
From big data analysis to personalized medicine for all: challenges and opportunities.
Alyass, Akram; Turcotte, Michelle; Meyre, David
2015-06-27
Recent advances in high-throughput technologies have led to the emergence of systems biology as a holistic science to achieve more precise modeling of complex diseases. Many predict the emergence of personalized medicine in the near future. We are, however, moving from two-tiered health systems to a two-tiered personalized medicine. Omics facilities are restricted to affluent regions, and personalized medicine is likely to widen the growing gap in health systems between high- and low-income countries. This is mirrored by an increasing lag between our ability to generate and to analyze big data. Several bottlenecks slow down the transition from conventional to personalized medicine: generation of cost-effective high-throughput data; hybrid education and multidisciplinary teams; data storage and processing; data integration and interpretation; and individual and global economic relevance. This review provides an update of important developments in the analysis of big data and forward strategies to accelerate the global transition to personalized medicine.
DockoMatic: automated peptide analog creation for high throughput virtual screening.
Jacob, Reed B; Bullock, Casey W; Andersen, Tim; McDougal, Owen M
2011-10-01
The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on Protein Data Bank (PDB) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high throughput capacity of DockoMatic. The DockoMatic graphical user interface invokes the software program Treepack to create user-defined peptide analogs. To validate this approach, DockoMatic-produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein originating from Aplysia californica. The results, consisting of more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal mol⁻¹, and that the orientation of the bound ligand agrees with experimental data to within 1.8 Å root mean square deviation. Evaluation of high throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week. Copyright © 2011 Wiley Periodicals, Inc.
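The pose comparison described above rests on the root mean square deviation (RMSD) between docked and experimentally determined ligand coordinates. As a minimal illustration (not DockoMatic's own code, and assuming the two structures are already superimposed, so no rotational fitting is performed), the metric can be computed as:

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equally sized lists of
    (x, y, z) atom coordinates, assuming pre-superimposed structures."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    total = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(total / len(coords_a))

# Hypothetical docked vs. crystallographic ligand coordinates (Å)
docked = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (1.5, 1.5, 0.0)]
crystal = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (1.6, 1.5, 0.1)]
print(round(rmsd(docked, crystal), 3))  # → 0.129
```

A docked pose within 1.8 Å RMSD of the crystallographic ligand, as reported above, would be considered a close reproduction of the experimental binding mode.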
UPIC + GO: Zeroing in on informative markers
USDA-ARS's Scientific Manuscript database
Microsatellites/SSRs (simple sequence repeats) have become a powerful tool in genomic biology because of their broad range of applications and availability. An efficient method recently developed to generate microsatellite-enriched libraries used in combination with high throughput DNA pyrosequencin...
Identifying populations sensitive to environmental chemicals by simulating toxicokinetic variability
We incorporate inter-individual variability, including variability across demographic subgroups, into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of...
Breeding nursery tissue collection for possible genomic analysis
USDA-ARS's Scientific Manuscript database
Phenotyping is considered a major bottleneck in breeding programs. With new genomic technologies, high throughput genotype schemes are constantly being developed. However, every genomic technology requires phenotypic data to inform prediction models generated from the technology. Forage breeders con...
Analysis of Illumina Microbial Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clum, Alicia; Foster, Brian; Froula, Jeff
2010-05-28
Since the emergence of second-generation sequencing technologies, evaluating different sequencing approaches and their assembly strategies for different types of genomes has become an important undertaking. Next generation sequencing technologies dramatically increase sequence throughput while decreasing cost, making them an attractive tool for whole genome shotgun sequencing. To compare different approaches for de novo whole genome assembly, appropriate tools and a solid understanding of both quantity and quality of the underlying sequence data are crucial. Here, we performed an in-depth analysis of short-read Illumina sequence assembly strategies for bacterial and archaeal genomes. Different types of Illumina libraries as well as different trim parameters and assemblers were evaluated. Results of the comparative analysis and sequencing platforms will be presented. The goal of this analysis is to develop a cost-effective approach for the increased throughput of the generation of high quality microbial genomes.
High-throughput purification of recombinant proteins using self-cleaving intein tags.
Coolbaugh, M J; Shakalli Tang, M J; Wood, D W
2017-01-01
High throughput methods for recombinant protein production using E. coli typically involve the use of affinity tags for simple purification of the protein of interest. One drawback of these techniques is the occasional need for tag removal before study, which can be hard to predict. In this work, we demonstrate two high throughput purification methods for untagged protein targets based on simple and cost-effective self-cleaving intein tags. Two model proteins, E. coli beta-galactosidase (βGal) and superfolder green fluorescent protein (sfGFP), were purified using self-cleaving versions of the conventional chitin-binding domain (CBD) affinity tag and the nonchromatographic elastin-like-polypeptide (ELP) precipitation tag in a 96-well filter plate format. Initial tests with shake flask cultures confirmed that the intein purification scheme could be scaled down, with >90% pure product generated in a single step using both methods. The scheme was then validated in a high throughput expression platform using 24-well plate cultures followed by purification in 96-well plates. For both tags and with both target proteins, the purified product was consistently obtained in a single step, with low well-to-well and plate-to-plate variability. This simple method thus allows the reproducible production of highly pure untagged recombinant proteins in a convenient microtiter plate format. Copyright © 2016 Elsevier Inc. All rights reserved.
Wang, Guang-Li; Yuan, Fang; Gu, Tiantian; Dong, Yuming; Wang, Qian; Zhao, Wei-Wei
2018-02-06
Herein we report a general and novel strategy for high-throughput photoelectrochemical (PEC) enzymatic bioanalysis on the basis of enzyme-initiated quinone-chitosan conjugation chemistry (QCCC). Specifically, the strategy was illustrated by using a model quinone-generating oxidase, tyrosinase (Tyr), to catalytically produce 1,2-benzoquinone or its derivative, which can easily and selectively be conjugated onto the surface of the chitosan-deposited PbS/NiO/FTO photocathode via the QCCC. Upon illumination, the covalently attached quinones could act as electron acceptors of PbS quantum dots (QDs), improving the photocurrent generation and thus allowing the elegant probing of Tyr activity. Enzyme cascades, such as alkaline phosphatase (ALP)/Tyr and β-galactosidase (Gal)/Tyr, were further introduced into the system for the successful probing of the corresponding targets. This work features not only the first use of QCCC in PEC bioanalysis but also the separation of the enzymatic reaction from the photoelectrode as well as the direct signal recording in a split-type protocol, which enables quite convenient and high-throughput detection as compared to previous formats. More importantly, by using numerous other oxidoreductases that involve quinones as reactants/products, this protocol could serve as a common basis for the development of a new class of QCCC-based PEC enzymatic bioanalysis and be further extended for general enzyme-labeled PEC bioanalysis of versatile targets.
Adissu, Hibret A.; Estabel, Jeanne; Sunter, David; Tuck, Elizabeth; Hooks, Yvette; Carragher, Damian M.; Clarke, Kay; Karp, Natasha A.; Sanger Mouse Genetics Project; Newbigging, Susan; Jones, Nora; Morikawa, Lily; White, Jacqueline K.; McKerlie, Colin
2014-01-01
The Mouse Genetics Project (MGP) at the Wellcome Trust Sanger Institute aims to generate and phenotype over 800 genetically modified mouse lines over the next 5 years to gain a better understanding of mammalian gene function and provide an invaluable resource to the scientific community for follow-up studies. Phenotyping includes the generation of a standardized biobank of paraffin-embedded tissues for each mouse line, but histopathology is not routinely performed. In collaboration with the Pathology Core of the Centre for Modeling Human Disease (CMHD) we report the utility of histopathology in a high-throughput primary phenotyping screen. Histopathology was assessed in an unbiased selection of 50 mouse lines with (n=30) or without (n=20) clinical phenotypes detected by the standard MGP primary phenotyping screen. Our findings revealed that histopathology added correlating morphological data in 19 of 30 lines (63.3%) in which the primary screen detected a phenotype. In addition, seven of the 50 lines (14%) presented significant histopathology findings that were not associated with or predicted by the standard primary screen. Three of these seven lines had no clinical phenotype detected by the standard primary screen. Incidental and strain-associated background lesions were present in all mutant lines with good concordance to wild-type controls. These findings demonstrate the complementary and unique contribution of histopathology to high-throughput primary phenotyping of mutant mice. PMID:24652767
Microarray-Based Gene Expression Analysis for Veterinary Pathologists: A Review.
Raddatz, Barbara B; Spitzbarth, Ingo; Matheis, Katja A; Kalkuhl, Arno; Deschl, Ulrich; Baumgärtner, Wolfgang; Ulrich, Reiner
2017-09-01
High-throughput, genome-wide transcriptome analysis is now commonly used in all fields of life science research and is on the cusp of medical and veterinary diagnostic application. Transcriptomic methods such as microarrays and next-generation sequencing generate enormous amounts of data. The pathogenetic expertise acquired from understanding of general pathology provides veterinary pathologists with a profound background, which is essential in translating transcriptomic data into meaningful biological knowledge, thereby leading to a better understanding of underlying disease mechanisms. The scientific literature concerning high-throughput data-mining techniques usually addresses mathematicians or computer scientists as the target audience. In contrast, the present review provides the reader with a clear and systematic basis from a veterinary pathologist's perspective. Therefore, the aims are (1) to introduce the reader to the necessary methodological background; (2) to introduce the sequential steps commonly performed in a microarray analysis including quality control, annotation, normalization, selection of differentially expressed genes, clustering, gene ontology and pathway analysis, analysis of manually selected genes, and biomarker discovery; and (3) to provide references to publicly available and user-friendly software suites. In summary, the data analysis methods presented within this review will enable veterinary pathologists to analyze high-throughput transcriptome data obtained from their own experiments, supplemental data that accompany scientific publications, or public repositories in order to obtain a more in-depth insight into underlying disease mechanisms.
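Of the sequential steps listed, selection of differentially expressed genes is the one most readily sketched in code. A minimal, hypothetical example for a single probe with normalized log2 intensities (standard library only; real analyses would use dedicated packages and derive p-values from a t-distribution):

```python
import math
from statistics import mean, variance

def log2_fc(case, control):
    # Inputs are assumed to be normalized log2 intensities,
    # so the fold change is a simple difference of means.
    return mean(case) - mean(control)

def welch_t(case, control):
    # Welch's t statistic for unequal variances; significance would
    # normally come from a t-distribution lookup, omitted here.
    se = math.sqrt(variance(case) / len(case) + variance(control) / len(control))
    return (mean(case) - mean(control)) / se

# Hypothetical log2 expression values for one probe across replicates
tumour = [8.1, 8.4, 8.3, 8.2]
normal = [6.0, 6.2, 5.9, 6.1]
print(log2_fc(tumour, normal))   # fold change on the log2 scale
print(welch_t(tumour, normal))   # large |t| suggests differential expression
```

Genes passing both a fold-change and a significance threshold would then feed the clustering and pathway-analysis steps described above.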
Green, Anthony P; Turner, Nicholas J; O'Reilly, Elaine
2014-01-01
The widespread application of ω-transaminases as biocatalysts for chiral amine synthesis has been hampered by fundamental challenges, including unfavorable equilibrium positions and product inhibition. Herein, an efficient process that allows reactions to proceed in high conversion in the absence of by-product removal using only one equivalent of a diamine donor (ortho-xylylenediamine) is reported. This operationally simple method is compatible with the most widely used (R)- and (S)-selective ω-TAs and is particularly suitable for the conversion of substrates with unfavorable equilibrium positions (e.g., 1-indanone). Significantly, spontaneous polymerization of the isoindole by-product generates colored derivatives, providing a high-throughput screening platform to identify desired ω-TA activity. PMID:25138082
STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation
2013-01-01
Background: Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high throughput datasets. However, we believe that researchers strive to test other hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results: As a consequence we have developed the method Statistical Tracking of Ontological Phrases (STOP), which expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, we find that the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion: Multiple ontologies have been developed for gene and protein annotation; by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text, we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/. PMID:23409969
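Enrichment analyses of this kind typically score each ontology term with a one-sided hypergeometric test: how surprising is the number of annotated genes in the submitted list, given the background? A minimal sketch (not the STOP implementation; the counts are hypothetical):

```python
from math import comb

def enrichment_pvalue(k, n, K, N):
    """One-sided hypergeometric p-value: probability of seeing k or
    more annotated genes in a list of n, when K of the N background
    genes carry the annotation."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(n, K) + 1)
    ) / comb(N, n)

# Hypothetical term: 5 of 20 list genes annotated,
# 100 of 10,000 background genes annotated.
p = enrichment_pvalue(5, 20, 100, 10000)
print(p < 0.001)  # → True: the term is strongly enriched
```

In practice such p-values are corrected for multiple testing (e.g. Benjamini-Hochberg) before terms are reported as enriched.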
A Robotic Platform for Quantitative High-Throughput Screening
Michael, Sam; Auld, Douglas; Klumpp, Carleen; Jadhav, Ajit; Zheng, Wei; Thorne, Natasha; Austin, Christopher P.; Inglese, James
2008-01-01
High-throughput screening (HTS) is increasingly being adopted in academic institutions, where the decoupling of screening and drug development has led to unique challenges, as well as novel uses of instrumentation, assay formulations, and software tools. Advances in technology have made automated unattended screening in the 1,536-well plate format broadly accessible and have further facilitated the exploration of new technologies and approaches to screening. A case in point is our recently developed quantitative HTS (qHTS) paradigm, which tests each library compound at multiple concentrations to construct concentration-response curves (CRCs) generating a comprehensive data set for each assay. The practical implementation of qHTS for cell-based and biochemical assays across libraries of > 100,000 compounds (e.g., between 700,000 and 2,000,000 sample wells tested) requires maximal efficiency and miniaturization and the ability to easily accommodate many different assay formats and screening protocols. Here, we describe the design and utilization of a fully integrated and automated screening system for qHTS at the National Institutes of Health's Chemical Genomics Center. We report system productivity, reliability, and flexibility, as well as modifications made to increase throughput, add additional capabilities, and address limitations. The combination of this system and qHTS has led to the generation of over 6 million CRCs from > 120 assays in the last 3 years and is a technology that can be widely implemented to increase efficiency of screening and lead generation. PMID:19035846
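The concentration-response curves central to qHTS are conventionally modeled with a four-parameter Hill (logistic) equation. A minimal sketch (not the screening center's fitting code; all parameter values are hypothetical):

```python
def hill(conc, bottom, top, ac50, slope):
    """Four-parameter Hill concentration-response model: response at
    concentration `conc` given the baseline (bottom), maximal response
    (top), half-maximal concentration (AC50), and Hill slope."""
    return bottom + (top - bottom) / (1.0 + (ac50 / conc) ** slope)

# A seven-point titration, as in a typical qHTS dilution series (M)
concs = [1e-9 * 5 ** i for i in range(7)]
responses = [hill(c, 0.0, 100.0, 1e-7, 1.2) for c in concs]
print(responses[0] < responses[-1])  # → True: response rises with dose
```

Fitting bottom, top, AC50, and slope to the observed well activities (e.g. by nonlinear least squares) yields the per-compound CRC parameters that such a screen reports.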
D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonkar, Vishwajit L
2014-01-01
Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or are not scalable. We developed a protocol for large-scale generation of neuronal stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.
Overcoming HERG affinity in the discovery of the CCR5 antagonist maraviroc.
Price, David A; Armour, Duncan; de Groot, Marcel; Leishman, Derek; Napier, Carolyn; Perros, Manos; Stammen, Blanda L; Wood, Anthony
2006-09-01
The discovery of maraviroc 17 is described with particular reference to the generation of high selectivity over affinity for the HERG potassium channel. This was achieved through the use of a high throughput binding assay for the HERG channel that is known to show an excellent correlation with functional effects.
MGIS: Managing banana (Musa spp.) genetic resources information and high-throughput genotyping data
USDA-ARS?s Scientific Manuscript database
Unraveling genetic diversity held in genebanks on a large scale is underway, due to the advances in Next-generation sequence-based technologies that produce high-density genetic markers for a large number of samples at low cost. Genebank users should be in a position to identify and select germplasm...
Species-specific predictive models of developmental toxicity using the ToxCast chemical library
EPA’s ToxCast™ project is profiling the in vitro bioactivity of chemicals to generate predictive models that correlate with observed in vivo toxicity. In vitro profiling methods are based on ToxCast data, consisting of over 600 high-throughput screening (HTS) and high-content sc...
Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs
NASA Astrophysics Data System (ADS)
Edgar, R. G.; Clark, M. A.; Dale, K.; Mitchell, D. A.; Ord, S. M.; Wayth, R. B.; Pfister, H.; Greenhill, L. J.
2010-10-01
The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB s⁻¹, grouped into 8 s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5 TFLOP s⁻¹ (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exa-scale facilities.
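A quick back-of-envelope check of the figures quoted above (treating GiB as 2^30 bytes) shows the arithmetic intensity that makes GPUs attractive for this pipeline; the calculation is illustrative only:

```python
# Figures from the text: 5 GiB/s raw data in 8 s cadences,
# with ~2.5 TFLOP/s sustained processing required in real time.
GIB = 2 ** 30
data_per_cadence_gib = 5 * 8            # 40 GiB arrives per 8 s batch
bytes_per_batch = data_per_cadence_gib * GIB
flops_per_batch = 2.5e12 * 8            # 2.0e13 FLOPs per batch to keep up
flops_per_byte = flops_per_batch / bytes_per_batch
print(round(flops_per_byte, 1))         # hundreds of FLOPs per input byte
```

Several hundred floating-point operations per input byte is a compute-bound workload, which is the regime where GPU throughput and FLOP-per-Watt advantages pay off.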
Cheminformatic Analysis of the US EPA ToxCast Chemical Library
The ToxCast project is employing high throughput screening (HTS) technologies, along with chemical descriptors and computational models, to develop approaches for screening and prioritizing environmental chemicals for further toxicity testing. ToxCast Phase I generated HTS data f...
Guimaraes, S; Pruvost, M; Daligault, J; Stoetzel, E; Bennett, E A; Côté, N M-L; Nicolas, V; Lalis, A; Denys, C; Geigl, E-M; Grange, T
2017-05-01
We present a cost-effective metabarcoding approach, aMPlex Torrent, which relies on an improved multiplex PCR adapted to highly degraded DNA, combining barcoding and next-generation sequencing to simultaneously analyse many heterogeneous samples. We demonstrate the strength of these improvements by generating a phylochronology through the genotyping of ancient rodent remains from a Moroccan cave whose stratigraphy covers the last 120 000 years. Rodents are important for epidemiology, agronomy and ecological investigations and can act as bioindicators for human- and/or climate-induced environmental changes. Efficient and reliable genotyping of ancient rodent remains has the potential to deliver valuable phylogenetic and paleoecological information. The analysis of multiple ancient skeletal remains of very small size with poor DNA preservation, however, requires a sensitive high-throughput method to generate sufficient data. We show this approach to be particularly well suited to accessing this otherwise difficult taxonomic and genetic resource. As a highly scalable, lower cost and less labour-intensive alternative to targeted sequence capture approaches, we propose the aMPlex Torrent strategy to be a useful tool for the genetic analysis of multiple degraded samples in studies involving ecology, archaeology, conservation and evolutionary biology. © 2016 John Wiley & Sons Ltd.
SUGAR: graphical user interface-based data refiner for high-throughput DNA sequencing.
Sato, Yukuto; Kojima, Kaname; Nariai, Naoki; Yamaguchi-Kabata, Yumi; Kawai, Yosuke; Takahashi, Mamoru; Mimori, Takahiro; Nagasaki, Masao
2014-08-08
Next-generation sequencers (NGSs) have become one of the main tools of current biology. To obtain useful insights from NGS data, it is essential to control low-quality portions of the data affected by technical errors such as air bubbles in sequencing fluidics. We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data through a user-friendly graphical user interface (GUI) with interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signals of technical errors during the sequencing. The sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or GUI-assisted operations implemented in SUGAR. The automated data-cleaning function based on sequence read quality (Phred) scores was applied to public whole human genome sequencing data, and we confirmed that the overall mapping quality was improved. The detailed data evaluation and cleaning enabled by SUGAR reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. The software will therefore be especially useful for controlling the quality of variant calls for low-abundance cell populations, e.g., cancers, in samples affected by technical errors of sequencing procedures.
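The Phred scores that drive SUGAR's automated cleaning are stored in FASTQ files as ASCII characters with a fixed offset (33 in the common Sanger/Illumina 1.8+ encoding). A minimal sketch of the decoding and a mean-quality read filter (illustrative only; SUGAR's actual criterion is subtile-based):

```python
def phred_scores(quality_string, offset=33):
    """Decode a FASTQ quality string (Phred+33 by default)
    into a list of integer Phred scores."""
    return [ord(c) - offset for c in quality_string]

def passes_filter(quality_string, min_mean_q=20):
    """Keep a read only if its mean Phred score meets the threshold
    (Q20 corresponds to a 1% per-base error probability)."""
    scores = phred_scores(quality_string)
    return sum(scores) / len(scores) >= min_mean_q

print(passes_filter("IIIIIIII"))  # 'I' decodes to Q40 → True
print(passes_filter("!!!!!!!!"))  # '!' decodes to Q0  → False
```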
Micro-patterned agarose gel devices for single-cell high-throughput microscopy of E. coli cells.
Priest, David G; Tanaka, Nobuyuki; Tanaka, Yo; Taniguchi, Yuichi
2017-12-21
High-throughput microscopy of bacterial cells has elucidated fundamental cellular processes including cellular heterogeneity and cell division homeostasis. Polydimethylsiloxane (PDMS)-based microfluidic devices provide advantages including precise positioning of cells and throughput; however, device fabrication is time-consuming and requires specialised skills. Agarose pads are a popular alternative, however cells often clump together, which hinders single cell quantitation. Here, we imprint agarose pads with micro-patterned 'capsules' to trap individual cells, and 'lines' to direct cellular growth outwards in a straight line. We implement this micro-patterning into multi-pad devices called CapsuleHotel and LineHotel for high-throughput imaging. CapsuleHotel provides ~65,000 capsule structures per mm² that isolate individual Escherichia coli cells. In contrast, LineHotel provides ~300 line structures per mm that direct growth of micro-colonies. With CapsuleHotel, a quantitative single cell dataset of ~10,000 cells across 24 samples can be acquired and analysed in under 1 hour. LineHotel allows tracking growth of > 10 micro-colonies across 24 samples simultaneously for up to 4 generations. These easy-to-use devices can be provided in kit format, and will accelerate discoveries in diverse fields ranging from microbiology to systems and synthetic biology.
SVS: data and knowledge integration in computational biology.
Zycinski, Grzegorz; Barla, Annalisa; Verri, Alessandro
2011-01-01
In this paper we present a framework for structured variable selection (SVS). The main concept of the proposed schema is to take a step towards the integration of two different aspects of data mining: the database and machine learning perspectives. The framework is flexible enough to use not only microarray data, but other high-throughput data of choice (e.g., mass spectrometry or next-generation sequencing). Moreover, the feature selection phase incorporates prior biological knowledge in a modular way from various repositories and is ready to host different statistical learning techniques. We present a proof of concept of SVS, illustrating some implementation details and describing current results on high-throughput microarray data.
High-throughput ab-initio dilute solute diffusion database
Wu, Henry; Mayeshiba, Tam; Morgan, Dane
2016-01-01
We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308
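The weighted activation-barrier RMS error quoted above is a standard way to summarize agreement between calculated and experimental values; a generic sketch with hypothetical numbers (not the study's actual data):

```python
import math

def weighted_rms_error(predicted, observed, weights):
    """Weighted RMS error, e.g. between DFT-calculated and
    experimental activation barriers (in eV)."""
    num = sum(w * (p - o) ** 2 for p, o, w in zip(predicted, observed, weights))
    return math.sqrt(num / sum(weights))

# Hypothetical activation barriers in eV, with one value
# weighted more heavily (e.g. a better-characterized system).
pred = [1.10, 0.85, 1.40]
obs = [1.00, 0.90, 1.30]
w = [1.0, 2.0, 1.0]
print(round(weighted_rms_error(pred, obs, w), 3))  # → 0.079
```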
Wright, Imogen A; Travers, Simon A
2014-07-01
The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
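The distinction RAMICS draws between genuine codon-sized indels and frameshift mutations, and its discovery of each read's reading frame, can be illustrated crudely. The real tool uses profile hidden Markov models; the stop-codon heuristic below is only a stand-in:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def classify_indel(length):
    """Indels whose length is a multiple of 3 preserve the reading
    frame (codon-sized); any other length shifts it (frameshift)."""
    return "codon-sized" if length % 3 == 0 else "frameshift"

def best_reading_frame(seq):
    """Pick the frame (0, 1, or 2) with the fewest internal stop
    codons; a crude stand-in for profile-HMM frame discovery."""
    def stop_count(frame):
        codons = (seq[i:i + 3] for i in range(frame, len(seq) - 2, 3))
        return sum(c in STOP_CODONS for c in codons)
    return min(range(3), key=stop_count)

print(classify_indel(3))                    # → codon-sized
print(classify_indel(4))                    # → frameshift
print(best_reading_frame("TATGAAAGCTAA"))   # → 1 (only frame 1 is stop-free)
```

Treating a 3n-length gap as a biological insertion or deletion, but a non-3n gap as a likely sequencing error, is what keeps the resulting alignments biologically meaningful in coding regions.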
A new fungal large subunit ribosomal RNA primer for high throughput sequencing surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Rebecca C.; Gallegos-Graves, La Verne; Kuske, Cheryl R.
2015-12-09
The inclusion of phylogenetic metrics in community ecology has provided insights into important ecological processes, particularly when combined with high-throughput sequencing methods; however, these approaches have not been widely used in studies of fungal communities relative to other microbial groups. Two obstacles have been considered: (1) the internal transcribed spacer (ITS) region has limited utility for constructing phylogenies and (2) most PCR primers that target the large subunit (LSU) ribosomal unit generate amplicons that exceed current limits of high-throughput sequencing platforms. We designed and tested a PCR primer (LR22R) to target an approximately 300–400 bp region of the D2 hypervariable region of the fungal LSU for use with the Illumina MiSeq platform. Both in silico and empirical analyses showed that the LR22R–LR3 pair captured a broad range of fungal taxonomic groups with a small fraction of non-fungal groups. Phylogenetic placement of publicly available LSU D2 sequences showed broad agreement with taxonomic classification. Comparisons of the LSU D2 and the ITS2 ribosomal regions from environmental samples and known communities showed similar discriminatory abilities of the two primer sets. Altogether, these findings show that the LR22R–LR3 primer pair has utility for phylogenetic analyses of fungal communities using high-throughput sequencing methods.
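The in silico primer evaluation described above reduces to counting template positions not covered by the (possibly degenerate) primer bases. A sketch using the IUPAC nucleotide ambiguity codes (the primer shown is hypothetical, not the LR22R sequence, which is not given in the text):

```python
# IUPAC nucleotide ambiguity codes: each symbol covers a set of bases.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def primer_mismatches(primer, site):
    """Count positions where the template site is not covered by the
    corresponding (possibly degenerate) primer base."""
    return sum(base not in IUPAC[p] for p, base in zip(primer, site))

# Hypothetical degenerate primer tested against two template sites
print(primer_mismatches("ACRGT", "ACAGT"))  # R covers A → 0
print(primer_mismatches("ACRGT", "ACCGT"))  # R does not cover C → 1
```

Scanning such a mismatch count across reference sequences, and tallying which taxa fall under a mismatch threshold, is the essence of in silico primer coverage testing.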
Demonstration of lithography patterns using reflective e-beam direct write
NASA Astrophysics Data System (ADS)
Freed, Regina; Sun, Jeff; Brodie, Alan; Petric, Paul; McCord, Mark; Ronse, Kurt; Haspeslagh, Luc; Vereecke, Bart
2011-04-01
Traditionally, e-beam direct write lithography has been too slow for most lithography applications. E-beam direct write lithography has been used for mask writing rather than wafer processing because maximum blur requirements limit column beam current, which drives e-beam throughput. Printing small features at a fine pitch with an e-beam tool requires a sacrifice in processing time unless the total number of beams on a single writing tool is significantly increased. Because of the uncertainty with regards to the optical lithography roadmap beyond the 22 nm technology node, the semiconductor equipment industry is in the process of designing and testing e-beam lithography tools with the potential for high volume wafer processing. For this work, we report on the development and current status of a new maskless, direct write e-beam lithography tool which has the potential for high volume lithography at and below the 22 nm technology node. A Reflective Electron Beam Lithography (REBL) tool is being developed for high throughput electron beam direct write maskless lithography. The system is targeting critical patterning steps at the 22 nm node and beyond at a capital cost equivalent to conventional lithography. Reflective Electron Beam Lithography incorporates a number of novel technologies to generate and expose lithographic patterns with a throughput and footprint comparable to current 193 nm immersion lithography systems. A patented, reflective electron optic or Digital Pattern Generator (DPG) enables the unique approach. The Digital Pattern Generator is a CMOS ASIC chip with an array of small, independently controllable lens elements (lenslets), which act as an array of electron mirrors. In this way, the REBL system is capable of generating the pattern to be written using massively parallel exposure by ~1 million beams at extremely high data rates (~1 Tbps).
A rotary stage concept using a rotating platen carrying multiple wafers optimizes the writing strategy of the DPG to achieve high throughput for sparse pattern wafer levels. The lens elements on the DPG are fabricated at IMEC (Leuven, Belgium) under IMEC's CMORE program. The CMOS-fabricated DPG contains ~1,000,000 lens elements, allowing for 1,000,000 individually controllable beamlets. A single lens element consists of 5 electrodes, each of which can be set at controlled voltage levels to either absorb or reflect the electron beam. A system using a linear movable stage and the DPG integrated into the electron optics module was used to expose patterns on device-representative wafers. Results of these exposure tests are discussed.
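As a rough illustration of the throughput these numbers imply, the sketch below combines the beam count and aggregate data rate quoted above with assumed values for pixel size and wafer diameter (neither is stated in the abstract, and one bit per pixel is assumed):

```python
# Back-of-envelope REBL throughput estimate. Beam count and data rate come
# from the abstract; pixel size, wafer diameter, and 1 bit/pixel are
# illustrative assumptions, not figures from the paper.
import math

N_BEAMLETS = 1_000_000        # ~1 million beamlets (abstract)
DATA_RATE_BPS = 1e12          # ~1 Tbps aggregate data rate (abstract)
PIXEL_NM = 10.0               # assumed pixel size (nm) for a 22 nm node pattern
WAFER_D_MM = 300.0            # assumed 300 mm wafer

pixels_per_wafer = (math.pi * (WAFER_D_MM * 1e6 / 2) ** 2) / PIXEL_NM ** 2
seconds_per_wafer = pixels_per_wafer / DATA_RATE_BPS  # 1 bit per pixel assumed
wafers_per_hour = 3600 / seconds_per_wafer
print(f"{pixels_per_wafer:.2e} pixels, {seconds_per_wafer:.0f} s/wafer, "
      f"{wafers_per_hour:.1f} wafers/hour")
```

Under these assumptions the data path, not the stage, is the bottleneck: a full-wafer exposure is on the order of hundreds of seconds, which is why sparse levels and the rotary multi-wafer platen matter for throughput.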
GENETIC-BASED ANALYTICAL METHODS FOR BACTERIA AND FUNGI
In the past two decades, advances in high-throughput sequencing technologies have led to a veritable explosion in the generation of nucleic acid sequence information (1). While these advances are illustrated most prominently by the successful sequencing of the human genome, they...
Discovery of 100K SNP array and its utilization in sugarcane
USDA-ARS's Scientific Manuscript database
Next generation sequencing (NGS) enables us to identify thousands of single nucleotide polymorphism (SNP) markers for genotyping and fingerprinting. However, the process requires very precise bioinformatics analysis and filtering. High throughput SNP array with predefined genomic location co...
Adverse outcome pathways (AOPs): A framework to support predictive toxicology
High-throughput and in silico methods are providing the regulatory toxicology community with the capacity to rapidly and cost-effectively generate data concerning a chemical’s ability to initiate one or more biological perturbations that may culminate in an adverse ecological o...
Phenotypic mutant library: potential for gene discovery
USDA-ARS's Scientific Manuscript database
The rapid development of high throughput and affordable Next-Generation Sequencing (NGS) techniques has renewed interest in gene discovery using forward genetics. The conventional forward genetic approach starts with isolation of mutants with a phenotype of interest, mapping the mutation within a s...
Analysis, annotation, and profiling of the oat seed transcriptome
USDA-ARS's Scientific Manuscript database
Novel high-throughput next generation sequencing (NGS) technologies are providing opportunities to explore genomes and transcriptomes in a cost-effective manner. To construct a gene expression atlas of developing oat (Avena sativa) seeds, two software packages specifically designed for RNA-seq (Trin...
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy.
Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob
2013-03-10
In the discovery of new enzymes, genomic and cDNA expression libraries containing thousands of different clones are generated to obtain biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow for high-throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix, based on mass spectrometry techniques. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS(2) approach including MS(2) protein identification was developed for microorganisms with a complex protein secretome, with a throughput of ∼2000 samples per week. TCA-induced protein precipitation, enhanced by addition of bovine serum albumin, is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detection of successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.
Pan, Yuchen; Sackmann, Eric K; Wypisniak, Karolina; Hornsby, Michael; Datwani, Sammy S; Herr, Amy E
2016-12-23
High-quality immunoreagents enhance the performance and reproducibility of immunoassays and, in turn, the quality of both biological and clinical measurements. High quality recombinant immunoreagents are generated using antibody-phage display. One metric of antibody quality - the binding affinity - is quantified through the dissociation constant (K D ) of each recombinant antibody and the target antigen. To characterize the K D of recombinant antibodies and target antigen, we introduce affinity electrophoretic mobility shift assays (EMSAs) in a high-throughput format suitable for small volume samples. A microfluidic card comprised of free-standing polyacrylamide gel (fsPAG) separation lanes supports 384 concurrent EMSAs in 30 s using a single power source. Sample is dispensed onto the microfluidic EMSA card by acoustic droplet ejection (ADE), which reduces EMSA variability compared to sample dispensing using manual or pin tools. The K D for each of a six-member fragment antigen-binding fragment library is reported using ~25-fold less sample mass and ~5-fold less time than conventional heterogeneous assays. Given the form factor and performance of this micro- and mesofluidic workflow, we have developed a sample-sparing, high-throughput, solution-phase alternative for biomolecular affinity characterization.
SapTrap, a Toolkit for High-Throughput CRISPR/Cas9 Gene Modification in Caenorhabditis elegans.
Schwartz, Matthew L; Jorgensen, Erik M
2016-04-01
In principle, clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 allows genetic tags to be inserted at any locus. However, throughput is limited by the laborious construction of repair templates and guide RNA constructs and by the identification of modified strains. We have developed a reagent toolkit and plasmid assembly pipeline, called "SapTrap," that streamlines the production of targeting vectors for tag insertion, as well as the selection of modified Caenorhabditis elegans strains. SapTrap is a high-efficiency modular plasmid assembly pipeline that produces single plasmid targeting vectors, each of which encodes both a guide RNA transcript and a repair template for a particular tagging event. The plasmid is generated in a single tube by cutting modular components with the restriction enzyme SapI, which are then "trapped" in a fixed order by ligation to generate the targeting vector. A library of donor plasmids supplies a variety of protein tags, a selectable marker, and regulatory sequences that allow cell-specific tagging at either the N or the C termini. All site-specific sequences, such as guide RNA targeting sequences and homology arms, are supplied as annealed synthetic oligonucleotides, eliminating the need for PCR or molecular cloning during plasmid assembly. Each tag includes an embedded Cbr-unc-119 selectable marker that is positioned to allow concurrent expression of both the tag and the marker. We demonstrate that SapTrap targeting vectors direct insertion of 3- to 4-kb tags at six different loci in 10-37% of injected animals. Thus SapTrap vectors introduce the possibility for high-throughput generation of CRISPR/Cas9 genome modifications. Copyright © 2016 by the Genetics Society of America.
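The "trapping" step described above works because Type IIS enzymes such as SapI leave short sticky-end overhangs that dictate a fixed ligation order. A toy simulation of that idea (part names and overhang sequences are invented for illustration, not taken from the SapTrap kit):

```python
# Toy simulation of overhang-directed ("trapped") assembly: each part carries
# 5' and 3' overhangs left by a Type IIS digest; parts can only ligate where
# overhangs match, which fixes the final order regardless of input order.
def assemble(parts):
    """Chain parts together by matching each 3' overhang to a 5' overhang."""
    by_left = {p["left"]: p for p in parts}
    rights = {p["right"] for p in parts}
    # The starting part's left overhang is not produced by any other part.
    start = next(p for p in parts if p["left"] not in rights)
    chain = [start]
    while chain[-1]["right"] in by_left:
        chain.append(by_left[chain[-1]["right"]])
    return [p["name"] for p in chain]

parts = [  # deliberately shuffled; overhangs dictate the assembled order
    {"name": "repair_template", "left": "TCC", "right": "GGA"},
    {"name": "promoter",        "left": "AAC", "right": "TGG"},
    {"name": "sgRNA_cassette",  "left": "TGG", "right": "TCC"},
]
print(assemble(parts))  # parts fall into their overhang-defined order
```

Because the order is encoded in the overhangs rather than in the pipetting sequence, all modules can be cut and ligated in a single tube, which is the property SapTrap exploits.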
Carmona, Santiago J.; Nielsen, Morten; Schafer-Nielsen, Claus; Mucci, Juan; Altcheh, Jaime; Balouz, Virginia; Tekiel, Valeria; Frasch, Alberto C.; Campetella, Oscar; Buscaglia, Carlos A.; Agüero, Fernán
2015-01-01
Complete characterization of antibody specificities associated to natural infections is expected to provide a rich source of serologic biomarkers with potential applications in molecular diagnosis, follow-up of chemotherapeutic treatments, and prioritization of targets for vaccine development. Here, we developed a highly-multiplexed platform based on next-generation high-density peptide microarrays to map these specificities in Chagas Disease, an exemplar of a human infectious disease caused by the protozoan Trypanosoma cruzi. We designed a high-density peptide microarray containing more than 175,000 overlapping 15mer peptides derived from T. cruzi proteins. Peptides were synthesized in situ on microarray slides, spanning the complete length of 457 parasite proteins with fully overlapped 15mers (1 residue shift). Screening of these slides with antibodies purified from infected patients and healthy donors demonstrated both a high technical reproducibility as well as epitope mapping consistency when compared with earlier low-throughput technologies. Using a conservative signal threshold to classify positive (reactive) peptides we identified 2,031 disease-specific peptides and 97 novel parasite antigens, effectively doubling the number of known antigens and providing a 10-fold increase in the number of fine mapped antigenic determinants for this disease. Finally, further analysis of the chip data showed that optimizing the amount of sequence overlap of displayed peptides can increase the protein space covered in a single chip by at least ∼threefold without sacrificing sensitivity. In conclusion, we show the power of high-density peptide chips for the discovery of pathogen-specific linear B-cell epitopes from clinical samples, thus setting the stage for high-throughput biomarker discovery screenings and proteome-wide studies of immune responses against pathogens. PMID:25922409
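The tiling scheme underlying the array design is simple to state in code: fully overlapping 15-mers at a 1-residue shift across each protein. A minimal sketch (the sequence below is a made-up example, not a T. cruzi protein):

```python
# Generate overlapping k-mer peptides spanning a protein, as in the
# high-density peptide microarray design described above.
def tile_peptides(protein, k=15, shift=1):
    """Return all k-mer peptides covering `protein` at the given shift."""
    return [protein[i:i + k] for i in range(0, len(protein) - k + 1, shift)]

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # 33-residue toy sequence
peptides = tile_peptides(seq)
print(len(peptides))  # 33 - 15 + 1 = 19 peptides
```

Raising the shift (e.g. `tile_peptides(seq, shift=4)`) shrinks the number of spots per protein, which is the overlap-versus-chip-capacity tradeoff the abstract's ~threefold coverage estimate refers to.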
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing func...
Chiaraviglio, Lucius; Kang, Yoon-Suk; Kirby, James E.
2016-01-01
Traditional measures of intracellular antimicrobial activity and eukaryotic cell cytotoxicity rely on endpoint assays. Such endpoint assays require several additional experimental steps prior to readout, such as cell lysis, colony forming unit determination, or reagent addition. When performing thousands of assays, for example, during high-throughput screening, the downstream effort required for these types of assays is considerable. Therefore, to facilitate high-throughput antimicrobial discovery, we developed a real-time assay to simultaneously identify inhibitors of intracellular bacterial growth and assess eukaryotic cell cytotoxicity. Specifically, real-time intracellular bacterial growth detection was enabled by marking bacterial screening strains with either a bacterial lux operon (1st generation assay) or fluorescent protein reporters (2nd generation, orthogonal assay). A non-toxic, cell membrane-impermeant, nucleic acid-binding dye was also added during initial infection of macrophages. These dyes are excluded from viable cells. However, non-viable host cells lose membrane integrity permitting entry and fluorescent labeling of nuclear DNA (deoxyribonucleic acid). Notably, DNA binding is associated with a large increase in fluorescent quantum yield that provides a solution-based readout of host cell death. We have used this combined assay to perform a high-throughput screen in microplate format, and to assess intracellular growth and cytotoxicity by microscopy. Notably, antimicrobials may demonstrate synergy in which the combined effect of two or more antimicrobials when applied together is greater than when applied separately. Testing for in vitro synergy against intracellular pathogens is normally a prodigious task as combinatorial permutations of antibiotics at different concentrations must be assessed. However, we found that our real-time assay combined with automated, digital dispensing technology permitted facile synergy testing. 
Using these approaches, we were able to systematically survey action of a large number of antimicrobials alone and in combination against the intracellular pathogen, Legionella pneumophila. PMID:27911388
High-throughput screening and small animal models, where are we?
Giacomotto, Jean; Ségalat, Laurent
2010-01-01
Current high-throughput screening methods for drug discovery rely on the existence of targets. Moreover, most of the hits generated during screenings turn out to be invalid after further testing in animal models. To bypass these limitations, efforts are now being made to screen chemical libraries on whole animals. One of the most commonly used animal models in biology is the murine model Mus musculus. However, its cost limits its use in large-scale therapeutic screening. In contrast, the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster, and the fish Danio rerio are gaining momentum as screening tools. These organisms combine genetic amenability, low cost, and culture conditions that are compatible with large-scale screens. Their main advantage is to allow high-throughput screening in a whole-animal context. Moreover, their use is not dependent on the prior identification of a target and permits the selection of compounds with an improved safety profile. This review surveys the versatility of these animal models for drug discovery and discusses the options available to date. PMID:20423335
Research progress of plant population genomics based on high-throughput sequencing.
Wang, Yun-sheng
2016-08-01
Population genomics, a new paradigm for population genetics, combines the concepts and techniques of genomics with the theoretical framework of population genetics, improving our understanding of microevolution through the identification of site-specific and genome-wide effects using genome-wide polymorphic site genotyping. With the appearance and improvement of next-generation high-throughput sequencing technology, the number of plant species with complete genome sequences has increased rapidly, and large-scale resequencing has also been carried out in recent years. Parallel sequencing has also been done in some plant species without complete genome sequences. These studies have greatly promoted the development of population genomics and deepened our understanding of the genetic diversity, level of linkage disequilibrium, selection effects, demographic history and molecular mechanisms of complex traits of the relevant plant populations at a genomic level. In this review, I briefly introduce the concept and research methods of population genomics and summarize the research progress of plant population genomics based on high-throughput sequencing. I also discuss the prospects as well as existing problems of plant population genomics in order to provide references for related studies.
NASA Astrophysics Data System (ADS)
Pfeiffer, Hans
1995-12-01
IBM's high-throughput e-beam stepper approach, PRojection Exposure with Variable Axis Immersion Lenses (PREVAIL), is reviewed. The PREVAIL concept combines technology building blocks of our probe-forming EL-3 and EL-4 systems with the exposure efficiency of pattern projection. The technology represents an extension of the shaped-beam approach toward massively parallel pixel projection. As demonstrated, the use of variable-axis lenses can provide large field coverage through reduction of the off-axis aberrations which limit the performance of conventional projection systems. Subfield pattern sections containing 10⁷ or more pixels can be electronically selected (mask plane), projected and positioned (wafer plane) at high speed. To generate the entire chip pattern, subfields must be stitched together sequentially in a combination of electronic and mechanical positioning of mask and wafer. The PREVAIL technology promises throughput levels competitive with those of optical steppers at superior resolution. The PREVAIL project is being pursued to demonstrate the viability of the technology and to develop an e-beam alternative to “suboptical” lithography.
Bell, Robert T; Jacobs, Alan G; Sorg, Victoria C; Jung, Byungki; Hill, Megan O; Treml, Benjamin E; Thompson, Michael O
2016-09-12
A high-throughput method for characterizing the temperature dependence of material properties following microsecond to millisecond thermal annealing, exploiting the temperature gradients created by a lateral gradient laser spike anneal (lgLSA), is presented. Laser scans generate spatial thermal gradients of up to 5 °C/μm with peak temperatures ranging from ambient to in excess of 1400 °C, limited only by laser power and materials thermal limits. Discrete spatial property measurements across the temperature gradient are then equivalent to independent measurements after varying temperature anneals. Accurate temperature calibrations, essential to quantitative analysis, are critical and methods for both peak temperature and spatial/temporal temperature profile characterization are presented. These include absolute temperature calibrations based on melting and thermal decomposition, and time-resolved profiles measured using platinum thermistors. A variety of spatially resolved measurement probes, ranging from point-like continuous profiling to large area sampling, are discussed. Examples from annealing of III-V semiconductors, CdSe quantum dots, low-κ dielectrics, and block copolymers are included to demonstrate the flexibility, high throughput, and precision of this technique.
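The core idea of the lgLSA method is that a known lateral thermal gradient converts position into anneal temperature, so one scan yields an entire temperature series. A minimal sketch with illustrative numbers (the abstract quotes gradients up to ~5 °C/μm and peak temperatures up to ~1400 °C; the probe spacing is an assumption):

```python
# Map lateral measurement positions to peak anneal temperatures under a
# linear thermal gradient, as in lateral-gradient laser spike annealing.
GRADIENT_C_PER_UM = 5.0   # lateral gradient (abstract's upper figure)
T_PEAK_C = 1400.0         # peak temperature at the stripe center

def anneal_temperature(offset_um):
    """Peak temperature experienced at a lateral offset from the stripe center."""
    return T_PEAK_C - GRADIENT_C_PER_UM * offset_um

# Probing every 20 um turns one scan into a 100 C-per-point temperature sweep.
series = [anneal_temperature(x) for x in range(0, 101, 20)]
print(series)  # [1400.0, 1300.0, 1200.0, 1100.0, 1000.0, 900.0]
```

This is why the temperature calibration discussed above is critical: any error in the assumed gradient or peak temperature propagates linearly into every point of the inferred anneal series.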
Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.
van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon
2018-03-01
Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.
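The alkaline phosphatase example illustrates why a calibrated calorimeter can track a full progress curve: with product inhibition, the reaction rate, and hence the instantaneous heat signal, falls off as phosphate accumulates. A minimal kinetic sketch with invented constants (Michaelis-Menten with competitive product inhibition, simple Euler integration):

```python
# Illustrative progress-curve model: Michaelis-Menten kinetics with
# competitive product inhibition (as for alkaline phosphatase and phosphate).
# The heat signal a calorimeter reads is proportional to the rate. All
# constants below are invented for the sketch.
VMAX, KM, KI, DH = 1.0, 0.5, 0.2, -40.0  # uM/s, uM, uM, kJ/mol (assumed)

def progress_curve(s0, dt=0.01, t_end=10.0):
    """Integrate substrate/product over time; return (t, product, heat rate)."""
    s, p, t, out = s0, 0.0, 0.0, []
    while t < t_end:
        rate = VMAX * s / (KM * (1 + p / KI) + s)  # product-inhibited MM rate
        s, p, t = s - rate * dt, p + rate * dt, t + dt
        out.append((t, p, DH * rate))  # heat rate is negative: exothermic
    return out

curve = progress_curve(s0=5.0)
# Product inhibition makes the heat signal decay faster than plain MM would.
```

Matching such a simulated curve to the calorimeter trace, as the authors did against UV-visible spectroscopy, is what validates the calibration.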
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhabra, S.R.; Butland, G.; Elias, D.
The ability to conduct advanced functional genomic studies of the thousands of sequenced bacteria has been hampered by the lack of available tools for making high-throughput chromosomal manipulations in a systematic manner that can be applied across diverse species. In this work, we highlight the use of synthetic biological tools to assemble custom suicide vectors with reusable and interchangeable DNA “parts” to facilitate chromosomal modification at designated loci. These constructs enable an array of downstream applications including gene replacement and creation of gene fusions with affinity purification or localization tags. We employed this approach to engineer chromosomal modifications in a bacterium that has previously proven difficult to manipulate genetically, Desulfovibrio vulgaris Hildenborough, to generate a library of over 700 strains. Furthermore, we demonstrate how these modifications can be used for examining metabolic pathways, protein-protein interactions, and protein localization. The ubiquity of suicide constructs in gene replacement throughout biology suggests that this approach can be applied to engineer a broad range of species for a diverse array of systems biological applications and is amenable to high-throughput implementation.
A high-throughput method for GMO multi-detection using a microfluidic dynamic array.
Brod, Fábio Cristiano Angonesi; van Dijk, Jeroen P; Voorhuijzen, Marleen M; Dinon, Andréia Zilio; Guimarães, Luis Henrique S; Scholtens, Ingrid M J; Arisi, Ana Carolina Maisonnave; Kok, Esther J
2014-02-01
The ever-increasing production of genetically modified crops generates a demand for high-throughput DNA-based methods for the enforcement of genetically modified organism (GMO) labelling requirements. The application of standard real-time PCR will become increasingly costly with the growth of the number of GMOs that is potentially present in an individual sample. The present work presents the results of an innovative approach in genetically modified crop analysis by DNA-based methods: the use of a microfluidic dynamic array as a high-throughput multi-detection system. In order to evaluate the system, six test samples with an increasing degree of complexity were prepared, preamplified and subsequently analysed in the Fluidigm system. Twenty-eight assays targeting different DNA elements, GM events and species-specific reference genes were used in the experiment. The large majority of the assays tested presented expected results. The power of low-level detection was assessed, and elements present at concentrations as low as 0.06 % were successfully detected. The approach proposed in this work presents the Fluidigm system as a suitable and promising platform for GMO multi-detection.
High-Throughput and Cost-Effective Characterization of Induced Pluripotent Stem Cells.
D'Antonio, Matteo; Woodruff, Grace; Nathanson, Jason L; D'Antonio-Chronowska, Agnieszka; Arias, Angelo; Matsui, Hiroko; Williams, Roy; Herrera, Cheryl; Reyna, Sol M; Yeo, Gene W; Goldstein, Lawrence S B; Panopoulos, Athanasia D; Frazer, Kelly A
2017-04-11
Reprogramming somatic cells to induced pluripotent stem cells (iPSCs) offers the possibility of studying the molecular mechanisms underlying human diseases in cell types difficult to extract from living patients, such as neurons and cardiomyocytes. To date, studies have been published that use small panels of iPSC-derived cell lines to study monogenic diseases. However, to study complex diseases, where the genetic variation underlying the disorder is unknown, a sizable number of patient-specific iPSC lines and controls need to be generated. Currently the methods for deriving and characterizing iPSCs are time consuming, expensive, and, in some cases, descriptive but not quantitative. Here we set out to develop a set of simple methods that reduce cost and increase throughput in the characterization of iPSC lines. Specifically, we outline methods for high-throughput quantification of surface markers, gene expression analysis of in vitro differentiation potential, and evaluation of karyotype with markedly reduced cost. Published by Elsevier Inc.
Cheng, Chialin; Fass, Daniel M; Folz-Donahue, Kat; MacDonald, Marcy E; Haggarty, Stephen J
2017-01-11
Reprogramming of human somatic cells into induced pluripotent stem (iPS) cells has greatly expanded the set of research tools available to investigate the molecular and cellular mechanisms underlying central nervous system (CNS) disorders. Realizing the promise of iPS cell technology for the identification of novel therapeutic targets and for high-throughput drug screening requires implementation of methods for the large-scale production of defined CNS cell types. Here we describe a protocol for generating stable, highly expandable, iPS cell-derived CNS neural progenitor cells (NPC) using multi-dimensional fluorescence activated cell sorting (FACS) to purify NPC defined by cell surface markers. In addition, we describe a rapid, efficient, and reproducible method for generating excitatory cortical-like neurons from these NPC through inducible expression of the pro-neural transcription factor Neurogenin 2 (iNgn2-NPC). Finally, we describe methodology for the use of iNgn2-NPC for probing human neuroplasticity and mechanisms underlying CNS disorders using high-content, single-cell-level automated microscopy assays. Copyright © 2017 John Wiley & Sons, Inc.
Beeman, Katrin; Baumgärtner, Jens; Laubenheimer, Manuel; Hergesell, Karlheinz; Hoffmann, Martin; Pehl, Ulrich; Fischer, Frank; Pieck, Jan-Carsten
2017-12-01
Mass spectrometry (MS) is known for its label-free detection of substrates and products from a variety of enzyme reactions. Recent hardware improvements have increased interest in the use of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS for high-throughput drug discovery. Despite interest in this technology, several challenges remain and must be overcome before MALDI-MS can be integrated as an automated "in-line reader" for high-throughput drug discovery. Two such hurdles include in situ sample processing and deposition, as well as integration of MALDI-MS for enzymatic screening assays that usually contain high levels of MS-incompatible components. Here we adapt our c-MET kinase assay to optimize for MALDI-MS compatibility and test its feasibility for compound screening. The pros and cons of the Echo (Labcyte) as a transfer system for in situ MALDI-MS sample preparation are discussed. We demonstrate that this method generates robust data in a 1536-grid format. We use the MALDI-MS to directly measure the ratio of c-MET substrate and phosphorylated product to acquire IC50 curves and demonstrate that the pharmacology is unaffected. The resulting IC50 values correlate well between the common label-based capillary electrophoresis and the label-free MALDI-MS detection method. We predict that label-free MALDI-MS-based high-throughput screening will become increasingly important and more widely used for drug discovery.
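The ratio-based readout described above can be sketched in a few lines: percent conversion is the product intensity over the summed substrate and product intensities, and an IC50 follows from a logistic fit across inhibitor concentrations. The intensities, concentrations, and the plain grid-search fit below are illustrative stand-ins, not the authors' analysis pipeline:

```python
# Turn substrate/product MS intensities into an IC50: compute percent
# conversion, then fit a descending logistic (Hill slope 1) by coarse
# grid search over log10(IC50). Pure Python to stay dependency-free;
# all numbers are synthetic.
def conversion(sub, prod):
    """Fraction of substrate converted, from the two MS peak intensities."""
    return prod / (sub + prod)

def fit_ic50(concs, responses):
    """Grid-search IC50 minimizing squared error of a descending logistic."""
    top, bottom = max(responses), min(responses)
    best = (float("inf"), None)
    for log_ic50 in [x / 100.0 for x in range(-300, 301)]:  # 1 nM..1 mM in uM
        ic50 = 10.0 ** log_ic50
        sse = sum((r - (bottom + (top - bottom) / (1 + c / ic50))) ** 2
                  for c, r in zip(concs, responses))
        if sse < best[0]:
            best = (sse, ic50)
    return best[1]

concs = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]  # uM inhibitor (synthetic)
resp = [conversion(s, p) for s, p in
        [(20, 80), (22, 78), (33, 67), (50, 50), (75, 25), (90, 10), (96, 4)]]
print(round(fit_ic50(concs, resp), 2))
```

In practice a four-parameter fit with a free Hill slope (e.g. via a nonlinear least-squares routine) would replace the grid search; the point here is only that the label-free ratio feeds directly into standard dose-response pharmacology.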
Okagbare, Paul I.; Soper, Steven A.
2011-01-01
Microfluidics represents a viable platform for performing High Throughput Screening (HTS) due to its ability to automate fluid handling and generate fluidic networks with high number densities over small footprints appropriate for the simultaneous optical interrogation of many screening assays. While most HTS campaigns depend on fluorescence, readers typically use point detection and serially address the assay results significantly lowering throughput or detection sensitivity due to a low duty cycle. To address this challenge, we present here the fabrication of a high density microfluidic network packed into the imaging area of a large field-of-view (FoV) ultrasensitive fluorescence detection system. The fluidic channels were 1, 5 or 10 μm (width), 1 μm (depth) with a pitch of 1–10 μm and each fluidic processor was individually addressable. The fluidic chip was produced from a molding tool using hot embossing and thermal fusion bonding to enclose the fluidic channels. A 40X microscope objective (numerical aperture = 0.75) created a FoV of 200 μm, providing the ability to interrogate ~25 channels using the current fluidic configuration. An ultrasensitive fluorescence detection system with a large FoV was used to transduce fluorescence signals simultaneously from each fluidic processor onto the active area of an electron multiplying charge-coupled device (EMCCD). The utility of these multichannel networks for HTS was demonstrated by carrying out the high throughput monitoring of the activity of an enzyme, APE1, used as a model screening assay. PMID:20872611
Prediction of in vivo hepatotoxicity effects using in vitro transcriptomics data (SOT)
High-throughput in vitro transcriptomics data support molecular understanding of chemical-induced toxicity. Here, we evaluated the utility of such data to predict liver toxicity. First, in vitro gene expression data for 93 genes was generated following exposure of metabolically c...
Microtiter plate-based antibody microarrays for bacteria and toxins
USDA-ARS's Scientific Manuscript database
Research has focused on the development of rapid biosensor-based, high-throughput, and multiplexed detection of pathogenic bacteria in foods. Specifically, antibody microarrays in 96-well microtiter plates have been generated for the purpose of selective detection of Shiga toxin-producing E. coli (...
Adverse outcome pathways (AOPs): A framework to support predictive toxicology (presentation)
High throughput and in silico methods are providing the regulatory toxicology community with capacity to rapidly and cost effectively generate data concerning a chemical’s ability to initiate one or more biological perturbations that may culminate in an adverse ecological o...
Exposure Space: Integrating Exposure Data and Modeling with Toxicity Information
Recent advances have been made in high-throughput (HTP) toxicity testing, e.g. from ToxCast, which will ultimately be combined with HTP predictions of exposure potential to support next-generation chemical safety assessment. Rapid exposure methods are essential in selecting chemi...
ExpoCast Framework for Rapid Exposure Forecasts (ISES ExpoDat symposium presentation)
The U.S. E.P.A. ExpoCast project uses high throughput exposure models (simulation) and any easily-obtained exposure heuristics to generate forward predictions of potential exposures from chemical properties. By comparison with exposures inferred via reverse pharmacokinetic modeli...
Germplasm Management in the Post-genomics Era-a case study with lettuce
USDA-ARS's Scientific Manuscript database
High-throughput genotyping platforms and next-generation sequencing technologies revolutionized our ways in germplasm characterization. In collaboration with UC Davis Genome Center, we completed a project of genotyping the entire cultivated lettuce (Lactuca sativa L.) collection of 1,066 accessions ...
Bioinformatics and the Undergraduate Curriculum
ERIC Educational Resources Information Center
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
2010-01-01
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
Advances in Toxico-Cheminformatics: Supporting a New Paradigm for Predictive Toxicology
EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-throughput screening (HTS) data. The D...
Physico-chemical foundations underpinning microarray and next-generation sequencing experiments
Harrison, Andrew; Binder, Hans; Buhot, Arnaud; Burden, Conrad J.; Carlon, Enrico; Gibas, Cynthia; Gamble, Lara J.; Halperin, Avraham; Hooyberghs, Jef; Kreil, David P.; Levicky, Rastislav; Noble, Peter A.; Ott, Albrecht; Pettitt, B. Montgomery; Tautz, Diethard; Pozhitkov, Alexander E.
2013-01-01
Hybridization of nucleic acids on solid surfaces is a key process involved in high-throughput technologies such as microarrays and, in some cases, next-generation sequencing (NGS). A physical understanding of the hybridization process helps to determine the accuracy of these technologies. The goal of a widespread research program is to develop reliable transformations between the raw signals reported by the technologies and individual molecular concentrations from an ensemble of nucleic acids. This research has inputs from many areas, from bioinformatics and biostatistics, to theoretical and experimental biochemistry and biophysics, to computer simulations. A group of leading researchers met in Ploen, Germany, in 2011 to discuss present knowledge and limitations of our physico-chemical understanding of high-throughput nucleic acid technologies. This meeting inspired us to write this summary, which provides an overview of state-of-the-art, physico-chemically grounded approaches to modeling the nucleic acid hybridization process on solid surfaces. In addition, practical application of current knowledge is emphasized. PMID:23307556
Building biochips: a protein production pipeline
NASA Astrophysics Data System (ADS)
de Carvalho-Kavanagh, Marianne G. S.; Albala, Joanna S.
2004-06-01
Protein arrays are emerging as a practical format in which to study proteins in high-throughput using many of the same techniques as that of the DNA microarray. The key advantage to array-based methods for protein study is the potential for parallel analysis of thousands of samples in an automated, high-throughput fashion. Building protein arrays capable of this analysis capacity requires a robust expression and purification system capable of generating hundreds to thousands of purified recombinant proteins. We have developed a method to utilize LLNL-I.M.A.G.E. cDNAs to generate recombinant protein libraries using a baculovirus-insect cell expression system. We have used this strategy to produce proteins for analysis of protein/DNA and protein/protein interactions using protein microarrays in order to understand the complex interactions of proteins involved in homologous recombination and DNA repair. Using protein array techniques, a novel interaction between the DNA repair protein, Rad51B, and histones has been identified.
Nebula: reconstruction and visualization of scattering data in reciprocal space.
Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H
2015-04-01
Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second to minute time-scales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
Techniques for Mapping Synthetic Aperture Radar Processing Algorithms to Multi-GPU Clusters
2012-12-01
Experimental results were generated with 10 nVidia Tesla C2050 GPUs having maximum throughput of 972 Gflop/s. Our approach scales well for output...
A versatile and efficient high-throughput cloning tool for structural biology.
Geertsma, Eric R; Dutzler, Raimund
2011-04-19
Methods for the cloning of large numbers of open reading frames into expression vectors are of critical importance for challenging structural biology projects. Here we describe a system termed fragment exchange (FX) cloning that facilitates the high-throughput generation of expression constructs. The method is based on a class IIS restriction enzyme and negative selection markers. FX cloning combines attractive features of established recombination- and ligation-independent cloning methods: It allows the straightforward transfer of an open reading frame into a variety of expression vectors and is highly efficient and very economic in its use. In addition, FX cloning avoids the common but undesirable feature of significantly extending target open reading frames with cloning related sequences, as it leaves a minimal seam of only a single extra amino acid to either side of the protein. The method has proven to be very robust and suitable for all common pro- and eukaryotic expression systems. It considerably speeds up the generation of expression constructs compared to traditional methods and thus facilitates a broader expression screening.
Sandwich ELISA Microarrays: Generating Reliable and Reproducible Assays for High-Throughput Screens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, Rachel M.; Varnum, Susan M.; Zangar, Richard C.
The sandwich ELISA microarray is a powerful screening tool in biomarker discovery and validation due to its ability to simultaneously probe for multiple proteins in a miniaturized assay. The technical challenges of generating and processing the arrays are numerous. However, careful attention to possible pitfalls in the development of your antibody microarray assay can overcome these challenges. In this chapter, we describe in detail the steps that are involved in generating a reliable and reproducible sandwich ELISA microarray assay.
A high-throughput Sanger strategy for human mitochondrial genome sequencing
2013-01-01
Background A population reference database of complete human mitochondrial genome (mtGenome) sequences is needed to enable the use of mitochondrial DNA (mtDNA) coding region data in forensic casework applications. However, the development of entire mtGenome haplotypes to forensic data quality standards is difficult and laborious. A Sanger-based amplification and sequencing strategy that is designed for automated processing, yet routinely produces high quality sequences, is needed to facilitate high-volume production of these mtGenome data sets. Results We developed a robust 8-amplicon Sanger sequencing strategy that regularly produces complete, forensic-quality mtGenome haplotypes in the first pass of data generation. The protocol works equally well on samples representing diverse mtDNA haplogroups and DNA input quantities ranging from 50 pg to 1 ng, and can be applied to specimens of varying DNA quality. The complete workflow was specifically designed for implementation on robotic instrumentation, which increases throughput and reduces both the opportunities for error inherent to manual processing and the cost of generating full mtGenome sequences. Conclusions The described strategy will assist efforts to generate complete mtGenome haplotypes which meet the highest data quality expectations for forensic genetic and other applications. Additionally, high-quality data produced using this protocol can be used to assess mtDNA data developed using newer technologies and chemistries. Further, the amplification strategy can be used to enrich for mtDNA as a first step in sample preparation for targeted next-generation sequencing. PMID:24341507
High-Throughput Non-Contact Vitrification of Cell-Laden Droplets Based on Cell Printing
NASA Astrophysics Data System (ADS)
Shi, Meng; Ling, Kai; Yong, Kar Wey; Li, Yuhui; Feng, Shangsheng; Zhang, Xiaohui; Pingguan-Murphy, Belinda; Lu, Tian Jian; Xu, Feng
2015-12-01
Cryopreservation is the most promising way for long-term storage of biological samples, e.g., single cells and cellular structures. Among various cryopreservation methods, vitrification is advantageous in employing a high cooling rate to avoid the formation of harmful ice crystals in cells. Most existing vitrification methods adopt direct contact of cells with liquid nitrogen to obtain high cooling rates, which, however, risks contamination and complicates cell collection. To address these limitations, we developed a non-contact vitrification device based on an ultra-thin freezing film to achieve a high cooling/warming rate and avoid direct contact between cells and liquid nitrogen. A high-throughput cell printer was employed to rapidly generate uniform cell-laden microdroplets into the device, where the microdroplets were hung on one side of the film and then vitrified by pouring liquid nitrogen onto the other side via boiling heat transfer. Through theoretical and experimental studies of the vitrification process, we demonstrated that our device offers a high cooling/warming rate for vitrification of NIH 3T3 cells and human adipose-derived stem cells (hASCs) with maintained cell viability and differentiation potential. This non-contact vitrification device provides a novel and effective way to cryopreserve cells at high throughput while avoiding the contamination and collection problems.
High-Resolution X-Ray Telescopes
NASA Technical Reports Server (NTRS)
ODell, Stephen L.; Brissenden, Roger J.; Davis, William; Elsner, Ronald F.; Elvis, Martin; Freeman, Mark; Gaetz, Terry; Gorenstein, Paul; Gubarev, Mikhail V.
2010-01-01
Fundamental needs for future x-ray telescopes: a) Sharp images => excellent angular resolution. b) High throughput => large aperture areas. Generation-X optics technical challenges: a) High resolution => precision mirrors & alignment. b) Large apertures => lots of lightweight mirrors. Innovation needed for technical readiness: a) 4 top-level error terms contribute to image size. b) There are approaches to controlling those errors. Innovation needed for manufacturing readiness. Programmatic issues are comparably challenging.
A robust robotic high-throughput antibody purification platform.
Schmidt, Peter M; Abdo, Michael; Butcher, Rebecca E; Yap, Min-Yin; Scotney, Pierre D; Ramunno, Melanie L; Martin-Roussety, Genevieve; Owczarek, Catherine; Hardy, Matthew P; Chen, Chao-Guang; Fabri, Louis J
2016-07-15
Monoclonal antibodies (mAbs) have become the fastest growing segment in the drug market with annual sales of more than 40 billion US$ in 2013. The selection of lead candidate molecules involves the generation of large repertoires of antibodies from which to choose a final therapeutic candidate. Improvements in the ability to rapidly produce and purify many antibodies in sufficient quantities reduce the lead time for selection, which ultimately impacts the speed with which an antibody may transition through the research stage and into product development. Miniaturization and automation of chromatography using micro columns (RoboColumns® from Atoll GmbH) coupled to an automated liquid handling instrument (ALH; Freedom EVO® from Tecan) has been a successful approach to establishing high-throughput process development platforms. Recent advances in transient gene expression (TGE) using the high-titre Expi293F™ system have enabled recombinant mAb titres of greater than 500 mg/L. These relatively high protein titres reduce the volume required to generate several milligrams of individual antibodies for initial biochemical and biological downstream assays, making TGE in the Expi293F™ system ideally suited to high-throughput chromatography on an ALH. The present publication describes a novel platform for purifying Expi293F™-expressed recombinant mAbs directly from cell-free culture supernatant on a Perkin Elmer JANUS-VariSpan ALH equipped with a plate shuttle device. The purification platform allows automated 2-step purification (Protein A-desalting/size exclusion chromatography) of several hundred mAbs per week. The new robotic method can purify mAbs with high recovery (>90%) at sub-milligram level with yields of up to 2 mg from 4 mL of cell-free culture supernatant. Copyright © 2016 Elsevier B.V. All rights reserved.
High-Throughput Silencing Using the CRISPR-Cas9 System: A Review of the Benefits and Challenges.
Wade, Mark
2015-09-01
The clustered regularly interspaced short palindromic repeats (CRISPR)/Cas system has been seized upon with a fervor enjoyed previously by small interfering RNA (siRNA) and short hairpin RNA (shRNA) technologies and has enormous potential for high-throughput functional genomics studies. The decision to use this approach must be balanced with respect to adoption of existing platforms versus awaiting the development of more "mature" next-generation systems. Here, experience from siRNA and shRNA screening plays an important role, as issues such as targeting efficiency, pooling strategies, and off-target effects with those technologies are already framing debates in the CRISPR field. CRISPR/Cas can be exploited not only to knockout genes but also to up- or down-regulate gene transcription-in some cases in a multiplex fashion. This provides a powerful tool for studying the interaction among multiple signaling cascades in the same genetic background. Furthermore, the documented success of CRISPR/Cas-mediated gene correction (or the corollary, introduction of disease-specific mutations) provides proof of concept for the rapid generation of isogenic cell lines for high-throughput screening. In this review, the advantages and limitations of CRISPR/Cas are discussed and current and future applications are highlighted. It is envisaged that complementarities between CRISPR, siRNA, and shRNA will ensure that all three technologies remain critical to the success of future functional genomics projects. © 2015 Society for Laboratory Automation and Screening.
Hsieh, Huangpin Ben; Fitch, John; White, Dave; Torres, Frank; Roy, Joy; Matusiak, Robert; Krivacic, Bob; Kowalski, Bob; Bruce, Richard; Elrod, Scott
2004-03-01
The authors have constructed an array of 12 piezoelectric ejectors for printing biological materials. A single-ejector footprint is 8 mm in diameter, standing 4 mm high with 2 reservoirs totaling 76 μL. These ejectors have been tested by dispensing various fluids in several environmental conditions. Reliable drop ejection can be expected in both humidity-controlled and ambient environments over extended periods of time and at hot and cold room temperatures. In a prototype system, 12 ejectors are arranged in a rack, together with an X-Y stage, to allow printing any pattern desired. Printed arrays of features are created with a biological solution containing bovine serum albumin conjugated oligonucleotides, dye, and salty buffer. This ejector system is designed for the ultra-high-throughput generation of arrays on a variety of surfaces. These single or racked ejectors could be used as long-term storage vessels for materials such as small molecules, nucleic acids, proteins, or cell libraries, which would allow for efficient preprogrammed selection of individual clones and greatly reduce the chance of cross-contamination and loss due to transfer. A new generation of design ideas includes plastic injection molded ejectors that are inexpensive and disposable and handheld personal pipettes for liquid transfer in the nanoliter regime.
NASA Astrophysics Data System (ADS)
Ponce de Leon, Philip J.; Hill, Frances A.; Heubel, Eric V.; Velásquez-García, Luis F.
2015-06-01
We report the design, fabrication, and characterization of planar arrays of externally-fed silicon electrospinning emitters for high-throughput generation of polymer nanofibers. Arrays with as many as 225 emitters and with emitter density as large as 100 emitters cm⁻² were characterized using a solution of dissolved PEO in water and ethanol. Devices with emitter density as high as 25 emitters cm⁻² deposit uniform imprints comprising fibers with diameters on the order of a few hundred nanometers. Mass flux rates as high as 417 g hr⁻¹ m⁻² were measured, i.e., four times the reported production rate of the leading commercial free-surface electrospinning sources. Throughput increases with increasing array size at constant emitter density, suggesting the design can be scaled up with no loss of productivity. Devices with emitter density equal to 100 emitters cm⁻² fail to generate fibers but uniformly generate electrosprayed droplets. For the arrays tested, the largest measured mass flux resulted from arrays with larger emitter separation operating at larger bias voltages, indicating the strong influence of electrical field enhancement on the performance of the devices. Incorporation of a ground electrode surrounding the array tips helps equalize the emitter field enhancement across the array as well as control the spread of the imprints over larger distances.
Detecting and removing multiplicative spatial bias in high-throughput screening technologies.
Caraus, Iurie; Mazoure, Bogdan; Nadon, Robert; Makarenkov, Vladimir
2017-10-15
Considerable attention has been paid recently to improve data quality in high-throughput screening (HTS) and high-content screening (HCS) technologies widely used in drug development and chemical toxicity research. However, several environmentally- and procedurally-induced spatial biases in experimental HTS and HCS screens decrease measurement accuracy, leading to increased numbers of false positives and false negatives in hit selection. Although effective bias correction methods and software have been developed over the past decades, almost all of these tools have been designed to reduce the effect of additive bias only. Here, we address the case of multiplicative spatial bias. We introduce three new statistical methods meant to reduce multiplicative spatial bias in screening technologies. We assess the performance of the methods with synthetic and real data affected by multiplicative spatial bias, including comparisons with current bias correction methods. We also describe a wider data correction protocol that integrates methods for removing both assay and plate-specific spatial biases, which can be either additive or multiplicative. The methods for removing multiplicative spatial bias and the data correction protocol are effective in detecting and cleaning experimental data generated by screening technologies. As our protocol is of a general nature, it can be used by researchers analyzing current or next-generation high-throughput screens. The AssayCorrector program, implemented in R, is available on CRAN. makarenkov.vladimir@uqam.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
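The multiplicative model described above treats each well's reading as the true value scaled by row and column effects; one simple correction, sketched below, iteratively divides out the median row and column effects. This is an illustrative sketch of the general idea, not the AssayCorrector algorithm itself:

```python
import numpy as np

def remove_multiplicative_bias(plate, n_iter=10):
    """Model each well as true_value * row_effect * col_effect and
    iteratively divide out the median row and column effects.
    Illustrative only; not the AssayCorrector method."""
    corrected = plate.astype(float).copy()
    for _ in range(n_iter):
        corrected /= np.median(corrected, axis=1, keepdims=True)  # row effects
        corrected /= np.median(corrected, axis=0, keepdims=True)  # column effects
    return corrected * np.median(plate)  # restore the overall plate scale

# Example: an 8x12 plate whose right-hand columns read systematically high
rng = np.random.default_rng(0)
true_signal = rng.normal(100.0, 5.0, size=(8, 12))
column_bias = np.linspace(1.0, 1.5, 12)       # 1.0x .. 1.5x left-to-right gradient
plate = true_signal * column_bias
corrected = remove_multiplicative_bias(plate)
```

After correction, the column means are far more uniform than in the biased plate, which is the property such methods aim for before hit selection.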
Daily, Neil J.; Du, Zhong-Wei
2017-01-01
Abstract Electrophysiology of excitable cells, including muscle cells and neurons, has been measured by making direct contact with a single cell using a micropipette electrode. To increase the assay throughput, optical devices such as microscopes and microplate readers have been used to analyze electrophysiology of multiple cells. We have established a high-throughput (HTP) analysis of action potentials (APs) in highly enriched motor neurons and cardiomyocytes (CMs) that are differentiated from human induced pluripotent stem cells (iPSCs). A multichannel electric field stimulation (EFS) device enabled the ability to electrically stimulate cells and measure dynamic changes in APs of excitable cells ultra-rapidly (>100 data points per second) by imaging entire 96-well plates. We found that the activities of both neurons and CMs and their response to EFS and chemicals are readily discerned by our fluorescence imaging-based HTP phenotyping assay. The latest generation of calcium (Ca2+) indicator dyes, FLIPR Calcium 6 and Cal-520, with the HTP device enables physiological analysis of human iPSC-derived samples highlighting its potential application for understanding disease mechanisms and discovering new therapeutic treatments. PMID:28525289
High-throughput electrophysiological assays for voltage gated ion channels using SyncroPatch 768PE.
Li, Tianbo; Lu, Gang; Chiang, Eugene Y; Chernov-Rogan, Tania; Grogan, Jane L; Chen, Jun
2017-01-01
Ion channels regulate a variety of physiological processes and represent an important class of drug target. Among the many methods of studying ion channel function, patch clamp electrophysiology is considered the gold standard, providing the ultimate precision and flexibility. However, its utility in ion channel drug discovery is impeded by low throughput. Additionally, characterization of endogenous ion channels in primary cells remains technically challenging. In recent years, many automated patch clamp (APC) platforms have been developed to overcome these challenges, albeit with varying throughput, data quality and success rate. In this study, we utilized the SyncroPatch 768PE, one of the latest generation APC platforms, which conducts parallel recording from two 384-well modules with giga-seal data quality, to push these two boundaries. By optimizing various cell patching parameters and a two-step voltage protocol, we developed a high-throughput APC assay for the voltage-gated sodium channel Nav1.7. By testing the IC50 values of a group of Nav1.7 reference compounds, this assay was shown to be highly consistent with manual patch clamp (R > 0.9). In a pilot screening of 10,000 compounds, the success rate, defined by >500 MΩ seal resistance and >500 pA peak current, was 79%. The assay was robust, with a daily throughput of ~6,000 data points and a Z' factor of 0.72. Using the same platform, we also successfully recorded the endogenous voltage-gated potassium channel Kv1.3 in primary T cells. Together, our data suggest that the SyncroPatch 768PE provides a powerful platform for ion channel research and drug discovery.
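The success criterion quoted in the abstract (seal resistance > 500 MΩ and peak current > 500 pA) is a per-well quality gate; a minimal sketch of how such gating and the resulting success rate might be computed (the record structure and example values are hypothetical, not from the study):

```python
# Hypothetical per-well QC gate using the thresholds quoted in the abstract:
# seal resistance > 500 MOhm and peak current > 500 pA.
recordings = [
    {"well": "A1", "seal_megaohm": 1200.0, "peak_pA": 850.0},
    {"well": "A2", "seal_megaohm": 310.0,  "peak_pA": 920.0},   # leaky seal
    {"well": "A3", "seal_megaohm": 980.0,  "peak_pA": 150.0},   # low current
    {"well": "A4", "seal_megaohm": 2100.0, "peak_pA": 1430.0},
]

def passes_qc(rec, min_seal=500.0, min_peak=500.0):
    """True when both the giga-seal and current-amplitude gates pass."""
    return rec["seal_megaohm"] > min_seal and rec["peak_pA"] > min_peak

passed = [r["well"] for r in recordings if passes_qc(r)]
success_rate = len(passed) / len(recordings)
print(passed, success_rate)  # → ['A1', 'A4'] 0.5
```

Only wells passing both gates would contribute data points to downstream IC50 fitting.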
Zimmerlin, Alfred; Kiffe, Michael
2013-01-01
New enabling MS technologies have made it possible to elucidate metabolic pathways present in ex vivo (blood, bile and/or urine) or in vitro (liver microsomes, hepatocytes and/or S9) samples. When investigating samples from high-throughput assays, the challenge now facing the user is to extract the appropriate information and compile it so that it is understandable to all. Medicinal chemists may then design the next generation of (better) drug candidates, combining the needs for potency and metabolic stability with their synthetic creativity. This review focuses on the comparison of these enabling MS technologies and the IT tools developed for their interpretation.
Mining high-throughput experimental data to link gene and function
Blaby-Haas, Crysten E.; de Crécy-Lagard, Valérie
2011-01-01
Nearly 2200 genomes encoding some 6 million proteins have now been sequenced. Around 40% of these proteins are of unknown function even when function is loosely and minimally defined as “belonging to a superfamily”. In addition to in silico methods, the swelling stream of high-throughput experimental data can give valuable clues for linking these “unknowns” with precise biological roles. The goal is to develop integrative data-mining platforms that allow the scientific community at large to access and utilize this rich source of experimental knowledge. To this end, we review recent advances in generating whole-genome experimental datasets, where this data can be accessed, and how it can be used to drive prediction of gene function. PMID:21310501
DOE Office of Scientific and Technical Information (OSTI.GOV)
PANDOLFI, RONALD; KUMAR, DINESH; VENKATAKRISHNAN, SINGANALLUR
Xi-CAM aims to provide a community-driven platform for multimodal analysis in synchrotron science. The platform core provides a robust plugin infrastructure for extensibility, allowing continuing development to simply add further functionality. Current modules include tools for characterization with (GI)SAXS, tomography, and XAS. This will continue to serve as a development base as algorithms for multimodal analysis develop. Seamless remote data access, visualization and analysis are key elements of Xi-CAM, and will become critical to synchrotron data infrastructure as expectations for future data volume and acquisition rates rise with continuously increasing throughputs. The highly interactive design elements of Xi-CAM will similarly support a generation of users which depend on immediate data quality feedback during high-throughput or burst acquisition modes.
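The plugin infrastructure described above is a common registry pattern: the core exposes a registration hook, and modules add functionality without changes to the core. A minimal sketch (class and module names are illustrative, not Xi-CAM's actual API):

```python
class PluginRegistry:
    """Minimal plugin registry: the platform core exposes a registration
    hook so analysis modules can be added without modifying the core."""

    def __init__(self):
        self._plugins = {}

    def register(self, name):
        """Decorator that records a plugin class under a given name."""
        def decorator(cls):
            self._plugins[name] = cls
            return cls
        return decorator

    def create(self, name):
        """Instantiate a registered plugin by name."""
        return self._plugins[name]()

registry = PluginRegistry()

@registry.register("tomography")
class TomographyPlugin:
    def describe(self):
        return "tomography analysis module"

plugin = registry.create("tomography")
print(plugin.describe())  # → tomography analysis module
```

New characterization modules ((GI)SAXS, XAS, etc.) would each register themselves the same way, which is what lets continuing development "simply add further functionality."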
Camattari, Andrea; Weinhandl, Katrin; Gudiminchi, Rama K
2014-01-01
The methylotrophic yeast Pichia pastoris is becoming one of the favorite industrial workhorses for protein expression. Due to the widespread use of integration vectors, which generates significant clonal variability, screening methods that allow hundreds of individual clones to be assayed are of particular importance. Here we describe methods to detect and analyze protein expression, developed in a 96-well format for high-throughput screening of recombinant P. pastoris strains. The chapter covers essentially three common scenarios: (1) an enzymatic assay for proteins expressed in the cell cytoplasm, requiring cell lysis; (2) a whole-cell assay for a fungal cytochrome P450; and (3) a nonenzymatic assay for detection and quantification of tagged protein secreted into the supernatant.
USDA Potato Small RNA Database
USDA-ARS?s Scientific Manuscript database
Small RNAs (sRNAs) are now understood to be involved in gene regulation, function and development. High throughput sequencing (HTS) of sRNAs generates large data sets for analyzing the abundance, source and roles for specific sRNAs. These sRNAs result from transcript degradation as well as specific ...
77 FR 68773 - FIFRA Scientific Advisory Panel; Notice of Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-16
... for physical chemical properties that cannot be easily tested in in vitro systems or stable enough for.... Quantitative structural-activity relationship (QSAR) models and estrogen receptor (ER) expert systems development. High-throughput data generation and analysis (expertise focused on how this methodology can be...
ToxiFly: Can Fruit Flies be Used to Identify Toxicity Pathways for Airborne Chemicals?
Current high-throughput and alternative screening assays for chemical toxicity are unable to test volatile organic compounds (VOCs), thus limiting their scope. Further, the data generated by these assays require mechanistic information to link effects at molecular targets to adve...
Fish models such as zebrafish and medaka are increasingly used as alternatives to rodents in developmental and toxicological studies. These developmental and toxicological studies can be facilitated by the use of transgenic reporters that permit the real-time, noninvasive observa...
Effects of Toluene, Acrolein and Vinyl Chloride on Motor Activity of Drosophila Melanogaster
The data generated by current high-throughput assays for chemical toxicity require information to link effects at molecular targets to adverse outcomes in whole animals. In addition, more efficient methods for testing volatile chemicals are needed. Here we begin to address these ...
Advances in high-throughput next-generation sequencing (NGS) technology for direct sequencing of environmental DNA (i.e. shotgun metagenomics) is transforming the field of microbiology. NGS technologies are now regularly being applied in comparative metagenomic studies, which pr...
Fragment-based drug discovery using rational design.
Jhoti, H
2007-01-01
Fragment-based drug discovery (FBDD) is established as an alternative approach to high-throughput screening for generating novel small molecule drug candidates. In FBDD, relatively small libraries of low molecular weight compounds (or fragments) are screened using sensitive biophysical techniques to detect their binding to the target protein. A lower absolute affinity of binding is expected from fragments, compared to much higher molecular weight hits detected by high-throughput screening, due to their reduced size and complexity. Through the use of iterative cycles of medicinal chemistry, ideally guided by three-dimensional structural data, it is often then relatively straightforward to optimize these weak binding fragment hits into potent and selective lead compounds. As with most other lead discovery methods there are two key components of FBDD; the detection technology and the compound library. In this review I outline the two main approaches used for detecting the binding of low affinity fragments and also some of the key principles that are used to generate a fragment library. In addition, I describe an example of how FBDD has led to the generation of a drug candidate that is now being tested in clinical trials for the treatment of cancer.
NASA Astrophysics Data System (ADS)
Rowlette, Jeremy A.; Fotheringham, Edeline; Nichols, David; Weida, Miles J.; Kane, Justin; Priest, Allen; Arnone, David B.; Bird, Benjamin; Chapman, William B.; Caffey, David B.; Larson, Paul; Day, Timothy
2017-02-01
The field of infrared spectral imaging and microscopy is advancing rapidly due in large measure to the recent commercialization of the first high-throughput, high-spatial-definition quantum cascade laser (QCL) microscope. Having speed, resolution and noise performance advantages while also eliminating the need for cryogenic cooling, its introduction has established a clear path to translating the well-established diagnostic capability of infrared spectroscopy into clinical and pre-clinical histology, cytology and hematology workflows. Demand for even higher throughput while maintaining high-spectral fidelity and low-noise performance continues to drive innovation in QCL-based spectral imaging instrumentation. In this talk, we will present for the first time, recent technological advances in tunable QCL photonics which have led to an additional 10X enhancement in spectral image data collection speed while preserving the high spectral fidelity and SNR exhibited by the first generation of QCL microscopes. This new approach continues to leverage the benefits of uncooled microbolometer focal plane array cameras, which we find to be essential for ensuring both reproducibility of data across instruments and achieving the high-reliability needed in clinical applications. We will discuss the physics underlying these technological advancements as well as the new biomedical applications these advancements are enabling, including automated whole-slide infrared chemical imaging on clinically relevant timescales.
The US EPA ToxCast Program: Moving from Data Generation ...
The U.S. EPA ToxCast program is entering its tenth year. Significant learning and progress have occurred towards collection, analysis, and interpretation of the data. The library of ~1,800 chemicals has been subject to ongoing characterization (e.g., identity, purity, stability) and is unique in its scope, structural diversity, and use scenarios making it ideally suited to investigate the underlying molecular mechanisms of toxicity. The ~700 high-throughput in vitro assay endpoints cover 327 genes and 293 pathways as well as other integrated cellular processes and responses. The integrated analysis of high-throughput screening data has shown that most environmental and industrial chemicals are very non-selective in the biological targets they perturb, while a small subset of chemicals are relatively selective for specific biological targets. The selectivity of a chemical informs interpretation of the screening results while also guiding future mode-of-action or adverse outcome pathway approaches. Coupling the high-throughput in vitro assays with medium-throughput pharmacokinetic assays and reverse dosimetry allows conversion of the potency estimates to an administered dose. Comparison of the administered dose to human exposure provides a risk-based context. The lessons learned from this effort will be presented and discussed towards application to chemical safety decision making and the future of the computational toxicology program at the U.S. EPA.
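The reverse dosimetry step described in this abstract, converting an in vitro potency to an administered dose via a predicted steady-state concentration, can be sketched in a few lines. This is an illustrative sketch assuming linear toxicokinetics; the function names are hypothetical and this is not the EPA "httk" package code.

```python
# Illustrative sketch of IVIVE reverse dosimetry, assuming linear
# toxicokinetics (steady-state plasma concentration scales with dose).
# Function names are hypothetical; this is not the EPA "httk" package.

def administered_equivalent_dose(ac50_uM, css_uM_at_unit_dose):
    """Convert an in vitro potency (AC50, uM) into an administered
    equivalent dose (mg/kg/day), given the steady-state concentration
    (uM) predicted for a 1 mg/kg/day exposure."""
    return ac50_uM / css_uM_at_unit_dose

def bioactivity_exposure_ratio(aed_mg_kg_day, exposure_mg_kg_day):
    """Risk-based context: margin between the bioactive dose and an
    exposure estimate (a larger ratio is a wider margin of safety)."""
    return aed_mg_kg_day / exposure_mg_kg_day

# An AC50 of 3 uM with Css(1 mg/kg/day) = 1.5 uM gives an AED of 2 mg/kg/day
aed = administered_equivalent_dose(3.0, 1.5)
margin = bioactivity_exposure_ratio(aed, 0.01)
```

In practice the Css term comes from a population TK model and is usually taken at an upper percentile to be health-protective; the sketch only shows the unit-dose scaling arithmetic.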
NASA Astrophysics Data System (ADS)
Hayasaki, Yoshio
2017-02-01
Femtosecond laser processing is a promising tool for fabricating novel and useful structures on the surfaces of and inside materials. An enormous number of pulse irradiation points is required for fabricating actual structures at the millimeter scale, and therefore the throughput of femtosecond laser processing must be improved for practical adoption of this technique. One promising method to improve throughput is parallel pulse generation based on a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM), a technique called holographic femtosecond laser processing. The holographic method has advantages such as high throughput, high light-use efficiency, and variable, instantaneous, 3D patterning. Furthermore, the use of an SLM gives the ability to correct unknown imperfections of the optical system and inhomogeneity in a sample using in-system optimization of the CGH. The CGH can also adaptively compensate for dynamic, unpredictable mechanical movements, air and liquid disturbances, and shape variation and deformation of the target sample, as well as provide adaptive wavefront control for environmental changes. It is therefore a powerful tool for processing biological cells and tissues, because they have free-form, variable, and deformable structures. In this paper, we present the principle and experimental setup of holographic femtosecond laser processing and effective ways of processing biological samples. We demonstrate the femtosecond laser processing of biological materials and the processing properties.
As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to (1) developing functional or st...
2014-01-01
Background RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. Results We developed a standalone tool with graphical user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module “miRNA identification” includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module “mRNA identification” includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module “Target screening” provides expression profiling analyses and graphic visualization. The module “Self-testing” offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program’s functionality. Conclusions eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory. PMID:24593312
Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang
2014-03-05
RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. We developed a standalone tool with graphical user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs including Bowtie, miRDeep2, and miRspring extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.
A high-throughput method for the detection of homoeologous gene deletions in hexaploid wheat
2010-01-01
Background Mutational inactivation of plant genes is an essential tool in gene function studies. Plants with inactivated or deleted genes may also be exploited for crop improvement if such mutations/deletions produce a desirable agronomical and/or quality phenotype. However, the use of mutational gene inactivation/deletion has been impeded in polyploid plant species by genetic redundancy, as polyploids contain multiple copies of the same genes (homoeologous genes) encoded by each of the ancestral genomes. Similar to many other crop plants, bread wheat (Triticum aestivum L.) is polyploid; specifically allohexaploid possessing three progenitor genomes designated as 'A', 'B', and 'D'. Recently modified TILLING protocols have been developed specifically for mutation detection in wheat. Whilst extremely powerful in detecting single nucleotide changes and small deletions, these methods are not suitable for detecting whole gene deletions. Therefore, high-throughput methods for screening of candidate homoeologous gene deletions are needed for application to wheat populations generated by the use of certain mutagenic agents (e.g. heavy ion irradiation) that frequently generate whole-gene deletions. Results To facilitate the screening for specific homoeologous gene deletions in hexaploid wheat, we have developed a TaqMan qPCR-based method that allows high-throughput detection of deletions in homoeologous copies of any gene of interest, provided that sufficient polymorphism (as little as a single nucleotide difference) amongst homoeologues exists for specific probe design. We used this method to identify deletions of individual TaPFT1 homoeologues, a wheat orthologue of the disease susceptibility and flowering regulatory gene PFT1 in Arabidopsis. This method was applied to wheat nullisomic-tetrasomic lines as well as other chromosomal deletion lines to locate the TaPFT1 gene to the long arm of chromosome 5. 
By screening of individual DNA samples from 4500 M2 mutant wheat lines generated by heavy ion irradiation, we detected multiple mutants with deletions of each TaPFT1 homoeologue, and confirmed these deletions using a CAPS method. We have subsequently designed, optimized, and applied this method for the screening of homoeologous deletions of three additional wheat genes putatively involved in plant disease resistance. Conclusions We have developed a method for automated, high-throughput screening to identify deletions of individual homoeologues of a wheat gene. This method is also potentially applicable to other polyploidy plants. PMID:21114819
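The TaqMan qPCR deletion screen described above amounts to a relative copy-number call per homoeologue. A minimal sketch, assuming the standard 2^-ddCt method against a wild-type calibrator; the deletion threshold and function names are illustrative assumptions, not values from the paper.

```python
# Illustrative 2^-ddCt copy-number call for a homoeologue-specific
# TaqMan assay. The wild-type calibrator Ct values and the deletion
# threshold are assumptions of this sketch, not the paper's values.

def relative_copy_number(ct_target, ct_ref, ct_target_wt, ct_ref_wt):
    """Copy number of the target homoeologue relative to a wild-type
    calibrator, via the standard 2^-ddCt method (ct_ref is a
    single-copy reference gene)."""
    ddct = (ct_target - ct_ref) - (ct_target_wt - ct_ref_wt)
    return 2.0 ** (-ddct)

def is_deleted(rcn, threshold=0.25):
    """Call a whole-gene deletion when the homoeologue-specific signal
    collapses well below the wild-type level (threshold illustrative:
    wild type ~1.0, hemizygote ~0.5, null ~0)."""
    return rcn < threshold

# A mutant whose target Ct shifts from 26 to 30 cycles (reference unchanged)
rcn = relative_copy_number(30.0, 24.0, 26.0, 24.0)
```

A 4-cycle delay in the target amplification with an unchanged reference corresponds to a 16-fold signal drop, consistent with loss of that homoeologue.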
A high-throughput method for generating uniform microislands for autaptic neuronal cultures
Sgro, Allyson E.; Nowak, Amy L.; Austin, Naola S.; Custer, Kenneth L.; Allen, Peter B.; Chiu, Daniel T.; Bajjalieh, Sandra M.
2013-01-01
Generating microislands of culture substrate on coverslips by spray application of poly-D-lysine is a commonly used method for culturing isolated neurons that form self (autaptic) synapses. This preparation has multiple advantages for studying synaptic transmission in isolation; however, generating microislands by spraying produces islands of non-uniform size, and thus cultures vary widely in the number of islands containing single neurons. To address these problems, we developed a high-throughput method for reliably generating uniformly shaped microislands of culture substrate. Stamp molds formed of poly(dimethylsiloxane) (PDMS) were fabricated with arrays of circles and used to generate stamps made of 9.2% agarose. The agarose stamps were capable of loading sufficient poly-D-lysine and collagen dissolved in acetic acid to rapidly generate coverslips containing at least 64 microislands per coverslip. When hippocampal neurons were cultured on these coverslips, there were significantly more single-neuron islands per coverslip. We noted that single neurons tended to form one of three distinct neurite-arbor morphologies, which varied with island size and the location of the cell body on the island. To our surprise, the number of synapses per autaptic neuron did not correlate with arbor shape or island size, suggesting that other factors regulate the number of synapses formed by isolated neurons. The stamping method we report can be used to increase the number of single-neuron islands per culture and aid in the rapid visualization of microislands. PMID:21515305
The NIH Common Fund Human Biomolecular Atlas Program (HuBMAP) aims to develop a framework for functionally mapping the human body with cellular resolution to enhance our understanding of cellular organization and function. HuBMAP will accelerate the development of the next generation of tools and techniques for generating 3D tissue maps using validated high-content, high-throughput imaging and omics assays, and establish an open data platform for integrating and visualizing data to build multi-dimensional maps.
Next-Generation Technologies for Multiomics Approaches Including Interactome Sequencing
Ohashi, Hiroyuki; Miyamoto-Sato, Etsuko
2015-01-01
The development of high-speed analytical techniques such as next-generation sequencing and microarrays allows high-throughput analysis of biological information at a low cost. These techniques contribute to medical and bioscience advancements and provide new avenues for scientific research. Here, we outline a variety of new innovative techniques and discuss their use in omics research (e.g., genomics, transcriptomics, metabolomics, proteomics, and interactomics). We also discuss the possible applications of these methods, including an interactome sequencing technology that we developed, in future medical and life science research. PMID:25649523
USDA-ARS?s Scientific Manuscript database
Next generation fungal amplicon sequencing is being used with increasing frequency to study fungal diversity in various ecosystems; however, the influence of sample preparation on the characterization of fungal community is poorly understood. We investigated the effects of four procedural modificati...
USDA-ARS?s Scientific Manuscript database
The USDA-APHIS Plant Germplasm Quarantine Program (PGQP) safeguards U.S. agriculture and natural resources against the entry, establishment, and spread of economically and environmentally significant pathogens, and facilitates the safe international movement of propagative plant parts. PGQP is the o...
USDA-ARS?s Scientific Manuscript database
Genetic diversity is an essential resource for breeders to improve new cultivars with desirable characteristics. Recently genotyping-by-sequencing (GBS), a next generation sequencing (NGS) based technology that can simplify complex genomes, has been used as a high-throughput and cost-effective molec...
High-Throughput resequencing of maize landraces at genomic regions associated with flowering time
USDA-ARS?s Scientific Manuscript database
Despite the reduction in the price of sequencing, it remains expensive to sequence and assemble whole, complex genomes of multiple samples for population studies, particularly for large genomes like those of many crop species. Enrichment of target genome regions coupled with next generation sequenci...
Ultra high-throughput nucleic acid sequencing as a tool for virus discovery in the turkey gut.
USDA-ARS?s Scientific Manuscript database
Recently, the use of the next generation of nucleic acid sequencing technology (i.e., 454 pyrosequencing, as developed by Roche/454 Life Sciences) has allowed an in-depth look at the uncultivated microorganisms present in complex environmental samples, including samples with agricultural importance....
Recent Applications of DNA Sequencing Technologies in Food, Nutrition and Agriculture
USDA-ARS?s Scientific Manuscript database
Next-generation DNA sequencing technologies are able to produce millions of short sequence reads in a high-throughput, cost-effective fashion. The emergence of these technologies has not only facilitated genome sequencing but also changed the landscape of life sciences. This review surveys their rec...
Toots, Mart; Ustav, Mart; Männik, Andres; Mumm, Karl; Tämm, Kaido; Tamm, Tarmo; Ustav, Mart
2017-01-01
Human papillomaviruses (HPVs) are oncogenic viruses that cause numerous different cancers as well as benign lesions in the epithelia. To date, there is no effective cure for an ongoing HPV infection. Here, we describe the generation process of a platform for the development of anti-HPV drugs. This system consists of engineered full-length HPV genomes that express reporter genes for evaluation of the viral copy number in all three HPV replication stages. We demonstrate the usefulness of this system by conducting high-throughput screens to identify novel high-risk HPV-specific inhibitors. At least five of the inhibitors block the function of Tdp1 and PARP1, which have been identified as essential cellular proteins for HPV replication and promising candidates for the development of antivirals against HPV and possibly against HPV-related cancers. PMID:28182794
Kuhn, Alexandre; Ong, Yao Min; Quake, Stephen R; Burkholder, William F
2015-07-08
Like other structural variants, transposable element insertions can be highly polymorphic across individuals. Their functional impact, however, remains poorly understood. Current genome-wide approaches for genotyping insertion-site polymorphisms based on targeted or whole-genome sequencing remain very expensive and can lack accuracy, hence new large-scale genotyping methods are needed. We describe a high-throughput method for genotyping transposable element insertions and other types of structural variants that can be assayed by breakpoint PCR. The method relies on next-generation sequencing of multiplex, site-specific PCR amplification products and read count-based genotype calls. We show that this method is flexible, efficient (it does not require rounds of optimization), cost-effective and highly accurate. This method can benefit a wide range of applications from the routine genotyping of animal and plant populations to the functional study of structural variants in humans.
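The read count-based genotype calling described here can be sketched as a simple allele-fraction classifier over breakpoint-spanning versus reference-junction reads. The depth and fraction thresholds below are illustrative assumptions, not the authors' calibrated values.

```python
# Sketch of a read count-based genotype call at one insertion site.
# ins_reads: reads supporting the insertion breakpoint; ref_reads:
# reads spanning the empty (reference) junction. Thresholds are
# illustrative assumptions, not the published method's values.

def call_genotype(ins_reads, ref_reads, min_depth=10, het_band=(0.2, 0.8)):
    depth = ins_reads + ref_reads
    if depth < min_depth:
        return "./."            # insufficient coverage: no call
    frac = ins_reads / depth    # fraction of reads supporting the insertion
    if frac < het_band[0]:
        return "0/0"            # homozygous reference (no insertion)
    if frac > het_band[1]:
        return "1/1"            # homozygous for the insertion
    return "0/1"                # heterozygous
```

Because the site-specific PCR amplicons are multiplexed, such calls can be made for thousands of individuals and sites from one sequencing run.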
Role of APOE Isoforms in the Pathogenesis of TBI Induced Alzheimer’s Disease
2015-10-01
global deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel ...ATP binding cassette transporter A1 (ABCA1) is a lipid transporter that controls the generation of HDL in plasma and ApoE-containing lipoproteins in... parallel sequencing, mRNA-seq, behavioral testing, memory impairment, recovery. 3 Overall Project Summary During the reported period, we have been able
Controlled electrosprayed formation of non-spherical microparticles
NASA Astrophysics Data System (ADS)
Jeyhani, Morteza; Mak, Sze Yi; Sammut, Stephen; Shum, Ho Cheung; Hwang, Dae Kun; Tsai, Scott S. H.
2017-11-01
Fabrication of biocompatible microparticles, such as alginate particles, with the possibility of controlling the particles' morphology in a high-throughput manner is essential for the pharmaceutical and cosmetic industries. Even though the shape of alginate particles has been shown to be an important parameter in controlling drug delivery, there are very limited manufacturing methods for producing non-spherical alginate microparticles in a high-throughput fashion. Here, we present a system that generates non-spherical biocompatible alginate microparticles with tunable size and shape, at high throughput, using an electrospray technique. The alginate solution, a highly biocompatible material, is pumped through a needle at a constant flow rate by a syringe pump. The alginate phase is connected to a high-voltage power supply to charge it positively, and a metallic ring underneath the needle is charged negatively. The applied voltage creates an electric field that forces the dispensing droplets to pass through the metallic ring toward the collection bath. During this migration, droplets break up into smaller droplets to dissipate their energy. When the droplets reach the calcium chloride bath, polymerization occurs and solidifies them. We study the effects of changing the distance from the needle to the bath, and the concentration of calcium chloride in the bath, to control the size and shape of the resulting microparticles.
web cellHTS2: a web-application for the analysis of high-throughput screening data.
Pelz, Oliver; Gilsdorf, Moritz; Boutros, Michael
2010-04-12
The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.
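A minimal sketch of the kind of per-plate normalization and hit ranking such an analysis performs, using Z-scores. This is not the cellHTS2/Bioconductor code, which also offers B-score and control-based normalization; the threshold here is an illustrative assumption.

```python
# Sketch of per-plate Z-score normalization and ranked hit listing for
# a cell-based screen. Illustrative only; not the cellHTS2 implementation.
import statistics

def z_scores(values):
    """Standardize one plate's raw readouts to Z-scores."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def ranked_hits(wells, values, threshold=-2.0):
    """Rank wells by Z-score (most negative first, for a loss-of-signal
    screen) and flag candidate hits below an illustrative threshold."""
    scored = sorted(zip(wells, z_scores(values)), key=lambda t: t[1])
    return [(well, z, z <= threshold) for well, z in scored]
```

Normalizing each plate separately, as above, is what lets scores from different plates and batches be combined into one ranked hit list.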
Wu, Wei; Lu, Chao-Xia; Wang, Yi-Ning; Liu, Fang; Chen, Wei; Liu, Yong-Tai; Han, Ye-Chen; Cao, Jian; Zhang, Shu-Yang; Zhang, Xue
2015-07-10
MYBPC3 dysfunctions have been proven to induce dilated cardiomyopathy, hypertrophic cardiomyopathy, and/or left ventricular noncompaction; however, the genotype-phenotype correlation between MYBPC3 and restrictive cardiomyopathy (RCM) has not been established. The newly developed next-generation sequencing method is capable of broad genomic DNA sequencing with high throughput and can help explore novel correlations between genetic variants and cardiomyopathies. A proband from a multigenerational family with 3 live patients and 1 unrelated patient with clinical diagnoses of RCM underwent a next-generation sequencing workflow based on a custom AmpliSeq panel, including 64 candidate pathogenic genes for cardiomyopathies, on the Ion Personal Genome Machine high-throughput sequencing benchtop instrument. The selected panel contained a total of 64 genes that were reportedly associated with inherited cardiomyopathies. All patients fulfilled strict criteria for RCM with clinical characteristics, echocardiography, and/or cardiac magnetic resonance findings. The multigenerational family with 3 adult RCM patients carried an identical nonsense MYBPC3 mutation, and the unrelated patient carried a missense mutation in the MYBPC3 gene. All of these results were confirmed by the Sanger sequencing method. This study demonstrated that MYBPC3 gene mutations, revealed by next-generation sequencing, were associated with familial and sporadic RCM patients. It is suggested that the next-generation sequencing platform with a selected panel provides a highly efficient approach for molecular diagnosis of hereditary and idiopathic RCM and helps build new genotype-phenotype correlations. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
High-throughput NGL electron-beam direct-write lithography system
NASA Astrophysics Data System (ADS)
Parker, N. William; Brodie, Alan D.; McCoy, John H.
2000-07-01
Electron beam lithography systems have historically had low throughput. The only practical solution to this limitation is an approach using many beams writing simultaneously. For single-column multi-beam systems, including projection optics (SCALPEL and PREVAIL) and blanked aperture arrays, throughput and resolution are limited by space-charge effects. Multi-beam micro-column (one beam per column) systems are limited by the need for low-voltage operation, electrical connection density, and fabrication complexities. In this paper, we discuss a new multi-beam concept employing multiple columns, each with multiple beams, to generate a very large total number of parallel writing beams. This overcomes the limitations of space-charge interactions and low-voltage operation. We also discuss a rationale leading to the optimum number of columns and beams per column. Using this approach we show how production throughputs >= 60 wafers per hour can be achieved at CDs
A novel anti-GPC3 monoclonal antibody (YP7) | Center for Cancer Research
Glypican-3 (GPC3) is an emerging therapeutic target in hepatoma. A novel anti-GPC3 monoclonal antibody (YP7) has been generated through a combination of peptide immunization and high-throughput flow cytometry screening. YP7 binds cell-surface-associated GPC3 with high affinity and exhibits significant hepatoma xenograft growth inhibition in nude mice. The new antibody may have
Computational Approaches to Phenotyping
Lussier, Yves A.; Liu, Yang
2007-01-01
The recent completion of the Human Genome Project has made possible a high-throughput “systems approach” for accelerating the elucidation of molecular underpinnings of human diseases, and subsequent derivation of molecular-based strategies to more effectively prevent, diagnose, and treat these diseases. Although altered phenotypes are among the most reliable manifestations of altered gene functions, research using systematic analysis of phenotype relationships to study human biology is still in its infancy. This article focuses on the emerging field of high-throughput phenotyping (HTP) phenomics research, which aims to capitalize on novel high-throughput computation and informatics technology developments to derive genomewide molecular networks of genotype–phenotype associations, or “phenomic associations.” The HTP phenomics research field faces the challenge of technological research and development to generate novel tools in computation and informatics that will allow researchers to amass, access, integrate, organize, and manage phenotypic databases across species and enable genomewide analysis to associate phenotypic information with genomic data at different scales of biology. Key state-of-the-art technological advancements critical for HTP phenomics research are covered in this review. In particular, we highlight the power of computational approaches to conduct large-scale phenomics studies. PMID:17202287
Automatic poisson peak harvesting for high throughput protein identification.
Breen, E J; Hopwood, F G; Williams, K L; Wilkins, M R
2000-06-01
High throughput identification of proteins by peptide mass fingerprinting requires an efficient means of picking peaks from mass spectra. Here, we report the development of a peak harvester to automatically pick monoisotopic peaks from spectra generated on matrix-assisted laser desorption/ionisation time of flight (MALDI-TOF) mass spectrometers. The peak harvester uses advanced mathematical morphology and watershed algorithms to first process spectra to stick representations. Subsequently, Poisson modelling is applied to determine which peak in an isotopically resolved group represents the monoisotopic mass of a peptide. We illustrate the features of the peak harvester with mass spectra of standard peptides, digests of gel-separated bovine serum albumin, and with Escherichia coli proteins prepared by two-dimensional polyacrylamide gel electrophoresis. In all cases, the peak harvester picked essentially the same monoisotopic peaks as an experienced human operator, and also proved effective in the identification of monoisotopic masses in cases where isotopic distributions of peptides were overlapping. The peak harvester can be operated in an interactive mode, or can be completely automated and linked through to peptide mass fingerprinting protein identification tools to achieve high throughput automated protein identification.
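The Poisson step described above can be sketched as follows: each candidate monoisotopic position in an isotope group is scored by how well the remaining peak intensities match a normalized Poisson envelope. The rate parameter, least-squares scoring, and function names here are illustrative assumptions, not the published algorithm.

```python
import math

def poisson_pmf(k, lam):
    """Probability of k extra neutrons under a Poisson isotope model."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def pick_monoisotopic(mz_intensities, lam):
    """Given an isotopically resolved group as (m/z, intensity) pairs
    spaced ~1 Da apart, score each candidate monoisotopic position by
    the least-squares fit between the observed intensities from that
    position onward and a Poisson envelope.  Returns the m/z judged
    to be the monoisotopic peak."""
    intensities = [i for _, i in mz_intensities]
    best_mz, best_score = None, float("inf")
    # Require at least two isotope peaks so normalization cannot
    # produce a trivially perfect single-peak fit.
    for start in range(len(mz_intensities) - 1):
        obs = intensities[start:]
        total = sum(obs)
        if total == 0:
            continue
        obs_norm = [o / total for o in obs]
        model = [poisson_pmf(k, lam) for k in range(len(obs))]
        msum = sum(model)
        model = [m / msum for m in model]
        score = sum((o - m) ** 2 for o, m in zip(obs_norm, model))
        if score < best_score:
            best_mz, best_score = mz_intensities[start][0], score
    return best_mz
```

In this toy setting, a small noise peak one dalton below the true envelope is correctly rejected because the Poisson model cannot accommodate a leading low-intensity peak.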
Breast cancer diagnosis using spatial light interference microscopy
NASA Astrophysics Data System (ADS)
Majeed, Hassaan; Kandel, Mikhail E.; Han, Kevin; Luo, Zelun; Macias, Virgilia; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel
2015-11-01
The standard practice in histopathology of breast cancers is to examine a hematoxylin and eosin (H&E) stained tissue biopsy under a microscope to diagnose whether a lesion is benign or malignant. This determination is made based on a manual, qualitative inspection, making it subject to investigator bias and resulting in low throughput. Hence, a quantitative, label-free, and high-throughput diagnosis method is highly desirable. We present here preliminary results showing the potential of quantitative phase imaging for breast cancer screening and for assisting differential diagnosis. We generated phase maps of unstained breast tissue biopsies using spatial light interference microscopy (SLIM). As a first step toward quantitative diagnosis based on SLIM, we carried out a qualitative evaluation of our label-free images. These images were shown to two pathologists who classified each case as either benign or malignant. This diagnosis was then compared against the diagnosis of the two pathologists on corresponding H&E stained tissue images and the number of agreements was counted. The agreement between SLIM and H&E based diagnosis was 88% for the first pathologist and 87% for the second. Our results demonstrate the potential and promise of SLIM for quantitative, label-free, and high-throughput diagnosis.
Christodoulou, Eleni G.; Yang, Hai; Lademann, Franziska; Pilarsky, Christian; Beyer, Andreas; Schroeder, Michael
2017-01-01
Mutated KRAS plays an important role in many cancers. Although targeting KRAS directly is difficult, indirect inactivation via synthetic lethal partners (SLPs) is promising. Yet, to date, no SLPs from high-throughput RNAi screening are supported by multiple screens. Here, we address this problem by aggregating and ranking data over three independent high-throughput screens. We integrate rankings by minimizing the displacement and by considering established methods such as RIGER and RSA. Our meta-analysis reveals COPB2 as a potential SLP of KRAS with good support from all three screens. COPB2 is a coatomer subunit and its knockdown has already been linked to disabled autophagy and reduced tumor growth. We confirm COPB2 as an SLP in knockdown experiments on pancreatic and colorectal cancer cell lines. Overall, consistent integration of high-throughput data can generate candidate synthetic lethal partners, which individual screens do not uncover. Concretely, we reveal and confirm that COPB2 is a synthetic lethal partner of KRAS and hence a promising cancer target. Ligands inhibiting COPB2 may, therefore, be promising new cancer drugs. PMID:28415695
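The integration idea can be illustrated with a minimal consensus-ranking sketch: genes are scored by their mean rank across screens, a simple heuristic in the spirit of the displacement-minimizing aggregation described above (the mean minimizes total squared displacement). This is a toy stand-in, not a reimplementation of RIGER or RSA.

```python
def aggregate_ranks(rankings):
    """Aggregate several ranked gene lists into a consensus ordering.
    Each ranking is a list of gene names, best hit first.  Genes are
    scored by their mean rank across screens; a gene missing from a
    screen is assigned that screen's worst rank + 1.  Ties break
    alphabetically for determinism."""
    genes = set().union(*rankings)
    scores = {}
    for g in genes:
        ranks = [r.index(g) if g in r else len(r) for r in rankings]
        scores[g] = sum(ranks) / len(ranks)
    return sorted(genes, key=lambda g: (scores[g], g))
```

A hit ranked near the top of all three screens (as COPB2 was) rises above genes that score well in only one screen.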
DnaSAM: Software to perform neutrality testing for large datasets with complex null models.
Eckert, Andrew J; Liechty, John D; Tearse, Brandon R; Pande, Barnaly; Neale, David B
2010-05-01
Patterns of DNA sequence polymorphisms can be used to understand the processes of demography and adaptation within natural populations. High-throughput generation of DNA sequence data has historically been the bottleneck with respect to data processing and experimental inference. Advances in marker technologies have largely solved this problem. Currently, the limiting step is computational, with most molecular population genetic software allowing a gene-by-gene analysis through a graphical user interface. An easy-to-use analysis program that combines high-throughput processing of multiple sequence alignments with the flexibility to simulate data under complex demographic scenarios is currently lacking. We introduce a new program, named DnaSAM, which allows high-throughput estimation of DNA sequence diversity and neutrality statistics from experimental data along with the ability to test those statistics via Monte Carlo coalescent simulations. These simulations are conducted using the ms program, which is able to incorporate several genetic parameters (e.g. recombination) and demographic scenarios (e.g. population bottlenecks). The output is a set of diversity and neutrality statistics with associated probability values under a user-specified null model, stored in an easy-to-manipulate text file. © 2009 Blackwell Publishing Ltd.
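The kind of neutrality statistic and Monte Carlo test DnaSAM automates can be sketched as follows: Tajima's D compares mean pairwise diversity with Watterson's theta (variance constants per Tajima 1989), and a p-value is read off an empirical null distribution. Here the null values are simply passed in; the real program obtains them from ms coalescent runs.

```python
import itertools

def tajimas_d(seqs):
    """Tajima's D from a list of aligned haplotype strings (no gaps).
    Returns 0.0 if there are no segregating sites."""
    n = len(seqs)
    s = sum(1 for site in zip(*seqs) if len(set(site)) > 1)  # segregating sites
    if s == 0:
        return 0.0
    pairs = list(itertools.combinations(seqs, 2))
    # pi: mean number of pairwise differences.
    pi = sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs) / len(pairs)
    # Variance constants (Tajima 1989).
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n ** 2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (pi - s / a1) / (e1 * s + e2 * s * (s - 1)) ** 0.5

def mc_pvalue(observed, simulated):
    """Two-tailed Monte Carlo p-value: fraction of simulated statistics
    (e.g. from coalescent runs under a null model) at least as extreme
    as the observed one."""
    return sum(1 for x in simulated if abs(x) >= abs(observed)) / len(simulated)
```

For four toy haplotypes with two segregating sites, D comes out slightly positive, and its significance would be judged against the simulated null.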
Klijn, Marieke E; Hubbuch, Jürgen
2018-04-27
Protein phase diagrams are a tool to investigate cause and consequence of solution conditions on protein phase behavior. The effects are scored according to aggregation morphologies such as crystals or amorphous precipitates. Solution conditions affect morphological features, such as crystal size, as well as kinetic features, such as crystal growth time. Commonly used data visualization techniques include individual line graphs or symbol-based phase diagrams. These techniques have limitations in terms of handling large datasets, comprehensiveness, or completeness. To eliminate these limitations, morphological and kinetic features obtained from crystallization images generated with high throughput microbatch experiments have been visualized with radar charts in combination with the empirical phase diagram (EPD) method. Morphological features (crystal size, shape, and number, as well as precipitate size) and kinetic features (crystal and precipitate onset and growth time) were extracted for 768 solutions with varying chicken egg white lysozyme concentration, salt type, ionic strength, and pH. Image-based aggregation morphology and kinetic features were compiled into a single and easily interpretable figure, thereby showing that the EPD method can support high throughput crystallization experiments in terms of both data amount and data complexity. Copyright © 2018. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhabra, Swapnil; Butland, Gareth; Elias, Dwayne A
The ability to conduct advanced functional genomic studies of the thousands of sequenced bacteria has been hampered by the lack of available tools for making high-throughput chromosomal manipulations in a systematic manner that can be applied across diverse species. In this work, we highlight the use of synthetic biological tools to assemble custom suicide vectors with reusable and interchangeable DNA parts to facilitate chromosomal modification at designated loci. These constructs enable an array of downstream applications including gene replacement and creation of gene fusions with affinity purification or localization tags. We employed this approach to engineer chromosomal modifications in a bacterium that has previously proven difficult to manipulate genetically, Desulfovibrio vulgaris Hildenborough, to generate a library of 662 strains. Furthermore, we demonstrate how these modifications can be used for examining metabolic pathways, protein-protein interactions, and protein localization. The ubiquity of suicide constructs in gene replacement throughout biology suggests that this approach can be applied to engineer a broad range of species for a diverse array of systems biological applications and is amenable to high-throughput implementation.
Shinde, Aniketa; Guevarra, Dan; Haber, Joel A.; ...
2014-10-21
Many solar fuel generator designs involve illumination of a photoabsorber stack coated with a catalyst for the oxygen evolution reaction (OER). In this design, impinging light must pass through the catalyst layer before reaching the photoabsorber(s), and thus optical transmission is an important function of the OER catalyst layer. Many oxide catalysts, such as those containing the elements Ni and Co, form oxide or oxyhydroxide phases in alkaline solution at operational potentials that differ from the phases observed in ambient conditions. To characterize the transparency of such catalysts during OER operation, 1031 unique compositions containing the elements Ni, Co, Ce, La, and Fe were prepared by a high throughput inkjet printing technique. The catalytic current of each composition was recorded at an OER overpotential of 0.33 V with simultaneous measurement of the spectral transmission. By combining the optical and catalytic properties, the combined catalyst efficiency was calculated to identify the optimal catalysts for solar fuel applications within the material library. Our measurements required development of a new high throughput instrument with integrated electrochemistry and spectroscopy measurements, which enables various spectroelectrochemistry experiments.
An enzyme-mediated protein-fragment complementation assay for substrate screening of sortase A.
Li, Ning; Yu, Zheng; Ji, Qun; Sun, Jingying; Liu, Xiao; Du, Mingjuan; Zhang, Wei
2017-04-29
Enzyme-mediated protein conjugation has gained great attention recently due to the remarkable site-selectivity and mild reaction conditions afforded by the nature of the enzyme. Among the enzymes reported, sortase A from Staphylococcus aureus (SaSrtA) is the most popular due to its selectivity and well-demonstrated applications. Position scanning has been widely applied to understand enzyme substrate specificity, but the low throughput of chemical synthesis of peptide substrates and of analytical methods (HPLC, LC-ESI-MS) has been the major hurdle to fully decoding an enzyme's substrate profile. We have developed a simple high-throughput substrate profiling method to reveal novel substrates of SaSrtA 7M, a widely used hyperactive peptide ligase, using a modified protein-fragment complementation assay (PCA). A small library targeting the LPATG motif recognized by SaSrtA 7M was generated and screened against proteins carrying N-terminal glycine. Using this method, we have confirmed all currently known substrates of the enzyme and, moreover, identified some previously unknown substrates with varying activities. The method provides an easy, fast, and highly sensitive way to determine the substrate profile of a peptide ligase in a high-throughput manner. Copyright © 2017 Elsevier Inc. All rights reserved.
Müllenbroich, M Caroline; Silvestri, Ludovico; Onofri, Leonardo; Costantini, Irene; Hoff, Marcel Van't; Sacconi, Leonardo; Iannello, Giulio; Pavone, Francesco S
2015-10-01
Comprehensive mapping and quantification of neuronal projections in the central nervous system requires high-throughput imaging of large volumes with microscopic resolution. To this end, we have developed a confocal light-sheet microscope that has been optimized for three-dimensional (3-D) imaging of structurally intact clarified whole-mount mouse brains. We describe the optical and electromechanical arrangement of the microscope and give details on the organization of the microscope management software. The software orchestrates all components of the microscope, coordinates critical timing and synchronization, and has been written in a versatile and modular structure using the LabVIEW language. It can easily be adapted and integrated to other microscope systems and has been made freely available to the light-sheet community. The tremendous amount of data routinely generated by light-sheet microscopy further requires novel strategies for data handling and storage. To complete the full imaging pipeline of our high-throughput microscope, we further elaborate on big data management from streaming of raw images up to stitching of 3-D datasets. The mesoscale neuroanatomy imaged at micron-scale resolution in those datasets allows characterization and quantification of neuronal projections in unsectioned mouse brains.
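The stitching step mentioned above relies on estimating the translational offset between overlapping image tiles. A standard building block for this is phase correlation, sketched below with numpy; this is an illustrative fragment, not the authors' pipeline.

```python
import numpy as np

def tile_offset(ref, moved):
    """Estimate the integer (row, col) shift s such that
    moved ≈ np.roll(ref, s, axis=(0, 1)), via phase correlation:
    normalize the cross-power spectrum so the inverse FFT is a sharp
    peak at the relative displacement."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moved)
    cross = f_mov * np.conj(f_ref)
    cross /= np.abs(cross) + 1e-12  # keep only phase information
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the tile size into negative offsets.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))
```

In a stitching pipeline, such pairwise offsets between neighboring tiles would then be reconciled globally before the 3-D dataset is assembled.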
High-throughput determination of RNA structure by proximity ligation.
Ramani, Vijay; Qiu, Ruolan; Shendure, Jay
2015-09-01
We present an unbiased method to globally resolve RNA structures through pairwise contact measurements between interacting regions. RNA proximity ligation (RPL) uses proximity ligation of native RNA followed by deep sequencing to yield chimeric reads with ligation junctions in the vicinity of structurally proximate bases. We apply RPL in both baker's yeast (Saccharomyces cerevisiae) and human cells and generate contact probability maps for ribosomal and other abundant RNAs, including yeast snoRNAs, the RNA subunit of the signal recognition particle and the yeast U2 spliceosomal RNA homolog. RPL measurements correlate with established secondary structures for these RNA molecules, including stem-loop structures and long-range pseudoknots. We anticipate that RPL will complement the current repertoire of computational and experimental approaches in enabling the high-throughput determination of secondary and tertiary RNA structures.
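The downstream analysis of RPL data reduces to turning junction coordinates from chimeric reads into a binned, symmetric contact-probability matrix. A schematic version, with made-up bin sizes and no normalization for coverage bias, is:

```python
import numpy as np

def contact_map(junctions, length, bin_size=10):
    """Build a symmetric contact-probability map from ligation
    junctions.  `junctions` is a list of (pos1, pos2) base coordinates
    extracted from chimeric reads on an RNA of `length` bases; counts
    are accumulated into bins and normalized to frequencies."""
    n = (length + bin_size - 1) // bin_size
    m = np.zeros((n, n))
    for p1, p2 in junctions:
        i, j = p1 // bin_size, p2 // bin_size
        m[i, j] += 1
        m[j, i] += 1
    total = m.sum()
    return m / total if total else m
```

Stem-loops appear as enrichment near the diagonal of such a map, while long-range pseudoknots show up as off-diagonal blocks.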
Printed droplet microfluidics for on demand dispensing of picoliter droplets and cells
Cole, Russell H.; Tang, Shi-Yang; Siltanen, Christian A.; Shahi, Payam; Zhang, Jesse Q.; Poust, Sean; Gartner, Zev J.; Abate, Adam R.
2017-01-01
Although the elementary unit of biology is the cell, high-throughput methods for the microscale manipulation of cells and reagents are limited. The existing options either are slow, lack single-cell specificity, or use fluid volumes out of scale with those of cells. Here we present printed droplet microfluidics, a technology to dispense picoliter droplets and cells with deterministic control. The core technology is a fluorescence-activated droplet sorter coupled to a specialized substrate that together act as a picoliter droplet and single-cell printer, enabling high-throughput generation of intricate arrays of droplets, cells, and microparticles. Printed droplet microfluidics provides a programmable and robust technology to construct arrays of defined cell and reagent combinations and to integrate multiple measurement modalities together in a single assay. PMID:28760972
Mining high-throughput experimental data to link gene and function.
Blaby-Haas, Crysten E; de Crécy-Lagard, Valérie
2011-04-01
Nearly 2200 genomes that encode around 6 million proteins have now been sequenced. Around 40% of these proteins are of unknown function, even when function is loosely and minimally defined as 'belonging to a superfamily'. In addition to in silico methods, the swelling stream of high-throughput experimental data can give valuable clues for linking these unknowns with precise biological roles. The goal is to develop integrative data-mining platforms that allow the scientific community at large to access and utilize this rich source of experimental knowledge. To this end, we review recent advances in generating whole-genome experimental datasets, describe where these data can be accessed, and discuss how they can be used to drive prediction of gene function. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chisholm, Bret J.; Webster, Dean C.; Bennett, James C.; Berry, Missy; Christianson, David; Kim, Jongsoo; Mayo, Bret; Gubbins, Nathan
2007-07-01
An automated, high-throughput adhesion workflow that enables pseudobarnacle adhesion and coating/substrate adhesion to be measured on coating patches arranged in an array format on 4×8 in.² panels was developed. The adhesion workflow consists of the following process steps: (1) application of an adhesive to the coating array; (2) insertion of panels into a clamping device; (3) insertion of aluminum studs into the clamping device and onto coating surfaces, aligned with the adhesive; (4) curing of the adhesive; and (5) automated removal of the aluminum studs. Validation experiments comparing data generated using the automated, high-throughput workflow to data obtained using conventional, manual methods showed that the automated system allows for accurate ranking of relative coating adhesion performance.
Integrated crystal mounting and alignment system for high-throughput biological crystallography
Nordmeyer, Robert A.; Snell, Gyorgy P.; Cornell, Earl W.; Kolbe, William F.; Yegian, Derek T.; Earnest, Thomas N.; Jaklevich, Joseph M.; Cork, Carl W.; Santarsiero, Bernard D.; Stevens, Raymond C.
2007-09-25
A method and apparatus for the transportation, remote and unattended mounting, and visual alignment and monitoring of protein crystals for synchrotron generated x-ray diffraction analysis. The protein samples are maintained at liquid nitrogen temperatures at all times: during shipment, before mounting, mounting, alignment, data acquisition and following removal. The samples must additionally be stably aligned to within a few microns at a point in space. The ability to accurately perform these tasks remotely and automatically leads to a significant increase in sample throughput and reliability for high-volume protein characterization efforts. Since the protein samples are placed in a shipping-compatible layered stack of sample cassettes each holding many samples, a large number of samples can be shipped in a single cryogenic shipping container.
YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs
Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G.; Rigoutsos, Isidore
2017-01-01
Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and the molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has presented a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of a Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity to detect most isoacceptors from minute amounts of total RNA. Moreover, YAMAT-seq shows the quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability across cell lines. YAMAT-seq thus provides a high-throughput technique for identifying tRNA profiles and their regulation in various transcriptomes, which could play important regulatory roles in translation and other biological processes. PMID:28108659
PTMScout, a Web Resource for Analysis of High Throughput Post-translational Proteomics Studies*
Naegle, Kristen M.; Gymrek, Melissa; Joughin, Brian A.; Wagner, Joel P.; Welsch, Roy E.; Yaffe, Michael B.; Lauffenburger, Douglas A.; White, Forest M.
2010-01-01
The rate of discovery of post-translational modification (PTM) sites is increasing rapidly and is significantly outpacing our biological understanding of the function and regulation of those modifications. To help meet this challenge, we have created PTMScout, a web-based interface for viewing, manipulating, and analyzing high throughput experimental measurements of PTMs in an effort to facilitate biological understanding of protein modifications in signaling networks. PTMScout is constructed around a custom database of PTM experiments and contains information from external protein and post-translational resources, including gene ontology annotations, Pfam domains, and Scansite predictions of kinase and phosphopeptide binding domain interactions. PTMScout functionality comprises data set comparison tools, data set summary views, and tools for protein assignments of peptides identified by mass spectrometry. Analysis tools in PTMScout focus on informed subset selection via common criteria and on automated hypothesis generation through subset labeling derived from identification of statistically significant enrichment of other annotations in the experiment. Subset selection can be applied through the PTMScout flexible query interface available for quantitative data measurements and data annotations as well as an interface for importing data set groupings by external means, such as unsupervised learning. We exemplify the various functions of PTMScout in application to data sets that contain relative quantitative measurements as well as data sets lacking quantitative measurements, producing a set of interesting biological hypotheses. PTMScout is designed to be a widely accessible tool, enabling generation of multiple types of biological hypotheses from high throughput PTM experiments and advancing functional assignment of novel PTM sites. PTMScout is available at http://ptmscout.mit.edu. PMID:20631208
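The "statistically significant enrichment of annotations" used for automated hypothesis generation is conventionally computed with a one-sided hypergeometric test. A minimal sketch follows; the variable names are ours, not PTMScout's.

```python
from math import comb

def enrichment_pvalue(k, n, K, N):
    """One-sided hypergeometric P(X >= k): the probability of seeing
    at least k annotated peptides in a subset of size n drawn without
    replacement from an experiment of N peptides, K of which carry
    the annotation.  Small values flag annotations enriched in the
    subset."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)
```

In practice such p-values would be computed for every annotation (GO term, Pfam domain, Scansite prediction) against each peptide subset, with multiple-testing correction applied afterward.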
Wetmore, Barbara A.; Wambaugh, John F.; Allen, Brittany; Ferguson, Stephen S.; Sochaski, Mark A.; Setzer, R. Woodrow; Houck, Keith A.; Strope, Cory L.; Cantwell, Katherine; Judson, Richard S.; LeCluyse, Edward; Clewell, Harvey J.; Thomas, Russell S.; Andersen, Melvin E.
2015-01-01
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast efforts expand (i.e., Phase II) beyond food-use pesticides toward a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated using probabilistic modeling and Bayesian approaches generated by the U.S. Environmental Protection Agency (EPA) ExpoCast program. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. This HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study, whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, 3 or 13 chemicals possessed AERs < 1 or < 100, respectively. Diverse bioactivities across a range of assays and concentrations were also noted across the wider chemical space surveyed. The availability of HT exposure estimation and bioactivity screening tools provides an opportunity to incorporate a risk-based strategy for use in testing prioritization. PMID:26251325
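The OED and AER arithmetic described above can be sketched under a simple linear-kinetics assumption: if a toxicokinetic model (parameterized with measured hepatic clearance and plasma protein binding) predicts a steady-state plasma concentration Css for a 1 mg/kg/day dose, then OED scales linearly with the bioactive concentration. The functions below are an illustrative simplification, not the httk implementation.

```python
def oral_equivalent_dose(ac50_uM, css_uM_per_mgkgday):
    """Convert an in vitro bioactive concentration (uM) to an oral
    equivalent dose (mg/kg/day), assuming linear kinetics:
    OED = AC50 / Css(1 mg/kg/day).  Css would come from a TK model
    informed by hepatic clearance and plasma protein binding; here
    it is simply an input."""
    return ac50_uM / css_uM_per_mgkgday

def activity_exposure_ratio(oed, exposure):
    """AER = OED / predicted exposure (same units).  AER < 1 flags
    chemicals whose predicted exposure reaches bioactive doses;
    larger AERs indicate a wider margin."""
    return oed / exposure
```

For example, an AC50 of 3 uM with a predicted Css of 1.5 uM per 1 mg/kg/day gives an OED of 2 mg/kg/day; against a predicted exposure of 0.01 mg/kg/day, the AER is 200.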
Brito Palma, Bernardo; Fisher, Charles W; Rueff, José; Kranendonk, Michel
2016-05-16
The formation of reactive metabolites through biotransformation is the suspected cause of many adverse drug reactions. Testing for the propensity of a drug to form reactive metabolites has increasingly become an integral part of lead-optimization strategy in drug discovery. DNA reactivity is one undesirable facet of a drug or its metabolites and can lead to increased risk of cancer and reproductive toxicity. Many drugs are metabolized by cytochromes P450 in the liver and other tissues, and these reactions can generate hard electrophiles. These hard electrophilic reactive metabolites may react with DNA and may be detected in standard in vitro genotoxicity assays; however, the majority of these assays fall short due to the use of animal-derived organ extracts that inadequately represent human metabolism. The current study describes the development of bacterial systems that efficiently detect DNA-damaging electrophilic reactive metabolites generated by human P450 biotransformation. These assays use a GFP reporter system that detects DNA damage through induction of the SOS response and a GFP reporter to control for cytotoxicity. Two human CYP1A2-competent prototypes presented here have appropriate characteristics for the detection of DNA-damaging reactive metabolites in a high-throughput manner. The advantages of this approach include a short assay time (120-180 min) with real-time measurement, sensitivity to small amounts of compound, and adaptability to a microplate format. These systems are suitable for high-throughput assays and can serve as prototypes for the development of future enhanced versions.
Agarose droplet microfluidics for highly parallel and efficient single molecule emulsion PCR.
Leng, Xuefei; Zhang, Wenhua; Wang, Chunming; Cui, Liang; Yang, Chaoyong James
2010-11-07
An agarose droplet method was developed for highly parallel and efficient single molecule emulsion PCR. The method capitalizes on the unique thermoresponsive sol-gel switching property of agarose for highly efficient DNA amplification and amplicon trapping. Uniform agarose solution droplets generated via a microfluidic chip serve as robust and inert nanolitre PCR reactors for single-copy DNA molecule amplification. After PCR, agarose droplets are gelated to form agarose beads, trapping all amplicons in each reactor to maintain the monoclonality of each droplet. This method does not require co-encapsulation of primer-labeled microbeads, allows high throughput generation of uniform droplets, and enables high PCR efficiency, making it a promising platform for many single-copy genetic studies.
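Single-copy encapsulation in droplet PCR is governed by Poisson loading statistics: the template is diluted so that most occupied droplets receive exactly one molecule. A worked sketch (standard statistics, not specific to this paper's protocol):

```python
from math import exp

def droplet_occupancy(lam):
    """Poisson loading with mean `lam` DNA copies per droplet:
    returns (P(empty), P(exactly one), P(more than one))."""
    p0 = exp(-lam)
    p1 = lam * exp(-lam)
    return p0, p1, 1 - p0 - p1

def fraction_monoclonal(lam):
    """Among occupied droplets, the fraction containing a single
    copy; used to pick a dilution where nearly all amplified
    droplets are monoclonal."""
    p0, p1, _ = droplet_occupancy(lam)
    return p1 / (1 - p0)
```

At a typical loading of 0.1 copies per droplet, about 90% of droplets are empty, but roughly 95% of the occupied droplets are monoclonal.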
A Versatile Cell Death Screening Assay Using Dye-Stained Cells and Multivariate Image Analysis.
Collins, Tony J; Ylanko, Jarkko; Geng, Fei; Andrews, David W
2015-11-01
A novel dye-based method for measuring cell death in image-based screens is presented. Unlike conventional high- and medium-throughput cell death assays that measure only one form of cell death accurately, multivariate analysis of micrographs of cells stained with an inexpensive mix of the red dye nonyl acridine orange and a nuclear stain made it possible to quantify cell death induced by a variety of different agonists, even without a positive control. Surprisingly, using a single known cytotoxic agent as a positive control for training a multivariate classifier allowed accurate quantification of cytotoxicity for mechanistically unrelated compounds, enabling generation of dose-response curves. Comparison with low throughput biochemical methods suggested that cell death was accurately distinguished from cell stress induced by low concentrations of the bioactive compounds Tunicamycin and Brefeldin A. High-throughput image-based analyses of more than 300 kinase inhibitors correctly identified 11 as cytotoxic with only 1 false positive. The simplicity and robustness of this dye-based assay make it particularly suited to live cell screening for toxic compounds.
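The single-positive-control training idea can be illustrated with a nearest-centroid toy: per-cell feature vectors from vehicle-treated wells and from the one cytotoxic control define "live" and "dead" centroids, and a compound is scored by the fraction of its cells closer to the dead centroid. The feature choice and classifier here are illustrative assumptions, not the published multivariate analysis.

```python
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def dead_fraction(neg_features, pos_features, cells):
    """Nearest-centroid stand-in for the multivariate classifier:
    `neg_features` and `pos_features` are per-cell feature vectors
    from vehicle and from the single cytotoxic positive control;
    returns the fraction of `cells` classified as dead."""
    c_neg, c_pos = centroid(neg_features), centroid(pos_features)
    dead = sum(1 for c in cells
               if math.dist(c, c_pos) < math.dist(c, c_neg))
    return dead / len(cells)
```

Scoring a dilution series of a test compound with such a function yields the per-well dead fractions from which a dose-response curve can be fit.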
PREVAIL: IBM's e-beam technology for next generation lithography
NASA Astrophysics Data System (ADS)
Pfeiffer, Hans C.
2000-07-01
PREVAIL (Projection Reduction Exposure with Variable Axis Immersion Lenses) represents the high-throughput e-beam projection approach to NGL which IBM is pursuing in cooperation with Nikon Corporation as an alliance partner. This paper discusses the challenges and accomplishments of the PREVAIL project. The supreme challenge facing all e-beam lithography approaches has been, and still is, throughput. Since the throughput of e-beam projection systems is severely limited by the available optical field size, the key to success is the ability to overcome this limitation. The PREVAIL technique overcomes field-limiting off-axis aberrations through the use of variable axis lenses, which electronically shift the optical axis simultaneously with the deflected beam so that the beam effectively remains on axis. The resist images obtained with the Proof-of-Concept (POC) system demonstrate that PREVAIL effectively eliminates off-axis aberrations affecting both the resolution and placement accuracy of pixels. As part of the POC system, a high-emittance gun has been developed to provide uniform illumination of the patterned subfield and to fill the large numerical aperture projection optics designed to significantly reduce beam blur caused by Coulomb interaction.
Efficient mouse genome engineering by CRISPR-EZ technology.
Modzelewski, Andrew J; Chen, Sean; Willis, Brandon J; Lloyd, K C Kent; Wood, Joshua A; He, Lin
2018-06-01
CRISPR/Cas9 technology has transformed mouse genome editing with unprecedented precision, efficiency, and ease; however, the current practice of microinjecting CRISPR reagents into pronuclear-stage embryos remains rate-limiting. We thus developed CRISPR ribonucleoprotein (RNP) electroporation of zygotes (CRISPR-EZ), an electroporation-based technology that outperforms pronuclear and cytoplasmic microinjection in efficiency, simplicity, cost, and throughput. In C57BL/6J and C57BL/6N mouse strains, CRISPR-EZ achieves 100% delivery of Cas9/single-guide RNA (sgRNA) RNPs, facilitating indel mutations (insertions or deletions), exon deletions, point mutations, and small insertions. In a side-by-side comparison in the high-throughput KnockOut Mouse Project (KOMP) pipeline, CRISPR-EZ consistently outperformed microinjection. Here, we provide an optimized protocol covering sgRNA synthesis, embryo collection, RNP electroporation, mouse generation, and genotyping strategies. Using CRISPR-EZ, a graduate-level researcher with basic embryo-manipulation skills can obtain genetically modified mice in 6 weeks. Altogether, CRISPR-EZ is a simple, economic, efficient, and high-throughput technology that is potentially applicable to other mammalian species.
Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.
2012-01-01
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394
Lochlainn, Seosamh Ó; Amoah, Stephen; Graham, Neil S; Alamer, Khalid; Rios, Juan J; Kurup, Smita; Stoute, Andrew; Hammond, John P; Østergaard, Lars; King, Graham J; White, Phillip J; Broadley, Martin R
2011-12-08
Targeted Induced Loci Lesions IN Genomes (TILLING) is increasingly being used to generate and identify mutations in target genes of crop genomes. TILLING populations of several thousand lines have been generated in a number of crop species including Brassica rapa. Genetic analysis of mutants identified by TILLING requires an efficient, high-throughput and cost effective genotyping method to track the mutations through numerous generations. High resolution melt (HRM) analysis has been used in a number of systems to identify single nucleotide polymorphisms (SNPs) and insertion/deletions (IN/DELs) enabling the genotyping of different types of samples. HRM is ideally suited to high-throughput genotyping of multiple TILLING mutants in complex crop genomes. To date it has been used to identify mutants and genotype single mutations. The aim of this study was to determine if HRM can facilitate downstream analysis of multiple mutant lines identified by TILLING in order to characterise allelic series of EMS induced mutations in target genes across a number of generations in complex crop genomes. We demonstrate that HRM can be used to genotype allelic series of mutations in two genes, BraA.CAX1a and BraA.MET1.a in Brassica rapa. We analysed 12 mutations in BraA.CAX1.a and five in BraA.MET1.a over two generations including a back-cross to the wild-type. Using a commercially available HRM kit and the Lightscanner™ system we were able to detect mutations in heterozygous and homozygous states for both genes. Using HRM genotyping on TILLING derived mutants, it is possible to generate an allelic series of mutations within multiple target genes rapidly. Lines suitable for phenotypic analysis can be isolated approximately 8-9 months (3 generations) from receiving M3 seed of Brassica rapa from the RevGenUK TILLING service.
Bosch, Carles; Martínez, Albert; Masachs, Nuria; Teixeira, Cátia M; Fernaud, Isabel; Ulloa, Fausto; Pérez-Martínez, Esther; Lois, Carlos; Comella, Joan X; DeFelipe, Javier; Merchán-Pérez, Angel; Soriano, Eduardo
2015-01-01
The fine analysis of synaptic contacts is usually performed using transmission electron microscopy (TEM) and its combination with neuronal labeling techniques. However, the complex 3D architecture of neuronal samples calls for their reconstruction from serial sections. Here we show that focused ion beam/scanning electron microscopy (FIB/SEM) allows efficient, complete, and automatic 3D reconstruction of identified dendrites, including their spines and synapses, from GFP/DAB-labeled neurons, with a resolution comparable to that of TEM. We applied this technology to analyze the synaptogenesis of labeled adult-generated granule cells (GCs) in mice. 3D reconstruction of dendritic spines in GCs aged 3-4 and 8-9 weeks revealed two different stages of dendritic spine development and unexpected features of synapse formation, including vacant and branched dendritic spines and presynaptic terminals establishing synapses with up to 10 dendritic spines. Given the reliability, efficiency, and high resolution of FIB/SEM technology and the wide use of DAB in conventional EM, we consider FIB/SEM fundamental for the detailed characterization of identified synaptic contacts in neurons in a high-throughput manner.
Veeranagouda, Yaligara; Debono-Lagneaux, Delphine; Fournet, Hamida; Thill, Gilbert; Didier, Michel
2018-01-16
The emergence of clustered regularly interspaced short palindromic repeats-Cas9 (CRISPR-Cas9) gene editing systems has enabled the creation of specific mutants at low cost, in a short time and with high efficiency, in eukaryotic cells. Since a CRISPR-Cas9 system typically creates an array of mutations in targeted sites, a successful gene editing project requires careful selection of edited clones. This process can be very challenging, especially when working with multiallelic genes and/or polyploid cells (such as cancer and plant cells). Here we describe a next-generation sequencing method called CRISPR-Cas9 Edited Site Sequencing (CRES-Seq) for the efficient and high-throughput screening of CRISPR-Cas9-edited clones. CRES-Seq facilitates precise genotyping of up to 96 CRISPR-Cas9-edited sites (CRES) in a single MiniSeq (Illumina) run with an approximate sequencing cost of $6/clone. CRES-Seq is particularly useful when multiple genes are simultaneously targeted by CRISPR-Cas9, and also for screening of clones generated from multiallelic genes/polyploid cells. © 2018 John Wiley & Sons, Inc.
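As a schematic aside, the genotyping step of such a screen reduces to classifying the reads covering each edited site and summarizing allele fractions per clone. A toy Python sketch (the length-based calls and the 20% allele-fraction cutoff are illustrative assumptions; real pipelines align reads to the reference):

```python
from collections import Counter

def call_edit(read, ref):
    """Toy classification of one read against the reference amplicon.
    Illustrative only: real pipelines align reads rather than compare lengths."""
    if read == ref:
        return "wild-type"
    if len(read) < len(ref):
        return "deletion"
    if len(read) > len(ref):
        return "insertion"
    return "substitution"

def genotype_clone(reads, ref, min_frac=0.2):
    """Report alleles supported by at least min_frac of a clone's reads."""
    counts = Counter(call_edit(r, ref) for r in reads)
    total = sum(counts.values())
    return sorted(a for a, n in counts.items() if n / total >= min_frac)

ref = "ACGTACGTACGT"
# 48 wild-type reads, 50 deletion reads, 2 noise reads -> heterozygous call
reads = ["ACGTACGTACGT"] * 48 + ["ACGTACGT"] * 50 + ["ACGTACGTACGTAA"] * 2
print(genotype_clone(reads, ref))  # ['deletion', 'wild-type']
```

The allele-fraction cutoff filters sequencing noise while still resolving heterozygous and multiallelic clones.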
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
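For the simplest of the bias models mentioned (measurement = true value + row effect + column effect), a standard removal technique is median polish, as used in B-scoring. A minimal Python sketch on a simulated 96-well plate; this is not the AssayCorrector implementation (which is in R and also models bias interactions):

```python
import numpy as np

def remove_additive_bias(plate, n_iter=10):
    """Median polish: alternately subtract row and column medians to strip
    additive row/column effects from a plate of measurements."""
    residual = plate.astype(float).copy()
    for _ in range(n_iter):
        residual -= np.median(residual, axis=1, keepdims=True)  # row effects
        residual -= np.median(residual, axis=0, keepdims=True)  # column effects
    return residual

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.1, size=(8, 12))   # 96-well plate, noise only
row_bias = 0.5 * np.arange(8)[:, None]        # additive gradient down the rows
biased = signal + row_bias
corrected = remove_additive_bias(biased)
print(corrected.std() < biased.std())  # True
```

As the abstract notes, a purely additive (or purely multiplicative) model like this fails in wells where row and column biases interact, which is what the proposed models address.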
Tissue vascularization through 3D printing: Will technology bring us flow?
Paulsen, S J; Miller, J S
2015-05-01
Though in vivo models provide the most physiologically relevant environment for studying tissue function, in vitro studies provide researchers with explicit control over experimental conditions and the potential to develop high throughput testing methods. In recent years, advancements in developmental biology research and imaging techniques have significantly improved our understanding of the processes involved in vascular development. However, the task of recreating the complex, multi-scale vasculature seen in in vivo systems remains elusive. 3D bioprinting offers a potential method to generate controlled vascular networks with hierarchical structure approaching that of in vivo networks. Bioprinting is an interdisciplinary field that relies on advances in 3D printing technology along with advances in imaging and computational modeling, which allow researchers to monitor cellular function and to better understand cellular environment within the printed tissue. As bioprinting technologies improve with regards to resolution, printing speed, available materials, and automation, 3D printing could be used to generate highly controlled vascularized tissues in a high throughput manner for use in regenerative medicine and the development of in vitro tissue models for research in developmental biology and vascular diseases. © 2015 Wiley Periodicals, Inc.
Chen, Yu-Mei; Li, Hua; Fan, Yi; Zhang, Qi-Jun; Li, Xing; Wu, Li-Jie; Chen, Zi-Jie; Zhu, Chun; Qian, Ling-Mei
2017-04-25
Previous studies have shown that mammalian cardiac tissue has a regenerative capacity. Remarkably, neonatal mice can regenerate their cardiac tissue for up to 6 days after birth, but this capacity is lost by day 7. In this study, we aimed to explore the expression pattern of long noncoding RNA (lncRNA) during this period and examine the mechanisms underlying this process. We found that 685 lncRNAs and 1833 mRNAs were differentially expressed between P1 and P7 by next-generation high-throughput RNA sequencing. The coding genes associated with differentially expressed lncRNAs were mainly involved in metabolic processes and cell proliferation, and were also potentially associated with several key regeneration signalling pathways, including PI3K-Akt, MAPK, Hippo and Wnt. In addition, we identified some correlated targets of highly dysregulated lncRNAs such as Igfbp3, Trnp1, Itgb6, and Pim3 by the coding-noncoding gene co-expression network. These data may offer a reference resource for further investigation of the mechanisms by which lncRNAs regulate cardiac regeneration.
Biomonitoring data can help inform the development and calibration of high-throughput exposure modeling for use in prioritization and risk evaluation. A pilot project was conducted to evaluate the feasibility of using pooled banked blood samples to generate initial data on popul...
Using the ToxMiner Database for Identifying Disease-Gene Associations in the ToxCast Dataset
The US EPA ToxCast program is using in vitro, high-throughput screening (HTS) to profile and model the bioactivity of environmental chemicals. The main goal of the ToxCast program is to generate predictive signatures of toxicity that ultimately provide rapid and cost-effective me...
The US EPA ToxCast program is using in vitro HTS (High-Throughput Screening) methods to profile and model bioactivity of environmental chemicals. The main goals of the ToxCast program are to generate predictive signatures of toxicity, and ultimately provide rapid and cost-effecti...
USDA-ARS's Scientific Manuscript database
Generation of natural product libraries containing column fractions, each with only a few small molecules, by a high throughput, automated fractionation system has made it possible to implement an improved dereplication strategy for selection and prioritization of hits in a natural product discovery...
USDA-ARS's Scientific Manuscript database
An interactome is the genome-wide roadmap of protein-protein interactions that occur within an organism. Interactomes for humans, the fruit fly, and now plants such as Arabidopsis thaliana and Oryza sativa have been generated using high throughput experimental methods. It is possible to use these ...
High-throughput non-targeted analyses (NTA) rely on chemical reference databases for tentative identification of observed chemical features. Many of these databases and online resources incorporate chemical structure data not in a form that is readily observed by mass spectromet...
Community standards for genomic resources, genetic conservation, and data integration
Jill Wegrzyn; Meg Staton; Emily Grau; Richard Cronn; C. Dana Nelson
2017-01-01
Genetics and genomics are increasingly important in forestry management and conservation. Next generation sequencing can increase analytical power, but still relies on building on the structure of previously acquired data. Data standards and data sharing allow the community to maximize the analytical power of high throughput genomics data. The landscape of incomplete...
Comparative Transcriptomes and EVO-DEVO Studies Depending on Next Generation Sequencing.
Liu, Tiancheng; Yu, Lin; Liu, Lei; Li, Hong; Li, Yixue
2015-01-01
High-throughput technology has driven progress in omics studies, including genomics and transcriptomics. We review improvements in comparative omics studies attributable to high-throughput measurement by next-generation sequencing technology. Comparative genomics has been successfully applied to evolution analysis, while comparative transcriptomics is adopted for comparing expression profiles of two subjects by differential expression or differential coexpression, which enables their application in evolutionary developmental biology (EVO-DEVO) studies. EVO-DEVO studies focus on the evolutionary pressure affecting the morphogenesis of development, and previous works have been conducted to illustrate the most conserved stages during embryonic development. Earlier studies assessed similarity morphologically, at the macro level; new technology enables detection of similarity in molecular mechanisms at the micro level. Evolutionary models of embryo development, including the "funnel-like" model and the "hourglass" model, have been evaluated by combining these new comparative transcriptomic methods with prior comparative genomic information. Although the technology has promoted EVO-DEVO studies into a new era, technological and material limitations still exist, and further investigations require more subtle study design and procedure.
Chung, Thomas D Y; Sergienko, Eduard; Millán, José Luis
2010-04-27
The tissue-nonspecific alkaline phosphatase (TNAP) isozyme is centrally involved in the control of normal skeletal mineralization and pathophysiological abnormalities that lead to disease states such as hypophosphatasia, osteoarthritis, ankylosis and vascular calcification. TNAP acts in concert with the nucleoside triphosphate pyrophosphohydrolase-1 (NPP1) and the Ankylosis protein to regulate the extracellular concentrations of inorganic pyrophosphate (PP(i)), a potent inhibitor of mineralization. In this review we describe the serial development of two miniaturized high-throughput screens (HTS) for TNAP inhibitors that differ in both signal generation and detection formats, but more critically in the concentrations of a terminal alcohol acceptor used. These assay improvements allowed the rescue of the initially unsuccessful screening campaign against a large small molecule chemical library, but moreover enabled the discovery of several unique classes of molecules with distinct mechanisms of action and selectivity against the related placental (PLAP) and intestinal (IAP) alkaline phosphatase isozymes. This illustrates the underappreciated impact of the underlying fundamental assay configuration on screening success, beyond mere signal generation and detection formats.
Combinatorial Methods for Exploring Complex Materials
NASA Astrophysics Data System (ADS)
Amis, Eric J.
2004-03-01
Combinatorial and high-throughput methods have changed the paradigm of pharmaceutical synthesis and have begun to have a similar impact on materials science research. Already there are examples of combinatorial methods used for inorganic materials, catalysts, and polymer synthesis. For many investigations the primary goal has been discovery of new material compositions that optimize properties such as phosphorescence or catalytic activity. In the midst of the excitement generated to "make things", another opportunity arises for materials science to "understand things" by using the efficiency of combinatorial methods. We have shown that combinatorial methods hold potential for rapid and systematic generation of experimental data over the multi-parameter space typical of investigations in polymer physics. We have applied the combinatorial approach to studies of polymer thin films, biomaterials, polymer blends, filled polymers, and semicrystalline polymers. By combining library fabrication, high-throughput measurements, informatics, and modeling we can demonstrate validation of the methodology, new observations, and developments toward predictive models. This talk will present some of our latest work with applications to coating stability, multi-component formulations, and nanostructure assembly.
Hydrogel Droplet Microfluidics for High-Throughput Single Molecule/Cell Analysis.
Zhu, Zhi; Yang, Chaoyong James
2017-01-17
Heterogeneity among individual molecules and cells has posed significant challenges to traditional bulk assays, due to the assumption of average behavior, which would lose important biological information in heterogeneity and result in a misleading interpretation. Single molecule/cell analysis has become an important and emerging field in biological and biomedical research for insights into heterogeneity between large populations at high resolution. Compared with the ensemble bulk method, single molecule/cell analysis explores the information on time trajectories, conformational states, and interactions of individual molecules/cells, all key factors in the study of chemical and biological reaction pathways. Various powerful techniques have been developed for single molecule/cell analysis, including flow cytometry, atomic force microscopy, optical and magnetic tweezers, single-molecule fluorescence spectroscopy, and so forth. However, some of these are low-throughput, requiring single molecules/cells to be analyzed one by one. Flow cytometry is a widely used high-throughput technique for single cell analysis but lacks the ability for intercellular interaction study and local environment control. Droplet microfluidics becomes attractive for single molecule/cell manipulation because single molecules/cells can be individually encased in monodisperse microdroplets, allowing high-throughput analysis and manipulation with precise control of the local environment. Moreover, hydrogels, cross-linked polymer networks that swell in the presence of water, have been introduced into droplet microfluidic systems as hydrogel droplet microfluidics. By replacing an aqueous phase with a monomer or polymer solution, hydrogel droplets can be generated on microfluidic chips for encapsulation of single molecules/cells according to the Poisson distribution.
The sol-gel transition property endows the hydrogel droplets with new functionalities and diversified applications in single molecule/cell analysis. The hydrogel can act as a 3D cell culture matrix to mimic the extracellular environment for long-term single cell culture, which allows further heterogeneity study in proliferation, drug screening, and metastasis at the single-cell level. The sol-gel transition allows reactions in solution to be performed rapidly and efficiently with product storage in the gel for flexible downstream manipulation and analysis. More importantly, controllable sol-gel regulation provides a new way to maintain phenotype-genotype linkages in the hydrogel matrix for high throughput molecular evolution. In this Account, we will review the hydrogel droplet generation on microfluidics, single molecule/cell encapsulation in hydrogel droplets, as well as the progress made by our group and others in the application of hydrogel droplet microfluidics for single molecule/cell analysis, including single cell culture, single molecule/cell detection, single cell sequencing, and molecular evolution.
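The Poisson loading mentioned above has a simple quantitative consequence: at dilute occupancy, most droplets are empty but multi-cell droplets become rare. A quick check in Python (the loading λ = 0.1 is an assumed typical value, not a number from this Account):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(a droplet contains exactly k cells) at mean occupancy lam."""
    return lam ** k * exp(-lam) / factorial(k)

lam = 0.1
p0 = poisson_pmf(0, lam)        # empty droplets
p1 = poisson_pmf(1, lam)        # single-cell droplets
p_multi = 1.0 - p0 - p1         # droplets with 2+ cells
print(f"empty: {p0:.3f}, single: {p1:.3f}, multi: {p_multi:.4f}")
```

At this loading about 90% of droplets are empty, roughly 9% carry exactly one cell, and fewer than 0.5% carry more than one, which is why dilute loading is the usual trade-off for single-cell encapsulation.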
Strategic and Operational Plan for Integrating Transcriptomics ...
Plans for incorporating high-throughput transcriptomics into the current high-throughput screening activities at NCCT; the details are in the attached slide presentation, given at the OECD meeting on June 23, 2016.
High-Throughput Experimental Approach Capabilities | Materials Science |
Web-page excerpt describing NREL's high-throughput experimental approach capabilities for materials science, including oxysulfide sputtering and nitride/oxynitride sputtering (Combi-5).
Mask pattern generator employing EPL technology
NASA Astrophysics Data System (ADS)
Yoshioka, Nobuyuki; Yamabe, Masaki; Wakamiya, Wataru; Endo, Nobuhiro
2003-08-01
Mask cost is one of the crucial issues in device fabrication, especially for SoCs (Systems on a Chip) with small-volume production. The cost mainly depends on the productivity of mask manufacturing tools such as mask writers and defect inspection tools. EPL (Electron Projection Lithography) is being developed as a high-throughput electron beam exposure technology that will succeed optical lithography. Applying EPL technology to mask writing will result in high productivity and contribute to decreasing mask cost. The concept of a mask pattern generator employing EPL technology is proposed in this paper. It is very similar to the EPL technology used for pattern printing on a wafer. The mask patterns on the glass substrate are exposed by projecting the basic circuit patterns formed on the mother EPL mask. One example of the mother EPL mask is a stencil type made from a 200-mm Si wafer. The basic circuit patterns are IP patterns and logical primitive patterns such as cell libraries (AND, OR, inverter, flip-flop, etc.) that express the SoC device patterns. Since the SoC patterns are exposed in collective units such as IP and logical primitive patterns, high throughput is expected compared with conventional mask e-beam writers. The concept, its advantages, and the issues to be solved are discussed.
MOEX: Solvent extraction approach for recycling enriched 98Mo/ 100Mo material
Tkac, Peter; Brown, M. Alex; Momen, Abdul; ...
2017-03-20
Several promising pathways exist for the production of 99Mo/99mTc using enriched 98Mo or 100Mo. Use of these Mo targets requires a major change in current generator technology and an efficient recycling pathway to recover the valuable enriched Mo material. High recovery yields, purity, suitable chemical form and particle size are required. Results on the development of the MOEX (molybdenum solvent extraction) approach to recycle enriched Mo material are presented. The advantages of the MOEX process are very high decontamination factors from potassium and other elements, high throughput, easy scalability, automation, and minimal waste generation.
Lin, Sansan; Fischl, Anthony S; Bi, Xiahui; Parce, Wally
2003-03-01
Phospholipid molecules such as ceramide and phosphoinositides play crucial roles in signal transduction pathways. Lipid-modifying enzymes including sphingomyelinase and phosphoinositide kinases regulate the generation and degradation of these lipid-signaling molecules and are important therapeutic targets in drug discovery. We now report a sensitive and convenient method to separate these lipids using microfluidic chip-based technology. The method takes advantage of the high separation power of the microchips, which separate lipids based on micellar electrokinetic capillary chromatography (MEKC), and the high sensitivity of fluorescence detection. We further exploited the method to develop a homogeneous assay to monitor activities of lipid-modifying enzymes. The assay format consists of two steps: an on-plate enzymatic reaction using fluorescently labeled substrates followed by an on-chip MEKC separation of the reaction products from the substrates. The utility of the assay format for high-throughput screening (HTS) is demonstrated using phospholipase A(2) on the Caliper 250 HTS system: throughput of 80 min per 384-well plate can be achieved with an unattended running time of 5.4 h. This enabling technology for assaying lipid-modifying enzymes is ideal for HTS because it avoids the use of radioactive substrates and complicated separation/washing steps and detects both substrate and product simultaneously.
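The throughput figures quoted imply about four complete plates per unattended run:

```python
minutes_per_plate = 80            # 80 min per 384-well plate (from the abstract)
unattended_minutes = 5.4 * 60     # 5.4 h of unattended running time
plates = int(unattended_minutes // minutes_per_plate)
wells = plates * 384
print(plates, wells)  # 4 1536
```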
High-throughput Molecular Simulations of MOFs for CO2 Separation: Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Erucar, Ilknur; Keskin, Seda
2018-02-01
Metal organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility to generate multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insight into structure-performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focus on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discuss the opportunities and challenges in the field and address potential future developments.
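The screening metric at the heart of such studies is the adsorption selectivity computed from simulated mixture uptakes. A minimal sketch of the ranking step, with invented MOF names and uptake values standing in for GCMC simulation output:

```python
# Hypothetical illustration of ranking MOFs by CO2/CH4 adsorption selectivity.
# Uptakes (mol/kg) would come from GCMC simulations; the values here are invented.

def co2_selectivity(q_co2, q_other, y_co2, y_other):
    """Adsorption selectivity: ratio of adsorbed-phase to bulk-phase composition."""
    return (q_co2 / q_other) / (y_co2 / y_other)

# Simulated uptakes for an equimolar CO2/CH4 bulk mixture (y_CO2 = y_CH4 = 0.5).
uptakes = {
    "MOF-A": (4.2, 0.6),   # (q_CO2, q_CH4) in mol/kg
    "MOF-B": (2.8, 1.1),
    "MOF-C": (5.0, 0.4),
}

ranked = sorted(
    ((name, co2_selectivity(q1, q2, 0.5, 0.5)) for name, (q1, q2) in uptakes.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, s in ranked:
    print(f"{name}: S(CO2/CH4) = {s:.1f}")
```

Real workflows compute the uptakes with grand canonical Monte Carlo for thousands of structures; only the final ranking arithmetic is shown here.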
Devailly, Guillaume; Mantsoki, Anna; Joshi, Anagha
2016-11-01
Better protocols and decreasing costs have made high-throughput sequencing experiments accessible even to small experimental laboratories. However, comparing one or a few experiments generated by an individual lab to the vast amount of relevant data freely available in the public domain may be limited by a lack of bioinformatics expertise. Though several tools, including genome browsers, allow such comparison at the single-gene level, they do not provide a genome-wide view. We developed Heat*seq, a web-tool that allows genome-scale comparison of high-throughput experiments (chromatin immunoprecipitation followed by sequencing, RNA-sequencing and Cap Analysis of Gene Expression) provided by a user to the data in the public domain. Heat*seq currently contains over 12 000 experiments across diverse tissues and cell types in human, mouse and drosophila. Heat*seq displays interactive correlation heatmaps, with the ability to dynamically subset datasets to contextualize user experiments. High-quality figures and tables are produced and can be downloaded in multiple formats. Web application: http://www.heatstarseq.roslin.ed.ac.uk/ Source code: https://github.com/gdevailly Contact: Guillaume.Devailly@roslin.ed.ac.uk or Anagha.Joshi@roslin.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
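The core of such a correlation heatmap is simply a vector of correlation coefficients between the user's genome-wide signal and each reference experiment. A self-contained sketch with toy expression values (sample names and numbers are invented):

```python
# Sketch of the comparison a tool like Heat*seq performs: Pearson correlation
# between a user's expression vector and each public reference experiment.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

user = [5.1, 0.2, 3.3, 8.0, 1.1]            # user signal over 5 toy regions
references = {
    "liver_rep1": [5.0, 0.1, 3.0, 7.8, 1.0],  # similar tissue: high correlation
    "brain_rep1": [0.5, 6.2, 0.4, 1.0, 7.5],  # dissimilar tissue
}
heatmap_row = {name: pearson(user, ref) for name, ref in references.items()}
```

One such row per user experiment, stacked over thousands of references, gives the interactive heatmap described above.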
Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam
2015-01-01
The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.
Multiplex amplification of large sets of human exons.
Porreca, Gregory J; Zhang, Kun; Li, Jin Billy; Xie, Bin; Austin, Derek; Vassallo, Sara L; LeProust, Emily M; Peck, Bill J; Emig, Christopher J; Dahl, Fredrik; Gao, Yuan; Church, George M; Shendure, Jay
2007-11-01
A new generation of technologies is poised to reduce DNA sequencing costs by several orders of magnitude. But our ability to fully leverage the power of these technologies is crippled by the absence of suitable 'front-end' methods for isolating complex subsets of a mammalian genome at a scale that matches the throughput at which these platforms will routinely operate. We show that targeting oligonucleotides released from programmable microarrays can be used to capture and amplify approximately 10,000 human exons in a single multiplex reaction. Additionally, we show integration of this protocol with ultra-high-throughput sequencing for targeted variation discovery. Although the multiplex capture reaction is highly specific, we found that nonuniform capture is a key issue that will need to be resolved by additional optimization. We anticipate that highly multiplexed methods for targeted amplification will enable the comprehensive resequencing of human exons at a fraction of the cost of whole-genome resequencing.
Casalino, Laura; Magnani, Dario; De Falco, Sandro; Filosa, Stefania; Minchiotti, Gabriella; Patriarca, Eduardo J; De Cesare, Dario
2012-03-01
The use of Embryonic Stem Cells (ESCs) holds considerable promise both for drug discovery programs and the treatment of degenerative disorders in regenerative medicine approaches. Nevertheless, the successful use of ESCs is still limited by the lack of efficient control of ESC self-renewal and differentiation capabilities. In this context, the possibility to modulate ESC biological properties and to obtain homogenous populations of correctly specified cells will help developing physiologically relevant screens, designed for the identification of stem cell modulators. Here, we developed a high throughput screening-suitable ESC neural differentiation assay by exploiting the Cell(maker) robotic platform and demonstrated that neural progenies can be generated from ESCs in complete automation, with high standards of accuracy and reliability. Moreover, we performed a pilot screening providing proof of concept that this assay allows the identification of regulators of ESC neural differentiation in full automation.
Optima MDxt: A high throughput 335 keV mid-dose implanter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisner, Edward; David, Jonathan; Justesen, Perry
2012-11-06
The continuing demand for both energy purity and implant angle control along with high wafer throughput drove the development of the Axcelis Optima MDxt mid-dose ion implanter. The system utilizes electrostatic scanning, an electrostatic parallelizing lens and an electrostatic energy filter to produce energetically pure beams with high angular integrity. Based on field-proven components, the Optima MDxt beamline architecture offers the high beam currents possible with singly charged species, including arsenic at energies up to 335 keV, as well as large currents from multiply charged species at energies extending over 1 MeV. Conversely, the excellent energy filtering capability allows high currents at low beam energies, since it is safe to utilize large deceleration ratios. This beamline is coupled with the >500 WPH capable endstation technology used on the Axcelis Optima XEx high-energy ion implanter. The endstation includes in-situ angle measurements of the beam in order to maintain excellent beam-to-wafer implant angle control in both the horizontal and vertical directions. The Optima platform control system provides a new-generation dose control system that assures excellent dosimetry and charge control. This paper describes the features and technologies that allow the Optima MDxt to provide superior process performance at the highest wafer throughput, and provides examples of the achievable process performance.
Towards High-Throughput, Simultaneous Characterization of Thermal and Thermoelectric Properties
NASA Astrophysics Data System (ADS)
Miers, Collier Stephen
The extension of thermoelectric generators to more general markets requires that the devices be affordable and practical (low $/Watt) to implement. A key challenge in this pursuit is the quick and accurate characterization of thermoelectric materials, which will allow researchers to tune and modify material properties quickly. The goal of this thesis is to design and fabricate a high-throughput characterization system for the simultaneous measurement of thermal, electrical, and thermoelectric properties of device-scale material samples. The measurement methodology presented in this thesis combines a custom-designed measurement system created specifically for high-throughput testing with a novel device structure that permits simultaneous characterization of the material properties. The measurement system is based upon the 3ω method for thermal conductivity measurements, with the addition of electrodes and voltage probes to measure the electrical conductivity and Seebeck coefficient. A device designed and optimized to permit the rapid characterization of thermoelectric materials is also presented. This structure is optimized to ensure 1D heat transfer within the sample, permitting rapid data analysis and fitting using a MATLAB script. Verification of the thermal portion of the system is presented using fused silica and sapphire as benchmark materials. The fused silica samples yielded a thermal conductivity of 1.21 W/(m K), while a thermal conductivity of 31.2 W/(m K) was measured for the sapphire samples. The device and measurement system designed and developed in this thesis provide insight and serve as a foundation for the development of high-throughput, simultaneous measurement platforms.
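The three quantities measured simultaneously here (Seebeck coefficient S, electrical conductivity σ, thermal conductivity κ) combine into the standard dimensionless figure of merit ZT = S²σT/κ, which is ultimately what drives the $/Watt economics. A minimal calculation with illustrative values typical of a good bulk thermoelectric near room temperature (the numbers are not from this thesis):

```python
def figure_of_merit(seebeck_v_per_k, elec_cond_s_per_m, thermal_cond_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k**2 * elec_cond_s_per_m * temp_k / thermal_cond_w_per_mk

# Illustrative values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K.
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
print(f"ZT = {zt:.2f}")
```

Because ZT depends on all three transport properties at once, a platform that measures them simultaneously on the same sample avoids the error accumulation of combining separate measurements.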
Prakesch, Michael; Srivastava, Stuti; Leek, Donald M; Arya, Prabhat
2006-01-01
With the goal of rapidly accessing tetrahydroquinoline-based natural-product-like polycyclic architectures, herein, we report an unprecedented, in situ, stereocontrolled Aza Michael approach in solution and on the solid phase. The mild reaction conditions required to reach the desired target are highly attractive for the use of this method in library generation. To our knowledge, this approach has not been used before, and it opens a novel route leading to a wide variety of tetrahydroquinoline-derived bridged tricyclic derivatives.
Computer numeric control generation of toric surfaces
NASA Astrophysics Data System (ADS)
Bradley, Norman D.; Ball, Gary A.; Keller, John R.
1994-05-01
Until recently, the manufacture of toric ophthalmic lenses relied largely upon expensive, manual techniques for generation and polishing. Recent gains in computer numeric control (CNC) technology and tooling enable lens designers to employ single-point diamond, fly-cutting methods in the production of torics. Fly-cutting methods continue to improve, significantly expanding lens design possibilities while lowering production costs. Advantages of CNC fly cutting include precise control of surface geometry, rapid production with high throughput, and high-quality lens surface finishes requiring minimal polishing. As accessibility and affordability increase within the ophthalmic market, torics promise to dramatically expand the lens design choices available to consumers.
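In lens-design practice, surfaces with two distinct principal curvatures like these are often represented by a biconic sag equation (which reduces to the ordinary spherical sag when the two curvatures are equal); a true rotationally generated toric differs slightly, so treat this as a common approximation, not the CNC toolpath itself. A sketch with hypothetical radii:

```python
# Biconic sag z(x, y): a common two-curvature parameterization used to
# describe toric-like ophthalmic surfaces.  Radii below are invented.
import math

def biconic_sag(x, y, cx, cy, kx=0.0, ky=0.0):
    """Surface height for principal curvatures cx, cy (1/mm) and conic
    constants kx, ky; equals the spherical sag when cx == cy and k == 0."""
    num = cx * x**2 + cy * y**2
    den = 1.0 + math.sqrt(1.0 - (1.0 + kx) * cx**2 * x**2
                              - (1.0 + ky) * cy**2 * y**2)
    return num / den

# Toric-like surface with 8 mm and 9 mm principal radii, 2 mm off axis.
z = biconic_sag(2.0, 2.0, 1 / 8.0, 1 / 9.0)
```

The CNC generator evaluates a surface description of this kind to drive the single-point diamond tool across the blank.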
Controlled Electrospray Generation of Nonspherical Alginate Microparticles.
Jeyhani, Morteza; Mak, Sze Yi; Sammut, Stephen; Shum, Ho Cheung; Hwang, Dae Kun; Tsai, Scott S H
2017-12-11
Electrospraying is a technique used to generate microparticles in a high throughput manner. For biomedical applications, a biocompatible electrosprayed material is often desirable. Using polymers, such as alginate hydrogels, makes it possible to create biocompatible and biodegradable microparticles that can be used for cell encapsulation, to be employed as drug carriers, and for use in 3D cell culturing. Evidence in the literature suggests that the morphology of the biocompatible microparticles is relevant in controlling the dynamics of the microparticles in drug delivery and 3D cell culturing applications. Yet, most electrospray-based techniques only form spherical microparticles, and there is currently no widely adopted technique for producing nonspherical microparticles at a high throughput. Here, we demonstrate the generation of nonspherical biocompatible alginate microparticles by electrospraying, and control the shape of the microparticles by varying experimental parameters such as chemical concentration and the distance between the electrospray tip and the particle-solidification bath. Importantly, we show that these changes to the experimental setup enable the synthesis of different shaped particles, and the systematic change in parameters, such as chemical concentration, result in monotonic changes to the particle aspect ratio. We expect that these results will find utility in many biomedical applications that require biocompatible microparticles of specific shapes. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A DNA fingerprinting procedure for ultra high-throughput genetic analysis of insects.
Schlipalius, D I; Waldron, J; Carroll, B J; Collins, P J; Ebert, P R
2001-12-01
Existing procedures for the generation of polymorphic DNA markers are not optimal for insect studies in which the organisms are often tiny and background molecular information is often non-existent. We have used a new high throughput DNA marker generation protocol called randomly amplified DNA fingerprints (RAF) to analyse the genetic variability in three separate strains of the stored grain pest, Rhyzopertha dominica. This protocol is quick, robust and reliable even though it requires minimal sample preparation, minute amounts of DNA and no prior molecular analysis of the organism. Arbitrarily selected oligonucleotide primers routinely produced approximately 50 scoreable polymorphic DNA markers, between individuals of three independent field isolates of R. dominica. Multivariate cluster analysis using forty-nine arbitrarily selected polymorphisms generated from a single primer reliably separated individuals into three clades corresponding to their geographical origin. The resulting clades were quite distinct, with an average genetic difference of 37.5 +/- 6.0% between clades and of 21.0 +/- 7.1% between individuals within clades. As a prelude to future gene mapping efforts, we have also assessed the performance of RAF under conditions commonly used in gene mapping. In this analysis, fingerprints from pooled DNA samples accurately and reproducibly reflected RAF profiles obtained from individual DNA samples that had been combined to create the bulked samples.
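The clade separation reported above rests on a simple pairwise distance: the fraction of scored marker loci at which two fingerprints disagree. A toy sketch with invented 10-marker presence/absence profiles (real RAF datasets score ~50 polymorphic markers per primer):

```python
# Toy sketch of the distance behind the cluster analysis: each individual is a
# presence/absence vector over polymorphic markers, and genetic difference is
# the fraction of loci that disagree.  Profiles below are invented.

def genetic_difference(a, b):
    """Fraction of marker loci at which two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

pop1_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
pop1_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]   # same population: differs at 1 locus
pop2_a = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]   # other population: differs at all 10

within = genetic_difference(pop1_a, pop1_b)
between = genetic_difference(pop1_a, pop2_a)
```

Feeding the full pairwise distance matrix into a hierarchical clustering routine then yields the clades grouping individuals by geographical origin.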
Säll, Anna; Walle, Maria; Wingren, Christer; Müller, Susanne; Nyman, Tomas; Vala, Andrea; Ohlin, Mats; Borrebaeck, Carl A K; Persson, Helena
2016-10-01
Antibody-based proteomics offers distinct advantages in the analysis of complex samples for discovery and validation of biomarkers associated with disease. However, its large-scale implementation requires tools and technologies that allow development of suitable antibody or antibody fragments in a high-throughput manner. To address this we designed and constructed two human synthetic antibody fragment (scFv) libraries denoted HelL-11 and HelL-13. By the use of phage display technology, in total 466 unique scFv antibodies specific for 114 different antigens were generated. The specificities of these antibodies were analyzed in a variety of immunochemical assays and a subset was further evaluated for functionality in protein microarray applications. This high-throughput approach demonstrates the ability to rapidly generate a wealth of reagents not only for proteome research, but potentially also for diagnostics and therapeutics. In addition, this work provides a great example on how a synthetic approach can be used to optimize library designs. By having precise control of the diversity introduced into the antigen-binding sites, synthetic libraries offer increased understanding of how different diversity contributes to antibody binding reactivity and stability, thereby providing the key to future library optimization. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Mora-Castilla, Sergio; To, Cuong; Vaezeslami, Soheila; Morey, Robert; Srinivasan, Srimeenakshi; Dumdie, Jennifer N; Cook-Andersen, Heidi; Jenkins, Joby; Laurent, Louise C
2016-08-01
As the cost of next-generation sequencing has decreased, library preparation costs have become a more significant proportion of the total cost, especially for high-throughput applications such as single-cell RNA profiling. Here, we have applied novel technologies to scale down reaction volumes for library preparation. Our system consisted of in vitro differentiated human embryonic stem cells representing two stages of pancreatic differentiation, for which we prepared multiple biological and technical replicates. We used the Fluidigm (San Francisco, CA) C1 single-cell Autoprep System for single-cell complementary DNA (cDNA) generation and an enzyme-based tagmentation system (Nextera XT; Illumina, San Diego, CA) with a nanoliter liquid handler (mosquito HTS; TTP Labtech, Royston, UK) for library preparation, reducing the reaction volume down to 2 µL and using as little as 20 pg of input cDNA. The resulting sequencing data were bioinformatically analyzed and correlated among the different library reaction volumes. Our results showed that decreasing the reaction volume did not interfere with the quality or the reproducibility of the sequencing data, and the transcriptional data from the scaled-down libraries allowed us to distinguish between single cells. Thus, we have developed a process to enable efficient and cost-effective high-throughput single-cell transcriptome sequencing. © 2016 Society for Laboratory Automation and Screening.
A Rapid Method for the Determination of Fucoxanthin in Diatom
Wang, Li-Juan; Fan, Yong; Parsons, Ronald L.; Hu, Guang-Rong; Zhang, Pei-Yu
2018-01-01
Fucoxanthin is a natural pigment found in microalgae, especially diatoms and Chrysophyta. Recently, it has been shown to have anti-inflammatory, anti-tumor, and anti-obesity activity in humans. Phaeodactylum tricornutum is a diatom with high economic potential due to its high content of fucoxanthin and eicosapentaenoic acid. In order to improve fucoxanthin production, physical and chemical mutagenesis could be applied to generate mutants. An accurate and rapid method to assess fucoxanthin content is a prerequisite for a high-throughput screen of mutants. In this work, the content of fucoxanthin in P. tricornutum was determined using spectrophotometry instead of high-performance liquid chromatography (HPLC). This spectrophotometric method is easier and faster than liquid chromatography, and the standard error was less than 5% when compared to the HPLC results. The method can also be applied to other diatoms, with standard errors of 3–14.6%. It provides a high-throughput screening method for microalgae strains producing fucoxanthin. PMID:29361768
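Spectrophotometric quantification of this kind typically reduces to the Beer-Lambert law, c = A/(εl). The sketch below uses a placeholder absorption coefficient (not a measured value for fucoxanthin) purely to show the arithmetic behind a plate-reader screen:

```python
# Beer-Lambert concentration estimate from a single absorbance reading.
# EPSILON is a placeholder specific absorption coefficient, NOT a literature
# value for fucoxanthin; the absorbance is likewise invented.

def concentration_g_per_l(absorbance, epsilon_l_per_g_cm, path_cm=1.0):
    """c = A / (epsilon * l), with epsilon in L/(g cm) and path length in cm."""
    return absorbance / (epsilon_l_per_g_cm * path_cm)

A_MEASURED = 0.62       # hypothetical absorbance of a pigment extract
EPSILON = 160.0         # placeholder coefficient, L/(g cm)
c = concentration_g_per_l(A_MEASURED, EPSILON)
print(f"{c * 1000:.2f} mg/L")
```

In a mutant screen, this one-line calculation replaces a per-sample HPLC run, which is what makes the assay high-throughput.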
Michalet, X.; Siegmund, O.H.W.; Vallerga, J.V.; Jelinsky, P.; Millaud, J.E.; Weiss, S.
2017-01-01
We have recently developed a wide-field photon-counting detector having high temporal and spatial resolution and capable of high throughput (the H33D detector). Its design is based on a 25 mm diameter multi-alkali photocathode producing one photoelectron per detected photon; the photoelectrons are then multiplied up to 10^7 times by a 3-microchannel-plate stack. The resulting electron cloud is proximity-focused on a cross delay line anode, which allows determining the incident photon position with high accuracy. The imaging and fluorescence lifetime measurement performances of the H33D detector installed on a standard epifluorescence microscope will be presented. We compare them to those of standard single-molecule detectors such as single-photon avalanche photodiodes (SPADs) or electron-multiplying cameras, using model samples (fluorescent beads, quantum dots and live cells). Finally, we discuss the design and applications of future generations of H33D detectors for single-molecule imaging and high-throughput study of biomolecular interactions. PMID:29479130
Zhang, Douglas; Lee, Junmin; Kilian, Kristopher A
2017-10-01
Cells in tissue receive a host of soluble and insoluble signals in a context-dependent fashion, where integration of these cues through a complex network of signal transduction cascades will define a particular outcome. Biomaterials scientists and engineers are tasked with designing materials that can at least partially recreate this complex signaling milieu towards new materials for biomedical applications. In this progress report, recent advances in high throughput techniques and high content imaging approaches that are facilitating the discovery of efficacious biomaterials are described. From microarrays of synthetic polymers, peptides and full-length proteins, to designer cell culture systems that present multiple biophysical and biochemical cues in tandem, it is discussed how the integration of combinatorics with high content imaging and analysis is essential to extracting biologically meaningful information from large scale cellular screens to inform the design of next generation biomaterials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Forreryd, Andy; Johansson, Henrik; Albrekt, Ann-Sofie; Lindstedt, Malin
2014-05-16
Allergic contact dermatitis (ACD) develops upon exposure to certain chemical compounds termed skin sensitizers. To reduce the occurrence of skin sensitizers, chemicals are regularly screened for their capacity to induce sensitization. The recently developed Genomic Allergen Rapid Detection (GARD) assay is an in vitro alternative to animal testing for identification of skin sensitizers, classifying chemicals by evaluating transcriptional levels of a genomic biomarker signature. During assay development and biomarker identification, genome-wide expression analysis was applied using microarrays covering approximately 30,000 transcripts. However, the microarray platform suffers from drawbacks in terms of low sample throughput, high cost per sample and time consuming protocols and is a limiting factor for adaption of GARD into a routine assay for screening of potential sensitizers. With the purpose to simplify assay procedures, improve technical parameters and increase sample throughput, we assessed the performance of three high throughput gene expression platforms--nCounter®, BioMark HD™ and OpenArray®--and correlated their performance metrics against our previously generated microarray data. We measured the levels of 30 transcripts from the GARD biomarker signature across 48 samples. Detection sensitivity, reproducibility, correlations and overall structure of gene expression measurements were compared across platforms. Gene expression data from all of the evaluated platforms could be used to classify most of the sensitizers from non-sensitizers in the GARD assay. Results also showed high data quality and acceptable reproducibility for all platforms but only medium to poor correlations of expression measurements across platforms. In addition, evaluated platforms were superior to the microarray platform in terms of cost efficiency, simplicity of protocols and sample throughput. 
We evaluated the performance of three non-array-based platforms using a limited set of transcripts from the GARD biomarker signature. We demonstrated that it is possible to achieve acceptable discriminatory power, in terms of separation between sensitizers and non-sensitizers in the GARD assay, while reducing assay costs, simplifying assay procedures and increasing sample throughput by using an alternative platform, providing a first step towards the goal of preparing GARD for formal validation and adapting the assay for industrial screening of potential sensitizers.
BarraCUDA - a fast short read sequence aligner using graphics processing units
2012-01-01
Background: With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings: Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude performance boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate alignment throughput. Conclusions: BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497
Chemical genomic profiling via barcode sequencing to predict compound mode of action
Piotrowski, Jeff S.; Simpkins, Scott W.; Li, Sheena C.; Deshpande, Raamesh; McIlwain, Sean; Ong, Irene; Myers, Chad L.; Boone, Charlie; Andersen, Raymond J.
2015-01-01
Summary: Chemical genomics is an unbiased, whole-cell approach to characterizing novel compounds to determine mode of action and cellular target. Our version of this technique is built upon barcoded deletion mutants of Saccharomyces cerevisiae and has been adapted to a high-throughput methodology using next-generation sequencing. Here we describe the steps to generate a chemical genomic profile from a compound of interest, and how to use this information to predict molecular mechanism and targets of bioactive compounds. PMID:25618354
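The scoring step behind such a profile can be sketched as a log-ratio of normalized barcode abundances between compound-treated and control pools: strains whose deletion sensitizes the cell to the compound drop out of the treated pool. Strain names, counts and the pseudocount below are invented for illustration:

```python
# Sketch of a barcode-count fitness score: log2 ratio of normalized barcode
# abundance in a compound-treated pool vs. a control pool, with a pseudocount
# to guard against zero counts.  All names and counts are invented.
import math

def fitness_scores(treated, control, pseudo=1.0):
    """Per-strain log2 fitness: negative = depleted (sensitive) under treatment."""
    t_total = sum(treated.values())
    c_total = sum(control.values())
    return {
        strain: math.log2(((treated[strain] + pseudo) / t_total) /
                          ((control[strain] + pseudo) / c_total))
        for strain in treated
    }

control = {"erg11d": 500, "pdr5d": 480, "his3d": 510}
treated = {"erg11d": 60, "pdr5d": 50, "his3d": 490}   # two mutants depleted

scores = fitness_scores(treated, control)
sensitive = [s for s, v in sorted(scores.items(), key=lambda kv: kv[1]) if v < -1]
```

The vector of such scores across thousands of deletion strains is the chemical genomic profile that gets compared against reference profiles to predict mode of action.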
Throughput Calibration of the 52x0.2E1 Aperture
NASA Astrophysics Data System (ADS)
Heap, Sara
2009-07-01
The Next Generation Spectral Library (NGSL) is a library of low-dispersion STIS spectra extending from 0.2-1.0 microns. So far, 378 stars with a wide range in metallicity have been observed. Despite their high S/N (>100), many NGSL spectra have 5-10% systematic errors in their spectral energy distributions, which can be traced to throughput variations in the 52x0.2E1 aperture caused by vignetting of a wavelength-dependent asymmetric PSF. We propose to obtain STIS spectra of the HST standard star, BD+75D325, at several positions in the 52x0.2E1 aperture, which will enable us to calibrate the NGSL spectra properly.
Oguntimein, Gbekeloluwa B; Rodriguez, Miguel; Dumitrache, Alexandru; Shollenberger, Todd; Decker, Stephen R; Davison, Brian H; Brown, Steven D
2018-02-01
To develop and prototype a high-throughput microplate assay to assess anaerobic microorganisms and lignocellulosic biomasses in a rapid, cost-effective screen for consolidated bioprocessing potential. The Clostridium thermocellum parent Δhpt strain deconstructed Avicel to cellobiose and glucose, and generated lactic acid, formic acid, acetic acid and ethanol as fermentation products in titers and ratios similar to larger-scale fermentations, confirming the suitability of a plate-based method for C. thermocellum growth studies. C. thermocellum strain LL1210, with gene deletions in key central metabolic pathways, produced higher ethanol titers in the consolidated bioprocessing (CBP) plate assay for both Avicel and switchgrass fermentations when compared to the Δhpt strain. A prototype microplate assay system is developed that will facilitate high-throughput bioprospecting for new lignocellulosic biomass types, genetic variants and new microbial strains for bioethanol production.
High-throughput sequencing in veterinary infection biology and diagnostics.
Belák, S; Karlsson, O E; Leijon, M; Granberg, F
2013-12-01
Sequencing methods have improved rapidly since the first versions of the Sanger techniques, facilitating the development of very powerful tools for detecting and identifying various pathogens, such as viruses, bacteria and other microbes. The ongoing development of high-throughput sequencing (HTS; also known as next-generation sequencing) technologies has resulted in a dramatic reduction in DNA sequencing costs, making the technology more accessible to the average laboratory. In this White Paper of the World Organisation for Animal Health (OIE) Collaborating Centre for the Biotechnology-based Diagnosis of Infectious Diseases in Veterinary Medicine (Uppsala, Sweden), several approaches and examples of HTS are summarised, and their diagnostic applicability is briefly discussed. Selected future aspects of HTS are outlined, including the need for bioinformatic resources, with a focus on improving the diagnosis and control of infectious diseases in veterinary medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jian; Casey, Cameron P.; Zheng, Xueyun
Motivation: Drift tube ion mobility spectrometry (DTIMS) is increasingly implemented in high-throughput omics workflows, and new informatics approaches are necessary for processing the associated data. To automatically extract arrival times for molecules measured by DTIMS coupled with mass spectrometry and compute their associated collisional cross sections (CCS), we created the PNNL Ion Mobility Cross Section Extractor (PIXiE). The primary application presented for this algorithm is the extraction of information necessary to create a reference library containing accurate masses, DTIMS arrival times and CCSs for use in high-throughput omics analyses. Results: We demonstrate the utility of this approach by automatically extracting arrival times and calculating the associated CCSs for a set of endogenous metabolites and xenobiotics. The PIXiE-generated CCS values were identical to those calculated by hand and within error of those calculated using commercially available instrument vendor software.
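The CCS computation itself follows the Mason-Schamp relation, with the ion mobility obtained from the arrival time, drift length and drift voltage. The sketch below uses hypothetical instrument parameters and omits corrections (dead-time subtraction, reduced-mobility normalization) that a production tool like PIXiE handles:

```python
# Mason-Schamp collision cross section from a DTIMS arrival time.
# All instrument parameters and the example ion are hypothetical.
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
K_B = 1.380649e-23              # Boltzmann constant, J/K
AMU = 1.66053906660e-27         # atomic mass unit, kg

def ccs_m2(t_drift_s, tube_len_m, voltage_v, press_pa, temp_k,
           ion_mass_da, gas_mass_da, charge=1):
    """CCS (m^2) via Mason-Schamp, from mobility K = L^2 / (V * t_drift)."""
    mobility = tube_len_m**2 / (voltage_v * t_drift_s)    # m^2/(V s)
    n_gas = press_pa / (K_B * temp_k)                     # buffer gas density, 1/m^3
    mu = (ion_mass_da * gas_mass_da) / (ion_mass_da + gas_mass_da) * AMU
    return (3 * charge * E_CHARGE) / (16 * n_gas) \
        * math.sqrt(2 * math.pi / (mu * K_B * temp_k)) / mobility

# Hypothetical metabolite: 300 Da singly charged ion in N2 (28 Da) at
# 4 Torr (533.3 Pa), 300 K, 78 cm drift tube, 1400 V, 20 ms arrival time.
ccs_A2 = ccs_m2(20e-3, 0.78, 1400.0, 533.3, 300.0, 300.0, 28.0) * 1e20
print(f"CCS = {ccs_A2:.0f} A^2")
```

Automating exactly this arithmetic over every detected feature, after robustly locating each arrival time, is what turns raw DTIMS-MS runs into a CCS reference library.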
Biofuel metabolic engineering with biosensors.
Morgan, Stacy-Anne; Nadler, Dana C; Yokoo, Rayka; Savage, David F
2016-12-01
Metabolic engineering offers the potential to renewably produce important classes of chemicals, particularly biofuels, at an industrial scale. DNA synthesis and editing techniques can generate large pathway libraries, yet identifying the best variants is slow and cumbersome. Traditionally, analytical methods like chromatography and mass spectrometry have been used to evaluate pathway variants, but such techniques cannot be performed with high throughput. Biosensors - genetically encoded components that actuate a cellular output in response to a change in metabolite concentration - are therefore a promising tool for rapid and high-throughput evaluation of candidate pathway variants. Applying biosensors can also dynamically tune pathways in response to metabolic changes, improving balance and productivity. Here, we describe the major classes of biosensors and briefly highlight recent progress in applying them to biofuel-related metabolic pathway engineering. Copyright © 2016 Elsevier Ltd. All rights reserved.
A kinase-focused compound collection: compilation and screening strategy.
Sun, Dongyu; Chuaqui, Claudio; Deng, Zhan; Bowes, Scott; Chin, Donovan; Singh, Juswinder; Cullen, Patrick; Hankins, Gretchen; Lee, Wen-Cherng; Donnelly, Jason; Friedman, Jessica; Josiah, Serene
2006-06-01
Lead identification by high-throughput screening of large compound libraries has been supplemented with virtual screening and focused compound libraries. To complement existing approaches for lead identification at Biogen Idec, a kinase-focused compound collection was designed, developed and validated. Two strategies were adopted to populate the compound collection: a ligand shape-based virtual screening and a receptor-based approach (structural interaction fingerprint). Compounds selected with the two approaches were cherry-picked from an existing high-throughput screening compound library, ordered from suppliers and supplemented with specific medicinal compounds from internal programs. Promising hits and leads have been generated from the kinase-focused compound collection against multiple kinase targets. The principle of the collection design and screening strategy was validated and the use of the kinase-focused compound collection for lead identification has been added to existing strategies.
The US EPA ToxCast program is using in vitro HTS (High-Throughput Screening) methods to profile and model bioactivity of environmental chemicals. The main goals of the ToxCast program are to generate predictive signatures of toxicity, and ultimately provide rapid and cost-effecti...
The ToxCast program has generated a great wealth of in vitro high throughput screening (HTS) data on a large number of compounds, providing a unique resource of information on the bioactivity of these compounds. However, analysis of these data is ongoing, and interpretation and ...
The US EPA’s ToxCast program has generated a wealth of data in >600 in vitro assays on a library of 1060 environmentally relevant chemicals and failed pharmaceuticals to facilitate hazard identification. An inherent criticism of many in vitro-based strategies is the inability of a...
Many commercial and environmental chemicals lack toxicity data necessary for users and risk assessors to make fully informed decisions about potential health effects. Generating these data using high throughput in vitro cell- or biochemical-based tests would be faster and less e...
The ToxCast program has generated a wealth of in vitro high throughput screening data, and best approaches for the interpretation and use of these data remain undetermined. We present case studies comparing the ToxCast and in vivo toxicity data for two food contact substances us...
USDA-ARS's Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-based clones to expres...
Silver nanoparticles (Ag NP) have been shown to generate reactive oxygen species; however, the association between physicochemical characteristics of nanoparticles and cellular stress responses elicited by exposure has not been elucidated. Here, we examined three key...
USDA-ARS's Scientific Manuscript database
A small fast neutron mutant population has been established from Phaseolus vulgaris cv. Red Hawk. We leveraged the available P. vulgaris genome sequence and high throughput next generation DNA sequencing to examine the genomic structure of five Phaseolus vulgaris cv. Red Hawk fast neutron mutants wi...
USDA-ARS's Scientific Manuscript database
Many species of mites and ticks are of agricultural and medical importance. Much can be learned from the study of transcriptomes of acarines which can generate DNA-sequence information of potential target genes for the control of acarine pests. High throughput transcriptome sequencing can also yie...
Low-Cost, High-Throughput Sequencing of DNA Assemblies Using a Highly Multiplexed Nextera Process.
Shapland, Elaine B; Holmes, Victor; Reeves, Christopher D; Sorokin, Elena; Durot, Maxime; Platt, Darren; Allen, Christopher; Dean, Jed; Serber, Zach; Newman, Jack; Chandran, Sunil
2015-07-17
In recent years, next-generation sequencing (NGS) technology has greatly reduced the cost of sequencing whole genomes, whereas the cost of sequence verification of plasmids via Sanger sequencing has remained high. Consequently, industrial-scale strain engineers either limit the number of designs or take short cuts in quality control. Here, we show that over 4000 plasmids can be completely sequenced in one Illumina MiSeq run for less than $3 each (15× coverage), which is a 20-fold reduction over using Sanger sequencing (2× coverage). We reduced the volume of the Nextera tagmentation reaction by 100-fold and developed an automated workflow to prepare thousands of samples for sequencing. We also developed software to track the samples and associated sequence data and to rapidly identify correctly assembled constructs having the fewest defects. As DNA synthesis and assembly become a centralized commodity, this NGS quality control (QC) process will be essential to groups operating high-throughput pipelines for DNA construction.
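The economics claimed in this record can be sanity-checked with simple arithmetic. The sketch below uses the figures stated in the abstract (4000 plasmids, under $3 each, 15x minimum coverage) together with assumed values for run yield, run cost, and plasmid size, which are hypothetical and not taken from the paper.

```python
# Back-of-the-envelope check of multiplexed plasmid sequencing economics.
# Assumed inputs (hypothetical except where the abstract states them):
# a run yielding ~7.5 Gb, 4000 plasmids of ~10 kb each, ~$10,000 run cost.

def per_plasmid_coverage(run_yield_bp, n_plasmids, plasmid_len_bp):
    """Mean fold-coverage per plasmid if reads demultiplex evenly."""
    return run_yield_bp / (n_plasmids * plasmid_len_bp)

def per_plasmid_cost(run_cost, n_plasmids):
    """Sequencing cost per plasmid when the run cost is shared."""
    return run_cost / n_plasmids

cov = per_plasmid_coverage(7.5e9, 4000, 10_000)   # ample margin over 15x
cost = per_plasmid_cost(10_000, 4000)             # consistent with "<$3 each"
print(f"{cov:.0f}x coverage, ${cost:.2f} per plasmid")
```

Under these assumed numbers the mean coverage comes out far above the 15x floor, which is why uneven demultiplexing across thousands of barcoded samples can still leave every plasmid adequately covered.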
YAMAT-seq: an efficient method for high-throughput sequencing of mature transfer RNAs.
Shigematsu, Megumi; Honda, Shozo; Loher, Phillipe; Telonis, Aristeidis G; Rigoutsos, Isidore; Kirino, Yohei
2017-05-19
Besides translation, transfer RNAs (tRNAs) play many non-canonical roles in various biological pathways and exhibit highly variable expression profiles. To unravel the emerging complexities of tRNA biology and the molecular mechanisms underlying them, an efficient tRNA sequencing method is required. However, the rigid structure of tRNA has presented a challenge to the development of such methods. We report the development of Y-shaped Adapter-ligated MAture TRNA sequencing (YAMAT-seq), an efficient and convenient method for high-throughput sequencing of mature tRNAs. YAMAT-seq circumvents the issue of inefficient adapter ligation, a characteristic of conventional RNA sequencing methods for mature tRNAs, by employing the efficient and specific ligation of a Y-shaped adapter to mature tRNAs using T4 RNA Ligase 2. Subsequent cDNA amplification and next-generation sequencing successfully yield numerous mature tRNA sequences. YAMAT-seq has high specificity for mature tRNAs and high sensitivity, detecting most isoacceptors from minute amounts of total RNA. Moreover, YAMAT-seq shows quantitative capability to estimate expression levels of mature tRNAs, and has high reproducibility and broad applicability across various cell lines. YAMAT-seq thus provides a high-throughput technique for identifying tRNA profiles and their regulation in various transcriptomes, which could play important regulatory roles in translation and other biological processes. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Application of Genomic Technologies to the Breeding of Trees
Badenes, Maria L.; Fernández i Martí, Angel; Ríos, Gabino; Rubio-Cabetas, María J.
2016-01-01
The recent introduction of next generation sequencing (NGS) technologies represents a major revolution in providing new tools for identifying the genes and/or genomic intervals controlling important traits for selection in breeding programs. In perennial fruit trees with long generation times and large adult plant sizes, the impact of these techniques is even more important. High-throughput DNA sequencing technologies have provided complete annotated sequences in many important tree species. Most of the high-throughput genotyping platforms described are being used for studies of genetic diversity and population structure. Dissection of complex traits became possible through the availability of genome sequences along with phenotypic variation data, which allow elucidation of the causative genetic differences that give rise to observed phenotypic variation. Association mapping facilitates the association between genetic markers and phenotype in unstructured and complex populations, identifying molecular markers for assisted selection and breeding. Also, genomic data provide in silico identification and characterization of genes and gene families related to important traits, enabling new tools for molecular marker-assisted selection in tree breeding. Deep sequencing of transcriptomes is also a powerful tool for the analysis of precise expression levels of each gene in a sample. It consists of quantifying short cDNA reads, obtained by NGS technologies, in order to compare entire transcriptomes between genotypes and environmental conditions. The miRNAs are non-coding short RNAs involved in the regulation of different physiological processes, which can be identified by high-throughput sequencing of RNA libraries obtained by reverse transcription of purified short RNAs, and by in silico comparison with known miRNAs from other species.
Altogether, NGS techniques and their applications have increased the resources for plant breeding in tree species, closing the former gap in genetic tools between trees and annual species. PMID:27895664
High-Throughput Genome Editing and Phenotyping Facilitated by High Resolution Melting Curve Analysis
Thomas, Holly R.; Percival, Stefanie M.; Yoder, Bradley K.; Parant, John M.
2014-01-01
With the goal to generate and characterize the phenotypes of null alleles in all genes within an organism and the recent advances in custom nucleases, genome editing limitations have moved from mutation generation to mutation detection. We previously demonstrated that High Resolution Melting (HRM) analysis is a rapid and efficient means of genotyping known zebrafish mutants. Here we establish optimized conditions for HRM-based detection of novel mutant alleles. Using these conditions, we demonstrate that HRM is highly efficient at mutation detection across multiple genome editing platforms (ZFNs, TALENs, and CRISPRs); we observed nuclease-generated, HRM-positive targeting in 1 of 6 (16%) open-pool-derived ZFNs, 14 of 23 (60%) TALENs, and 58 of 77 (75%) CRISPR nucleases. Successful targeting, based on HRM of G0 embryos, correlates well with successful germline transmission (46 of 47 nucleases); yet, surprisingly, mutations in the somatic tail DNA correlate only weakly with mutations in the germline F1 progeny DNA. This suggests that analysis of G0 tail DNA is a good indicator of the efficiency of the nuclease, but not necessarily a good indicator of germline alleles that will be present in the F1s. However, we demonstrate that small-amplicon HRM curve profiles of F1 progeny DNA can be used to differentiate between specific mutant alleles, facilitating rare allele identification and isolation, and that HRM is a powerful technique for screening possible off-target mutations that may be generated by the nucleases. Our data suggest that micro-homology-based alternative NHEJ repair is primarily utilized in the generation of CRISPR mutant alleles and allows us to predict the likelihood of generating a null allele. Lastly, we demonstrate that HRM can be used to quickly distinguish genotype-phenotype correlations within F1 embryos derived from G0 intercrosses. 
Together these data indicate that custom nucleases, in conjunction with the ease and speed of HRM, will facilitate future high-throughput mutation generation and analysis needed to establish mutants in all genes of an organism. PMID:25503746
Richter, Ingrid; Fidler, Andrew E.
2014-01-01
Developing high-throughput assays to screen marine extracts for bioactive compounds presents both conceptual and technical challenges. One major challenge is to develop assays that have well-grounded ecological and evolutionary rationales. In this review we propose that a specific group of ligand-activated transcription factors are particularly well-suited to act as sensors in such bioassays. More specifically, xenobiotic-activated nuclear receptors (XANRs) regulate transcription of genes involved in xenobiotic detoxification. XANR ligand-binding domains (LBDs) may adaptively evolve to bind those bioactive, and potentially toxic, compounds to which organisms are normally exposed through their specific diets. A brief overview of the function and taxonomic distribution of both vertebrate and invertebrate XANRs is first provided. Proof-of-concept experiments are then described which confirm that a filter-feeding marine invertebrate XANR LBD is activated by marine bioactive compounds. We speculate that increasing access to marine invertebrate genome sequence data, in combination with the expression of functional recombinant marine invertebrate XANR LBDs, will facilitate the generation of high-throughput bioassays/biosensors of widely differing specificities, but all based on activation of XANR LBDs. Such assays may find application in screening marine extracts for bioactive compounds that could act as drug lead compounds. PMID:25421319
Li, Ben; Li, Yunxiao; Qin, Zhaohui S
2017-06-01
Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, hence the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool for analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available and should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrate the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
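The shrinkage effect this record describes can be illustrated with a minimal normal-normal empirical-Bayes sketch. This is our own toy example, not the adaptiveHM implementation: each feature's observed mean is pulled toward a prior mean, more strongly when few observations are available, which is exactly the situation where over-correction can arise.

```python
# Minimal sketch (not the adaptiveHM method) of normal-normal shrinkage:
# the posterior mean is a precision-weighted average of the data mean
# and the prior mean.
def shrink(obs_mean, n_obs, obs_var, prior_mean, prior_var):
    """Posterior mean of a feature under a normal-normal model."""
    precision_data = n_obs / obs_var
    precision_prior = 1.0 / prior_var
    w = precision_data / (precision_data + precision_prior)
    return w * obs_mean + (1 - w) * prior_mean

# 'large p, small n': with n=2 the estimate is pulled hard toward the
# prior (over-correcting a genuinely extreme feature), while n=50 barely
# moves. Anchoring the prior on historical data rather than the current
# dataset is one way to temper this effect.
print(shrink(obs_mean=10.0, n_obs=2, obs_var=4.0, prior_mean=0.0, prior_var=1.0))   # ~3.33
print(shrink(obs_mean=10.0, n_obs=50, obs_var=4.0, prior_mean=0.0, prior_var=1.0))  # ~9.26
```

The design point the abstract makes is visible here: the same observed mean of 10 is shrunk to very different estimates depending only on the number of observations backing it.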
Networking Omic Data to Envisage Systems Biological Regulation.
Kalapanulak, Saowalak; Saithong, Treenut; Thammarongtham, Chinae
To understand how biological processes work, it is necessary to explore the systematic regulation governing their behavior. Besides driving the normal behavior of organisms, systematic regulation evidently underlies both temporal responses to the surrounding environment (dynamics) and long-term phenotypic adaptation (evolution). This regulation is, in effect, formulated from regulatory components that collaboratively work together as a network. In the drive to decipher this code of life, a spectrum of technologies has been developed continuously in the post-genomic era. With current advances, high-throughput sequencing technologies are tremendously powerful for facilitating genomics and systems biology studies that attempt to understand systems regulation inside the cell. The ability to explore relevant regulatory components which infer transcriptional and signaling regulation, driving core cellular processes, is thus enhanced. This chapter reviews high-throughput sequencing technologies, including second- and third-generation sequencing technologies, which support the investigation of genomics and transcriptomics data. The use of these high-throughput data to form virtual networks of systems regulation, particularly transcriptional regulatory networks, is explained. Analysis of the resulting regulatory networks can lead to an understanding of cellular systems regulation at the mechanistic and dynamic levels. The great contribution of the biological networking approach to envisaging systems regulation is finally demonstrated by a broad range of examples.
Efthymiou, Anastasia; Shaltouki, Atossa; Steiner, Joseph P; Jha, Balendu; Heman-Ackah, Sabrina M; Swistowski, Andrzej; Zeng, Xianmin; Rao, Mahendra S; Malik, Nasir
2014-01-01
Rapid and effective drug discovery for neurodegenerative disease is currently impeded by an inability to source primary neural cells for high-throughput and phenotypic screens. This limitation can be addressed through the use of pluripotent stem cells (PSCs), which can be derived from patient-specific samples and differentiated to neural cells for use in identifying novel compounds for the treatment of neurodegenerative diseases. We have developed an efficient protocol to culture pure populations of neurons, as confirmed by gene expression analysis, in the 96-well format necessary for screens. These differentiated neurons were subjected to viability assays to illustrate their potential in future high-throughput screens. We have also shown that organelles such as nuclei and mitochondria could be live-labeled and visualized through fluorescence, suggesting that we should be able to monitor subcellular phenotypic changes. Neurons derived from a green fluorescent protein-expressing reporter line of PSCs were live-imaged to assess markers of neuronal maturation such as neurite length and co-cultured with astrocytes to demonstrate further maturation. These studies confirm that PSC-derived neurons can be used effectively in viability and functional assays and pave the way for high-throughput screens on neurons derived from patients with neurodegenerative disorders.
ChemHTPS - A virtual high-throughput screening program suite for the chemical and materials sciences
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Evangelista, William; Hachmann, Johannes
The discovery of new compounds, materials, and chemical reactions with exceptional properties is key to the grand challenges of innovation, energy, and sustainability. This process can be dramatically accelerated by means of virtual high-throughput screening (HTPS) of large-scale candidate libraries. The resulting data can further be used to study the underlying structure-property relationships and thus facilitate rational design capability. This approach has been used extensively for many years in the drug discovery community. However, the lack of openly available virtual HTPS tools limits the use of these techniques in other applications such as photovoltaics, optoelectronics, and catalysis. We therefore developed ChemHTPS, a general-purpose, comprehensive, and user-friendly suite that allows users to efficiently perform large in silico modeling studies and high-throughput analyses in these applications. ChemHTPS also includes a massively parallel molecular library generator, which offers a multitude of options to customize and restrict the scope of the enumerated chemical space and thus tailor it to the demands of specific applications. To streamline the non-combinatorial exploration of chemical space, we incorporate genetic algorithms into the framework. In addition to implementing smarter algorithms, we also focus on ease of use, workflow, and code integration to make this technology more accessible to the community.
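The library-generator idea in this record can be sketched in a few lines. The following is our own illustration, not ChemHTPS code: a scaffold template, substituent building-block sets, and a user-supplied constraint that restricts the enumerated space. The scaffold string and building blocks are hypothetical placeholders.

```python
# Illustrative combinatorial library generator (not the ChemHTPS code):
# enumerate a scaffold with substituents drawn from building-block sets,
# then filter the space with a user-supplied constraint.
from itertools import product

scaffold = "c1cc({R1})cc({R2})c1"       # hypothetical SMILES-like template
r1_blocks = ["F", "Cl", "OC", "N"]      # hypothetical building blocks
r2_blocks = ["C", "CC", "O"]

def enumerate_library(scaffold, r1_blocks, r2_blocks, constraint=None):
    """Yield every substituent combination that passes the constraint."""
    for r1, r2 in product(r1_blocks, r2_blocks):
        candidate = scaffold.format(R1=r1, R2=r2)
        if constraint is None or constraint(candidate):
            yield candidate

# Example constraint: cap the string length to restrict the enumerated
# space, standing in for real chemical filters (mass, rings, etc.).
small = list(enumerate_library(scaffold, r1_blocks, r2_blocks,
                               constraint=lambda smi: len(smi) <= 15))
print(len(small), "of", len(r1_blocks) * len(r2_blocks), "candidates kept")
```

Real generators apply chemically meaningful constraints at enumeration time rather than filtering afterwards, which is what keeps very large spaces tractable.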
Kusters, Ilja; van Oijen, Antoine M; Driessen, Arnold J M
2014-04-22
Screening of transport processes across biological membranes is hindered by the challenge to establish fragile supported lipid bilayers and the difficulty to determine at which side of the membrane reactants reside. Here, we present a method for the generation of suspended lipid bilayers with physiological relevant lipid compositions on microstructured Si/SiO2 chips that allow for high-throughput screening of both membrane transport and viral membrane fusion. Simultaneous observation of hundreds of single-membrane channels yields statistical information revealing population heterogeneities of the pore assembly and conductance of the bacterial toxin α-hemolysin (αHL). The influence of lipid composition and ionic strength on αHL pore formation was investigated at the single-channel level, resolving features of the pore-assembly pathway. Pore formation is inhibited by a specific antibody, demonstrating the applicability of the platform for drug screening of bacterial toxins and cell-penetrating agents. Furthermore, fusion of H3N2 influenza viruses with suspended lipid bilayers can be observed directly using a specialized chip architecture. The presented micropore arrays are compatible with fluorescence readout from below using an air objective, thus allowing high-throughput screening of membrane transport in multiwell formats in analogy to plate readers.
Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E
2014-01-01
Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reactions (qPCR) have become the gold standard for quantifying gene expression. Microfluidic next generation, high throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing the sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard polymerase chain reactions (qPCR) and high throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and with the use of a universal reference sample (commercial complementary DNA (cDNA)) permits the normalization of results between different batches and between different instruments--regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency corrected analysis. When compared to other commonly used algorithms the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes where cell counts are readily available.
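The input-quantity normalization this record describes can be sketched with standard qPCR arithmetic. The formula below is reconstructed from the description (relative template quantity proportional to E to the power of minus Ct, adjusted by cell count and a universal reference sample), not taken from the paper; function names and numbers are our own.

```python
# Hedged sketch of an efficiency-corrected, input-adjusted qPCR
# calculation in the spirit of the method described above. Relative
# template quantity is efficiency**(-Ct); dividing by the cell count
# adjusts for input, and dividing by a universal reference cDNA measured
# in the same batch normalizes results across batches and instruments.
def relative_expression(ct_sample, cells_sample, ct_ref, efficiency=2.0):
    """Expression per cell, normalized to a universal reference sample."""
    q_sample = efficiency ** (-ct_sample) / cells_sample
    q_ref = efficiency ** (-ct_ref)   # reference uses a fixed cDNA input
    return q_sample / q_ref

# Two batches measuring the same biology agree after normalization even
# if the instrument shifts all Ct values by a constant offset.
batch1 = relative_expression(ct_sample=25.0, cells_sample=1e4, ct_ref=20.0)
batch2 = relative_expression(ct_sample=26.0, cells_sample=1e4, ct_ref=21.0)
print(batch1, batch2)
```

Because the reference sample is run in every batch, a systematic one-cycle shift cancels out, which is the property the method relies on for cross-batch and cross-instrument comparability.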
Sources of PCR-induced distortions in high-throughput sequencing data sets
Kebschull, Justus M.; Zador, Anthony M.
2015-01-01
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
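The finding that PCR stochasticity dominates can be illustrated with a toy branching-process simulation. This is our own illustration, not the authors' quantitative model: each molecule duplicates with some probability per cycle, so amplicons that get lucky in the first few cycles end up heavily over-represented.

```python
# Toy branching-process simulation of PCR stochasticity (our own
# illustration, not the authors' model): each molecule duplicates with
# probability p_dup per cycle. Starting from single molecules, final
# copy numbers vary widely, skewing representation in low-input
# libraries even though every amplicon began with exactly one copy.
import random

def amplify(start_copies, cycles, p_dup=0.8, rng=random):
    """Final copy number of one amplicon after stochastic PCR cycles."""
    copies = start_copies
    for _ in range(cycles):
        copies += sum(1 for _ in range(copies) if rng.random() < p_dup)
    return copies

random.seed(0)
final = [amplify(1, 15) for _ in range(200)]  # 200 amplicons, 1 copy each
print(f"fold-spread after 15 cycles: {max(final) / min(final):.1f}x")
```

The spread comes almost entirely from the earliest cycles, when a single missed duplication halves an amplicon's final yield; this matches the paper's conclusion that stochasticity matters most for sequences represented by only one or a few starting molecules.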
Rames, Matthew; Yu, Yadong; Ren, Gang
2014-08-15
Structural determination is rather challenging for proteins with molecular masses between 40 and 200 kDa. Considering that more than half of natural proteins have a molecular mass in this range, a robust and high-throughput method with nanometer-resolution capability is needed. Negative staining (NS) electron microscopy (EM) is an easy, rapid, and qualitative approach which has frequently been used in research laboratories to examine protein structure and protein-protein interactions. Unfortunately, conventional NS protocols often generate structural artifacts on proteins, especially with lipoproteins, which usually present rouleaux artifacts. By using images of lipoproteins from cryo-electron microscopy (cryo-EM) as a standard, the key parameters of NS specimen preparation were recently screened and reported as the optimized NS protocol (OpNS), a modified conventional NS protocol. Artifacts like rouleaux can be greatly limited by OpNS, which additionally provides high contrast along with reasonably high-resolution (near 1 nm) images of small and asymmetric proteins. These high-resolution, high-contrast images are even suitable for 3D reconstruction of an individual protein (a single object, with no averaging), such as a 160 kDa antibody, through electron tomography. Moreover, OpNS can be a high-throughput tool to examine hundreds of samples of small proteins. For example, the previously published mechanism of the 53 kDa cholesteryl ester transfer protein (CETP) involved the screening and imaging of hundreds of samples. Considering that cryo-EM rarely succeeds in imaging proteins of less than 200 kDa and has yet to produce a published study involving the screening of over one hundred sample conditions, it is fair to call OpNS a high-throughput method for studying small proteins. 
Hopefully the OpNS protocol presented here can be a useful tool to push the boundaries of EM and accelerate EM studies into small protein structure, dynamics and mechanisms.
Morwick, Tina; Büttner, Frank H; Cywin, Charles L; Dahmann, Georg; Hickey, Eugene; Jakes, Scott; Kaplita, Paul; Kashem, Mohammed A; Kerr, Steven; Kugler, Stanley; Mao, Wang; Marshall, Daniel; Paw, Zofia; Shih, Cheng-Kon; Wu, Frank; Young, Erick
2010-01-28
A highly selective series of bisbenzamide inhibitors of Rho-associated coiled-coil forming protein kinase (ROCK) and a related ureidobenzamide series, both identified by high throughput screening (HTS), are described. Details of the hit validation and lead generation process, including structure-activity relationship (SAR) studies, a selectivity assessment, target-independent profiling (TIP) results, and an analysis of functional activity using a rat aortic ring assay are discussed.
The Use of AlphaScreen Technology in HTS: Current Status
Eglen, Richard M; Reisine, Terry; Roby, Philippe; Rouleau, Nathalie; Illy, Chantal; Bossé, Roger; Bielefeld, Martina
2008-01-01
AlphaScreen (Amplified Luminescent Proximity Homogeneous Assay Screen) is a versatile assay technology developed to measure analytes using a homogeneous protocol. This technology is an example of a bead-based proximity assay and was developed from a diagnostic assay technology known as LOCI (Luminescent Oxygen Channeling Assay). Here, singlet oxygen molecules, generated by high-energy irradiation of Donor beads, travel over a constrained distance (approx. 200 nm) to Acceptor beads. This results in excitation of a cascading series of chemical reactions, ultimately generating a chemiluminescent signal. In the past decade, a wide variety of applications has been reported, ranging across detection of analytes involved in cell signaling, including protein:protein, protein:peptide, protein:small molecule or peptide:peptide interactions. Numerous homogeneous HTS-optimized assays have been reported using the approach, including generation of second messengers (such as accumulation of cyclic AMP, cyclic GMP, inositol 1,4,5-trisphosphate or phosphorylated ERK) from liganded GPCRs or tyrosine kinase receptors, post-translational modification of proteins (such as proteolytic cleavage, phosphorylation, ubiquitination and sumoylation) as well as protein-protein and protein-nucleic acid interactions. Recently, the basic AlphaScreen technology was extended by modifying the chemistry of the Acceptor bead such that the emitted light is more intense and spectrally defined, thereby markedly reducing interference from biological fluid matrices (such as trace hemolysis in serum and plasma). In this format, referred to as AlphaLISA, it provides an alternative to classical ELISA assays and is suitable for high-throughput automated fluid dispensing and detection systems. Collectively, AlphaScreen and AlphaLISA technologies provide a facile assay platform with which one can quantitate complex cellular processes using simple no-wash microtiter-plate-based assays. 
They provide the means by which large compound libraries can be screened in a high throughput fashion at a diverse range of therapeutically important targets, often not readily undertaken using other homogeneous assay technologies. This review assesses the current status of the technology in drug discovery, in general, and high throughput screening (HTS), in particular. PMID:20161822
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for an adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
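The gap-tolerant parameter estimation described above can be illustrated with a minimal sketch. This is a hypothetical one-compartment tracer washout model with made-up values, not the authors' mechanistic transport model: the same decay rate is fitted by least squares from a complete time series and from an interrupted series mimicking a rotating multi-plant schedule.

```python
import numpy as np

# Hypothetical tracer washout: signal(t) = A * exp(-k * t).
# Fit k from (a) a complete 60-min series and (b) an interrupted
# series with measurement gaps, as in a rotating multi-plant schedule.

rng = np.random.default_rng(0)
A_true, k_true = 100.0, 0.05          # arbitrary units, 1/min

def fit_k(t, y):
    """Log-linear least squares: ln y = ln A - k*t, so k = -slope."""
    slope, intercept = np.polyfit(t, np.log(y), 1)
    return -slope

t_full = np.arange(0.0, 60.0, 1.0)           # one point every minute
t_gapped = t_full[(t_full % 12) < 4]         # measure 4 min out of every 12

for t in (t_full, t_gapped):
    # multiplicative 1% noise on the ideal signal
    y = A_true * np.exp(-k_true * t) * np.exp(rng.normal(0.0, 0.01, t.size))
    print(f"{t.size:2d} points -> k = {fit_k(t, y):.4f}")
```

Both fits recover k close to 0.05 despite the gapped series holding only a third of the points; in a real design study one would compare the parameter *uncertainties*, as the abstract describes.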
Brase, Jan C.; Kronenwett, Ralf; Petry, Christoph; Denkert, Carsten; Schmidt, Marcus
2013-01-01
Several multigene tests have been developed for breast cancer patients to predict the individual risk of recurrence. Most of the first generation tests rely on proliferation-associated genes and are commonly carried out in central reference laboratories. Here, we describe the development of a second generation multigene assay, the EndoPredict test, a prognostic multigene expression test for estrogen receptor (ER) positive, human epidermal growth factor receptor (HER2) negative (ER+/HER2−) breast cancer patients. The EndoPredict gene signature was initially established in a large high-throughput microarray-based screening study. The key steps for biomarker identification are discussed in detail, in comparison to the establishment of other multigene signatures. After biomarker selection, genes and algorithms were transferred to a diagnostic platform (reverse transcription quantitative PCR (RT-qPCR)) to allow for assaying formalin-fixed, paraffin-embedded (FFPE) samples. A comprehensive analytical validation was performed and a prospective proficiency testing study with seven pathological laboratories finally proved that EndoPredict can be reliably used in the decentralized setting. Three independent large clinical validation studies (n = 2,257) demonstrated that EndoPredict offers independent prognostic information beyond current clinicopathological parameters and clinical guidelines. The review article summarizes several important steps that should be considered for the development process of a second generation multigene test and offers a means for transferring a microarray signature from the research laboratory to clinical practice. PMID:27605191
SACRB-MAC: A High-Capacity MAC Protocol for Cognitive Radio Sensor Networks in Smart Grid
Yang, Zhutian; Shi, Zhenguo; Jin, Chunlin
2016-01-01
The Cognitive Radio Sensor Network (CRSN) is considered as a viable solution to enhance various aspects of the electric power grid and to realize a smart grid. However, several challenges for CRSNs are generated due to the harsh wireless environment in a smart grid. As a result, throughput and reliability become critical issues. On the other hand, the spectrum aggregation technique is expected to play an important role in CRSNs in a smart grid. By using spectrum aggregation, the throughput of CRSNs can be improved efficiently, so as to address the unique challenges of CRSNs in a smart grid. In this regard, we proposed Spectrum Aggregation Cognitive Receiver-Based MAC (SACRB-MAC), which employs the spectrum aggregation technique to improve the throughput performance of CRSNs in a smart grid. Moreover, SACRB-MAC is a receiver-based MAC protocol, which can provide a good reliability performance. Analytical and simulation results demonstrate that SACRB-MAC is a promising solution for CRSNs in a smart grid. PMID:27043573
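The throughput gain from spectrum aggregation described above can be sketched with a small worked example. The channel bandwidths and SNRs below are made-up values, and the per-channel Shannon capacity B·log₂(1 + SNR) is a textbook idealization, not the paper's protocol model:

```python
import math

# Illustrative only: aggregated link capacity when a cognitive node
# bonds several non-contiguous idle channels. Each channel contributes
# its Shannon capacity B * log2(1 + SNR); aggregation sums them.

channels = [            # (bandwidth in Hz, linear SNR) -- made-up values
    (200e3, 10.0),
    (150e3, 31.6),
    (100e3, 3.2),
]

def capacity(bw_hz, snr):
    """Shannon capacity of one channel in bit/s."""
    return bw_hz * math.log2(1.0 + snr)

per_channel = [capacity(bw, snr) for bw, snr in channels]
aggregate = sum(per_channel)
print(f"aggregated capacity = {aggregate / 1e3:.0f} kbit/s")
```

The aggregate exceeds what any single channel offers, which is the basic argument for aggregation in throughput-constrained CRSNs.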
NASA Astrophysics Data System (ADS)
Close, Dan; Webb, James; Ripp, Steven; Patterson, Stacey; Sayler, Gary
2012-06-01
Traditionally, human toxicant bioavailability screening has been forced to proceed in either a high throughput fashion using prokaryotic or lower eukaryotic targets with minimal applicability to humans, or in a more expensive, lower throughput manner that uses fluorescent or bioluminescent human cells to directly provide human bioavailability data. While these efforts are often sufficient for basic scientific research, they prevent the rapid and remote identification of potentially toxic chemicals required for modern biosecurity applications. To merge the advantages of high throughput, low cost screening regimens with the direct bioavailability assessment of human cell line use, we re-engineered the bioluminescent bacterial luciferase gene cassette to function autonomously (without exogenous stimulation) within human cells. Optimized cassette expression provides for fully endogenous bioluminescent production, allowing continuous, real time monitoring of the bioavailability and toxicology of various compounds in an automated fashion. To assess the functionality of this system, two sets of bioluminescent human cells were developed. The first was programmed to suspend bioluminescent production upon toxicological challenge to mimic the non-specific detection of a toxicant. The second induced bioluminescence upon detection of a specific compound to demonstrate autonomous remote target identification. These cells were capable of responding to μM concentrations of the toxicant n-decanal, and allowed for continuous monitoring of cellular health throughout the treatment process. Induced bioluminescence was generated through treatment with doxycycline and was detectable upon dosage at a 100 ng/ml concentration. These results demonstrate that leveraging autonomous bioluminescence allows for low-cost, high throughput direct assessment of toxicant bioavailability.
High-throughput cultivation and screening platform for unicellular phototrophs.
Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus
2014-09-16
High-throughput cultivation and screening methods allow a parallel, miniaturized and cost efficient processing of many samples. These methods however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter-well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration, MALDI-TOF-MS, as well as a novel vitality measurement protocol, have already been established and can be monitored during cultivation. Measurement of growth parameters can be used as inputs for the system to allow for periodic automatic dilutions and therefore a semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid and long term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows for high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established and the platform is easily expandable both in quality, i.e. with further parameters to screen for additional targets and in quantity, i.e. size or number of processed samples.
toxoMine: an integrated omics data warehouse for Toxoplasma gondii systems biology research
Rhee, David B.; Croken, Matthew McKnight; Shieh, Kevin R.; Sullivan, Julie; Micklem, Gos; Kim, Kami; Golden, Aaron
2015-01-01
Toxoplasma gondii (T. gondii) is an obligate intracellular parasite that must monitor for changes in the host environment and respond accordingly; however, it is still not fully known which genetic or epigenetic factors are involved in regulating virulence traits of T. gondii. There are on-going efforts to elucidate the mechanisms regulating the stage transition process via the application of high-throughput epigenomics, genomics and proteomics techniques. Given the range of experimental conditions and the typical yield from such high-throughput techniques, a new challenge arises: how to effectively collect, organize and disseminate the generated data for subsequent data analysis. Here, we describe toxoMine, which provides a powerful interface to support sophisticated integrative exploration of high-throughput experimental data and metadata, providing researchers with a more tractable means toward understanding how genetic and/or epigenetic factors play a coordinated role in determining pathogenicity of T. gondii. As a data warehouse, toxoMine allows integration of high-throughput data sets with public T. gondii data. toxoMine is also able to execute complex queries involving multiple data sets with straightforward user interaction. Furthermore, toxoMine allows users to define their own parameters during the search process that gives users near-limitless search and query capabilities. The interoperability feature also allows users to query and examine data available in other InterMine systems, which would effectively augment the search scope beyond what is available to toxoMine. toxoMine complements the major community database ToxoDB by providing a data warehouse that enables more extensive integrative studies for T. gondii. Given all these factors, we believe it will become an indispensable resource to the greater infectious disease research community. Database URL: http://toxomine.org PMID:26130662
Zhao, Siwei; Zhu, Kan; Zhang, Yan; Zhu, Zijie; Xu, Zhengping; Zhao, Min; Pan, Tingrui
2014-11-21
Both endogenous and externally applied electrical stimulation can affect a wide range of cellular functions, including growth, migration, differentiation and division. Among those effects, the electrical field (EF)-directed cell migration, also known as electrotaxis, has received broad attention because it holds great potential in facilitating clinical wound healing. Electrotaxis experiment is conventionally conducted in centimetre-sized flow chambers built in Petri dishes. Despite the recent efforts to adapt microfluidics for electrotaxis studies, the current electrotaxis experimental setup is still cumbersome due to the needs of an external power supply and EF controlling/monitoring systems. There is also a lack of parallel experimental systems for high-throughput electrotaxis studies. In this paper, we present a first independently operable microfluidic platform for high-throughput electrotaxis studies, integrating all functional components for cell migration under EF stimulation (except microscopy) on a compact footprint (the same as a credit card), referred to as ElectroTaxis-on-a-Chip (ETC). Inspired by the R-2R resistor ladder topology in digital signal processing, we develop a systematic approach to design an infinitely expandable microfluidic generator of EF gradients for high-throughput and quantitative studies of EF-directed cell migration. Furthermore, a vacuum-assisted assembly method is utilized to allow direct and reversible attachment of our device to existing cell culture media on biological surfaces, which separates the cell culture and device preparation/fabrication steps. We have demonstrated that our ETC platform is capable of screening human cornea epithelial cell migration under the stimulation of an EF gradient spanning over three orders of magnitude. The screening results lead to the identification of the EF-sensitive range of that cell type, which can provide valuable guidance to the clinical application of EF-facilitated wound healing.
Future technologies for monitoring HIV drug resistance and cure.
Parikh, Urvi M; McCormick, Kevin; van Zyl, Gert; Mellors, John W
2017-03-01
Sensitive, scalable and affordable assays are critically needed for monitoring the success of interventions for preventing, treating and attempting to cure HIV infection. This review evaluates current and emerging technologies that are applicable for both surveillance of HIV drug resistance (HIVDR) and characterization of HIV reservoirs that persist despite antiretroviral therapy and are obstacles to curing HIV infection. Next-generation sequencing (NGS) has the potential to be adapted into high-throughput, cost-efficient approaches for HIVDR surveillance and monitoring during continued scale-up of antiretroviral therapy and rollout of preexposure prophylaxis. Similarly, improvements in PCR and NGS are resulting in higher throughput single genome sequencing to detect intact proviruses and to characterize HIV integration sites and clonal expansions of infected cells. Current population genotyping methods for resistance monitoring are high cost and low throughput. NGS, combined with simpler sample collection and storage matrices (e.g. dried blood spots), has considerable potential to broaden global surveillance and patient monitoring for HIVDR. Recent adaptions of NGS to identify integration sites of HIV in the human genome and to characterize the integrated HIV proviruses are likely to facilitate investigations of the impact of experimental 'curative' interventions on HIV reservoirs.
High Throughput PBTK: Open-Source Data and Tools for ...
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.
Hess, Jon E; Campbell, Nathan R; Docker, Margaret F; Baker, Cyndi; Jackson, Aaron; Lampman, Ralph; McIlraith, Brian; Moser, Mary L; Statler, David P; Young, William P; Wildbill, Andrew J; Narum, Shawn R
2015-01-01
Next-generation sequencing data can be mined for highly informative single nucleotide polymorphisms (SNPs) to develop high-throughput genomic assays for nonmodel organisms. However, choosing a set of SNPs to address a variety of objectives can be difficult because SNPs are often not equally informative. We developed an optimal combination of 96 high-throughput SNP assays from a total of 4439 SNPs identified in a previous study of Pacific lamprey (Entosphenus tridentatus) and used them to address four disparate objectives: parentage analysis, species identification and characterization of neutral and adaptive variation. Nine of these SNPs are FST outliers, and five of these outliers are localized within genes and significantly associated with geography, run-timing and dwarf life history. Two of the 96 SNPs were diagnostic for two other lamprey species that were morphologically indistinguishable at early larval stages and were sympatric in the Pacific Northwest. The majority (85) of SNPs in the panel were highly informative for parentage analysis, that is, putatively neutral with high minor allele frequency across the species' range. Results from three case studies are presented to demonstrate the broad utility of this panel of SNP markers in this species. As Pacific lamprey populations are undergoing rapid decline, these SNPs provide an important resource to address critical uncertainties associated with the conservation and recovery of this imperiled species. © 2014 John Wiley & Sons Ltd.
Abd El-Hay, Soad S; Colyer, Christa L
2017-01-13
Despite the importance of nitric oxide (NO) in vascular physiology and pathology, a high-throughput method for the quantification of its vascular generation is lacking. By using the fluorescent probe 4-amino-5-methylamino-2',7'-difluorofluorescein (DAF-FM), we have optimized a simple method for the determination of the generation of endothelial nitric oxide in a microplate format. A nitric oxide donor was used (3-morpholinosydnonimine hydrochloride, SIN-1). Different factors affecting the method were studied, such as the effects of dye concentration, different buffers, time of reaction, gain, and number of flashes. Beer's law was linear over a nanomolar range (1-10 nM) of SIN-1 with wavelengths of maximum excitation and emission at 495 and 525 nm; the limit of detection reached 0.897 nM. Under the optimized conditions, the generation of rat aortic endothelial NO was measured by incubating DAF-FM with serial concentrations (10-1000 µM) of acetylcholine (ACh) for 3 min. To confirm specificity, Nω-nitro-L-arginine methyl ester (L-NAME), the standard inhibitor of endothelial NO synthase, was found to inhibit the ACh-stimulated generation of NO. In addition, vessels pre-exposed for 1 h to 400 µM of the endothelial damaging agent methyl glyoxal showed inhibited NO generation when compared to the control stimulated by ACh. The capability of the method to measure micro-volume samples makes it convenient for the simultaneous handling of a very large number of samples. Additionally, it allows samples to be run simultaneously with their replicates to ensure identical experimental conditions, thus minimizing the effect of biological variability.
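The linear calibration and detection limit reported above can be illustrated with a minimal sketch. The fluorescence readings below are invented for illustration (not the authors' data), and the LOD estimate uses the common 3.3·σ/slope convention, which may differ from the method the authors applied:

```python
import numpy as np

# Hypothetical microplate calibration for a NO donor (SIN-1), 1-10 nM.
# Signal values are illustrative fluorescence intensities, not measured data.
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # nM
signal = np.array([12.1, 22.8, 44.5, 66.0, 88.2, 109.9])  # arbitrary units

# Ordinary least-squares line through the calibration points.
slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation

# Common limit-of-detection estimate: 3.3 * sigma / slope.
lod = 3.3 * sigma / slope
print(f"slope = {slope:.2f} units/nM, LOD ~ {lod:.3f} nM")
```

With real plate-reader data the same two lines (fit, then 3.3·σ/slope) give the sub-nanomolar LOD figure quoted in the abstract.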
Displacement of particles in microfluidics by laser-generated tandem bubbles
NASA Astrophysics Data System (ADS)
Lautz, Jaclyn; Sankin, Georgy; Yuan, Fang; Zhong, Pei
2010-11-01
The dynamic interaction between laser-generated tandem bubble and individual polystyrene particles of 2 and 10 μm in diameter is studied in a microfluidic channel (25 μm height) by high-speed imaging and particle image velocimetry. The asymmetric collapse of the tandem bubble produces a pair of microjets and associated long-lasting vortices that can propel a single particle to a maximum velocity of 1.4 m/s in 30 μs after the bubble collapse with a resultant directional displacement up to 60 μm in 150 μs. This method may be useful for high-throughput cell sorting in microfluidic devices.
Holes generation in glass using large spot femtosecond laser pulses
NASA Astrophysics Data System (ADS)
Berg, Yuval; Kotler, Zvi; Shacham-Diamand, Yosi
2018-03-01
We demonstrate high-throughput generation of symmetrical holes in fused silica glass using large spot size, femtosecond IR-laser irradiation, which modifies the glass properties and yields an enhanced chemical etching rate. The process relies on a balanced interplay between the nonlinear Kerr effect and multiphoton absorption in the glass, which translates into symmetrical glass modification and an increased etching rate. The use of a large laser spot size makes it possible to process thick glasses at high speeds over a large area. We have demonstrated such fabricated holes with an aspect ratio of 1:10 in 1 mm thick glass samples.
Tepper, Naama; Shlomi, Tomer
2011-01-21
Combinatorial approaches in metabolic engineering work by generating genetic diversity in a microbial population followed by screening for strains with improved phenotypes. One of the most common goals in this field is the generation of a high rate chemical producing strain. A major hurdle with this approach is that many chemicals do not have easy to recognize attributes, making their screening expensive and time consuming. To address this problem, it was previously suggested to use microbial biosensors to facilitate the detection and quantification of chemicals of interest. Here, we present novel computational methods to: (i) rationally design microbial biosensors for chemicals of interest based on substrate auxotrophy that would enable their high-throughput screening; (ii) predict engineering strategies for coupling the synthesis of a chemical of interest with the production of a proxy metabolite for which high-throughput screening is possible via a designed biosensor. The biosensor design method is validated based on known genetic modifications in an array of E. coli strains auxotrophic to various amino acids. Predicted chemical production rates achievable via the biosensor-based approach are shown to potentially improve upon those predicted by current rational strain design approaches. (A Matlab implementation of the biosensor design method is available via http://www.cs.technion.ac.il/~tomersh/tools).
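The production-coupling idea above rests on constraint-based (flux balance) modeling. A minimal toy sketch, using an invented three-reaction network rather than the authors' Matlab tool: target production is maximized by linear programming while a minimum "biomass" flux is enforced, which is the essence of growth-coupled design.

```python
from scipy.optimize import linprog

# Toy network (hypothetical, for illustration only):
#   v1: uptake of substrate S        (0 <= v1 <= 10)
#   v2: S -> biomass precursor       (v2 >= 2, the enforced coupling)
#   v3: S -> target chemical         (v3 >= 0)
# Steady state on S: v1 - v2 - v3 = 0.

A_eq = [[1.0, -1.0, -1.0]]      # S mass balance
b_eq = [0.0]
c = [0.0, 0.0, -1.0]            # maximize v3 (linprog minimizes, so negate)
bounds = [(0, 10), (2, None), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("max target flux:", -res.fun)   # -> 8.0 (all uptake beyond biomass goes to target)
```

Real applications replace this three-reaction toy with a genome-scale stoichiometric matrix, but the optimization structure (equality mass balances, flux bounds, linear objective) is the same.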
Tebani, Abdellah; Afonso, Carlos; Marret, Stéphane; Bekri, Soumeya
2016-01-01
The rise of technologies that simultaneously measure thousands of data points represents the heart of systems biology. These technologies have had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era. Systems biology aims to achieve systemic exploration of complex interactions in biological systems. Driven by high-throughput omics technologies and the computational surge, it enables multi-scale and insightful overviews of cells, organisms, and populations. Precision medicine capitalizes on these conceptual and technological advancements and stands on two main pillars: data generation and data modeling. High-throughput omics technologies allow the retrieval of comprehensive and holistic biological information, whereas computational capabilities enable high-dimensional data modeling and, therefore, accessible and user-friendly visualization. Furthermore, bioinformatics has enabled comprehensive multi-omics and clinical data integration for insightful interpretation. Despite their promise, the translation of these technologies into clinically actionable tools has been slow. In this review, we present state-of-the-art multi-omics data analysis strategies in a clinical context. The challenges of omics-based biomarker translation are discussed. Perspectives regarding the use of multi-omics approaches for inborn errors of metabolism (IEM) are presented by introducing a new paradigm shift in addressing IEM investigations in the post-genomic era. PMID:27649151
Liu, Shanlin; Yang, Chentao; Zhou, Chengran; Zhou, Xin
2017-12-01
Over the past decade, biodiversity researchers have dedicated tremendous efforts to constructing DNA reference barcodes for rapid species registration and identification. Although analytical cost for standard DNA barcoding has been significantly reduced since early 2000, further dramatic reduction in barcoding costs is unlikely because Sanger sequencing is approaching its limits in throughput and chemistry cost. Constraints in barcoding cost not only led to unbalanced barcoding efforts around the globe, but also prevented high-throughput sequencing (HTS)-based taxonomic identification from applying binomial species names, which provide crucial linkages to biological knowledge. We developed an Illumina-based pipeline, HIFI-Barcode, to produce full-length Cytochrome c oxidase subunit I (COI) barcodes from pooled polymerase chain reaction amplicons generated by individual specimens. The new pipeline generated accurate barcode sequences that were comparable to Sanger standards, even for different haplotypes of the same species that were only a few nucleotides different from each other. Additionally, the new pipeline was much more sensitive in recovering amplicons at low quantity. The HIFI-Barcode pipeline successfully recovered barcodes from more than 78% of the polymerase chain reactions that didn't show clear bands on the electrophoresis gel. Moreover, sequencing results based on the single molecular sequencing platform Pacbio confirmed the accuracy of the HIFI-Barcode results. Altogether, the new pipeline can provide an improved solution to produce full-length reference barcodes at about one-tenth of the current cost, enabling construction of comprehensive barcode libraries for local fauna, leading to a feasible direction for DNA barcoding global biomes. © The Authors 2017. Published by Oxford University Press.
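The specimen-assignment step in an amplicon-pooling pipeline like the one above can be sketched as follows. The tag sequences and reads are made up for illustration, and this is a generic demultiplexing pattern, not HIFI-Barcode's actual implementation: each read is assigned to the specimen whose sample tag lies within a small Hamming distance of the read's leading bases.

```python
# Minimal demultiplexing sketch: assign pooled reads back to specimens
# by the sample tag at the read's 5' end, tolerating one mismatch.
# Tags and reads below are invented for illustration.

TAGS = {"ACGTAC": "specimen_1", "TTGCAA": "specimen_2", "GGATCC": "specimen_3"}
TAG_LEN = 6

def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def assign(read, max_mismatch=1):
    """Return the specimen whose tag best matches the read prefix, or None."""
    prefix = read[:TAG_LEN]
    dist, name = min((hamming(prefix, tag), name) for tag, name in TAGS.items())
    return name if dist <= max_mismatch else None

reads = ["ACGTACGGTTAGC",   # exact tag match
         "TTGCATAAGGCCA",   # one mismatch in the tag
         "CCCCCCAAAAAAA"]   # no tag within 1 mismatch -> unassigned
for r in reads:
    print(r[:TAG_LEN], "->", assign(r))
```

A production pipeline adds quality filtering, tag-jump handling, and consensus building per specimen, but the distance-thresholded assignment shown here is the core of returning pooled reads to individuals.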
High-throughput biological techniques, like microarrays and drug screens, generate an enormous amount of data that may be critically important for cancer researchers and clinicians. Being able to manipulate the data to extract those pieces of interest, however, can require computational or bioinformatics skills beyond those of the average scientist.
The US EPA ToxCast program is using in vitro, high-throughput screening (HTS) to profile and model the bioactivity of environmental chemicals. The main goal of the ToxCast program is to generate predictive signatures of toxicity that ultimately provide rapid and cost-effective me...
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of synthetic assembled gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-bas...
USDA-ARS?s Scientific Manuscript database
Ongoing developments and cost decreases in next-generation sequencing (NGS) technologies have led to an increase in their application, which has greatly enhanced the fields of genetics and genomics. Mapping sequence reads onto a reference genome is a fundamental step in the analysis of NGS data. Eff...
Comparison of a rational vs. high throughput approach for rapid salt screening and selection.
Collman, Benjamin M; Miller, Jonathan M; Seadeek, Christopher; Stambek, Julie A; Blackburn, Anthony C
2013-01-01
In recent years, high throughput (HT) screening has become the most widely used approach for early phase salt screening and selection in a drug discovery/development setting. The purpose of this study was to compare a rational approach for salt screening and selection to those results previously generated using a HT approach. The rational approach involved a much smaller number of initial trials (one salt synthesis attempt per counterion) that were selected based on a few strategic solubility determinations of the free form combined with a theoretical analysis of the ideal solvent solubility conditions for salt formation. Salt screening results for sertraline, tamoxifen, and trazodone using the rational approach were compared to those previously generated by HT screening. The rational approach produced similar results to HT screening, including identification of the commercially chosen salt forms, but with a fraction of the crystallization attempts. Moreover, the rational approach provided enough solid from the very initial crystallization of a salt for more thorough and reliable solid-state characterization and thus rapid decision-making. The crystallization techniques used in the rational approach mimic larger-scale process crystallization, allowing smoother technical transfer of the selected salt to the process chemist.
Saxena, Shalini; Durgam, Laxman; Guruprasad, Lalitha
2018-05-14
Development of new antimalarial drugs continues to be hugely important because of the resistance of the malarial parasite to currently used drugs. Owing to the parasite's reliance on glycolysis for energy generation, glycolytic enzymes have played an important role as potential targets for the development of new drugs. Plasmodium falciparum lactate dehydrogenase (PfLDH) is a key enzyme for energy generation in malarial parasites and is considered a potential antimalarial target. Presently, nearly 15 crystal structures bound with inhibitors and substrate are available in the Protein Data Bank (PDB). In the present work, we considered multiple crystal structures with bound inhibitors showing affinities in the range of 1.4 × 10² to 1.3 × 10⁶ nM and optimized the pharmacophore based on the energy involved in binding, an approach termed e-pharmacophore mapping. High-throughput virtual screening (HTVS) combined with molecular docking, ADME predictions and molecular dynamics simulation led to the identification of 20 potential compounds that could be further developed as novel PfLDH inhibitors.
NCBI GEO: archive for functional genomics data sets--10 years on.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Muertter, Rolf N; Holko, Michelle; Ayanbule, Oluwabukunmi; Yefanov, Andrey; Soboleva, Alexandra
2011-01-01
A decade ago, the Gene Expression Omnibus (GEO) database was established at the National Center for Biotechnology Information (NCBI). The original objective of GEO was to serve as a public repository for high-throughput gene expression data generated mostly by microarray technology. However, the research community quickly applied microarrays to non-gene-expression studies, including examination of genome copy number variation and genome-wide profiling of DNA-binding proteins. Because the GEO database was designed with a flexible structure, it was possible to quickly adapt the repository to store these data types. More recently, as the microarray community switches to next-generation sequencing technologies, GEO has again adapted to host these data sets. Today, GEO stores over 20,000 microarray- and sequence-based functional genomics studies, and continues to handle the majority of direct high-throughput data submissions from the research community. Multiple mechanisms are provided to help users effectively search, browse, download and visualize the data at the level of individual genes or entire studies. This paper describes recent database enhancements, including new search and data representation tools, as well as a brief review of how the community uses GEO data. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/.
McCann, Joshua C.; Wickersham, Tryon A.; Loor, Juan J.
2014-01-01
Diversity in the forestomach microbiome is one of the key features of ruminant animals. The diverse microbial community adapts to a wide array of dietary feedstuffs and management strategies. Understanding rumen microbiome composition, adaptation, and function has global implications ranging from climatology to applied animal production. Classical knowledge of rumen microbiology was based on anaerobic, culture-dependent methods. Next-generation sequencing and other molecular techniques have uncovered novel features of the rumen microbiome. For instance, pyrosequencing of the 16S ribosomal RNA gene has revealed the taxonomic identity of bacteria and archaea to the genus level, and when complemented with barcoding adds multiple samples to a single run. Whole genome shotgun sequencing generates true metagenomic sequences to predict the functional capability of a microbiome, and can also be used to construct genomes of isolated organisms. Integration of high-throughput data describing the rumen microbiome with classic fermentation and animal performance parameters has produced meaningful advances and opened additional areas for study. In this review, we highlight recent studies of the rumen microbiome in the context of cattle production focusing on nutrition, rumen development, animal efficiency, and microbial function. PMID:24940050
Optical element for full spectral purity from IR-generated EUV light sources
NASA Astrophysics Data System (ADS)
van den Boogaard, A. J. R.; Louis, E.; van Goor, F. A.; Bijkerk, F.
2009-03-01
Laser-produced plasma (LPP) sources are generally considered attractive for high-power EUV production in next-generation lithography equipment. Such plasmas are most efficiently excited by the relatively long infrared wavelengths of CO2 lasers, but a significant part of the rotational-vibrational excitation lines of the CO2 radiation is backscattered by the plasma's critical-density surface and is consequently present as parasitic radiation in the spectrum of such sources. Since most optical elements in the EUV collecting and imaging train have a high reflection coefficient for IR radiation, undesirable heating phenomena at the resist level are likely to occur. In this study a completely new principle is employed to obtain full separation of EUV and IR radiation from the source with a single optical component. While applying a transmission filter would come at the expense of EUV throughput, this technique potentially enables wavelength separation without losing reflectance compared with a conventional Mo/Si multilayer-coated element. As a result, this method provides full spectral purity from the source without loss in EUV throughput. Detailed calculations on the principle of operation are presented.
NEL, ANDRE; XIA, TIAN; MENG, HUAN; WANG, XIANG; LIN, SIJIE; JI, ZHAOXIA; ZHANG, HAIYUAN
2014-01-01
Conspectus The production of engineered nanomaterials (ENMs) is a scientific breakthrough in material design and the development of new consumer products. While the successful implementation of nanotechnology is important for the growth of the global economy, we also need to consider the possible environmental health and safety (EHS) impact as a result of the novel physicochemical properties that could generate hazardous biological outcomes. In order to assess ENM hazard, reliable and reproducible screening approaches are needed to test the basic materials as well as nano-enabled products. A platform is required to investigate the potentially endless number of bio-physicochemical interactions at the nano/bio interface, in response to which we have developed a predictive toxicological approach. We define a predictive toxicological approach as the use of mechanisms-based high throughput screening in vitro to make predictions about the physicochemical properties of ENMs that may lead to the generation of pathology or disease outcomes in vivo. The in vivo results are used to validate and improve the in vitro high throughput screening (HTS) and to establish structure-activity relationships (SARs) that allow hazard ranking and modeling by an appropriate combination of in vitro and in vivo testing. This notion is in agreement with the landmark 2007 report from the US National Academy of Sciences, “Toxicity Testing in the 21st Century: A Vision and a Strategy” (http://www.nap.edu/catalog.php?record_id=11970), which advocates increased efficiency of toxicity testing by transitioning from qualitative, descriptive animal testing to quantitative, mechanistic and pathway-based toxicity testing in human cells or cell lines using high throughput approaches. Accordingly, we have implemented HTS approaches to screen compositional and combinatorial ENM libraries to develop hazard ranking and structure-activity relationships that can be used for predicting in vivo injury outcomes. 
This predictive approach allows the bulk of the screening analysis and high volume data generation to be carried out in vitro, following which limited, but critical, validation studies are carried out in animals or whole organisms. Risk reduction in the exposed human or environmental populations can then focus on limiting or avoiding exposures that trigger these toxicological responses as well as implementing safer design of potentially hazardous ENMs. In this communication, we review the tools required for establishing predictive toxicology paradigms to assess inhalation and environmental toxicological scenarios through the use of compositional and combinatorial ENM libraries, mechanism-based HTS assays, hazard ranking and development of nano-SARs. We will discuss the major injury paradigms that have emerged based on specific ENM properties, as well as describing the safer design of ZnO nanoparticles based on characterization of dissolution chemistry as a major predictor of toxicity. PMID:22676423
A biosensor generated via high throughput screening quantifies cell edge Src dynamics
Gulyani, Akash; Vitriol, Eric; Allen, Richard; Wu, Jianrong; Gremyachinskiy, Dmitriy; Lewis, Steven; Dewar, Brian; Graves, Lee M.; Kay, Brian K.; Kuhlman, Brian; Elston, Tim; Hahn, Klaus M.
2011-01-01
Fluorescent biosensors for living cells currently require laborious optimization and a unique design for each target. They are limited by the availability of naturally occurring ligands with appropriate target specificity. Here we describe a biosensor based on an engineered fibronectin monobody scaffold that can be tailored to bind different targets via high throughput screening. This Src family kinase (SFK) biosensor was made by derivatizing a monobody specific for activated SFK with a bright dye whose fluorescence increases upon target binding. We identified sites for dye attachment and alterations to eliminate vesiculation in living cells, providing a generalizable scaffold for biosensor production. This approach minimizes cell perturbation because it senses endogenous, unmodified target, and because sensitivity is enhanced by direct dye excitation. Automated correlation of cell velocities and SFK activity revealed that SFK are activated specifically during protrusion. Activity correlates with velocity, and peaks 1–2 microns from the leading edge. PMID:21666688
High-throughput detection of ethanol-producing cyanobacteria in a microdroplet platform.
Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G; Abell, Chris
2015-05-06
Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, were used to create this new platform to screen genetically engineered strains of cyanobacteria with different levels of ethanol production.
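Quantitative enzymatic detection of this kind typically works through a fluorescence standard curve fitted from known concentrations. A minimal sketch, with entirely synthetic calibration values (not data from the study):

```python
import numpy as np

# Hypothetical calibration: fluorescence readings for known ethanol standards (mM).
standards_mM = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
fluorescence = np.array([10.0, 60.0, 110.0, 210.0, 410.0])  # synthetic, linear

# Fit a linear standard curve: F = slope * c + intercept.
slope, intercept = np.polyfit(standards_mM, fluorescence, 1)

def ethanol_conc(signal: float) -> float:
    """Invert the standard curve to estimate concentration from a droplet's signal."""
    return (signal - intercept) / slope

print(round(ethanol_conc(160.0), 2))  # → 1.5
```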
Integrated, multi-scale, spatial-temporal cell biology--A next step in the post genomic era.
Horwitz, Rick
2016-03-01
New microscopic approaches, high-throughput imaging, and gene editing promise major new insights into cellular behaviors. When coupled with genomic and other 'omic information and "mined" for correlations and associations, a new breed of powerful and useful cellular models should emerge. These top down, coarse-grained, and statistical models, in turn, can be used to form hypotheses merging with fine-grained, bottom up mechanistic studies and models that are the backbone of cell biology. The goal of the Allen Institute for Cell Science is to develop the top down approach by developing a high throughput microscopy pipeline that is integrated with modeling, using gene edited hiPS cell lines in various physiological and pathological contexts. The output of these experiments and models will be an "animated" cell, capable of integrating and analyzing image data generated from experiments and models. Copyright © 2015 Elsevier Inc. All rights reserved.
CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.
Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W
2010-09-01
Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
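The core idea, correcting a noisy per-frame classifier with a hidden Markov model so that spurious state flips at transitions are suppressed, can be illustrated with a small Viterbi decoder. The states, transition probabilities, and emission scores below are illustrative, not CellCognition's:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path given per-frame log-likelihoods (T x S),
    a log transition matrix (S x S), and log initial probabilities (S,)."""
    T, S = log_emit.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_init + log_emit[0]
    for t in range(1, T):
        for s in range(S):
            prev = score[t - 1] + log_trans[:, s]
            back[t, s] = int(np.argmax(prev))
            score[t, s] = prev[back[t, s]] + log_emit[t, s]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states (interphase=0, mitosis=1); the classifier flickers at frame 2.
emit = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6],
                        [0.9, 0.1], [0.2, 0.8], [0.1, 0.9]]))
trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))  # states are "sticky"
init = np.log(np.array([0.5, 0.5]))
print(viterbi(emit, trans, init))  # → [0, 0, 0, 0, 1, 1]  (frame-2 flicker smoothed away)
```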
Dentinger, Bryn T M; Margaritescu, Simona; Moncalvo, Jean-Marc
2010-07-01
We present two methods for DNA extraction from fresh and dried mushrooms that are adaptable to high-throughput sequencing initiatives, such as DNA barcoding. Our results show that these protocols yield ∼85% sequencing success from recently collected materials. Tests with both recent (<2 year) and older (>100 years) specimens reveal that older collections have low success rates and may be an inefficient resource for populating a barcode database. However, our method of extracting DNA from herbarium samples using a small amount of tissue is reliable and could be used for important historical specimens. The application of these protocols greatly reduces the time, and therefore the cost, of generating DNA sequences from mushrooms and other fungi vs. traditional extraction methods. The efficiency of these methods illustrates that standardization and streamlining of sample processing should be shifted from the laboratory to the field. © 2009 Blackwell Publishing Ltd.
Droplet barcoding for single cell transcriptomics applied to embryonic stem cells
Klein, Allon M; Mazutis, Linas; Akartuna, Ilke; Tallapragada, Naren; Veres, Adrian; Li, Victor; Peshkin, Leonid; Weitz, David A; Kirschner, Marc W
2015-01-01
It has long been the dream of biologists to map gene expression at the single cell level. With such data one might track heterogeneous cell sub-populations, and infer regulatory relationships between genes and pathways. Recently, RNA sequencing has achieved single cell resolution. What is limiting is an effective way to routinely isolate and process large numbers of individual cells for quantitative in-depth sequencing. We have developed a high-throughput droplet-microfluidic approach for barcoding the RNA from thousands of individual cells for subsequent analysis by next-generation sequencing. The method shows a surprisingly low noise profile and is readily adaptable to other sequencing-based assays. We analyzed mouse embryonic stem cells, revealing in detail the population structure and the heterogeneous onset of differentiation after LIF withdrawal. The reproducibility of these high-throughput single cell data allowed us to deconstruct cell populations and infer gene expression relationships. PMID:26000487
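Downstream of such droplet barcoding, reads are demultiplexed by cell barcode and collapsed by unique molecular identifier (UMI) before counting. A simplified sketch, assuming reads are already parsed into (barcode, UMI, gene) triples (hypothetical data, not the published pipeline):

```python
from collections import defaultdict

def count_umis(reads):
    """reads: iterable of (cell_barcode, umi, gene) triples.
    Returns {cell_barcode: {gene: number of distinct UMIs}} -- duplicate
    (barcode, umi, gene) triples are PCR copies and are counted once."""
    seen = defaultdict(set)                     # (cell, gene) -> set of UMIs
    for cell, umi, gene in reads:
        seen[(cell, gene)].add(umi)
    counts = defaultdict(dict)
    for (cell, gene), umis in seen.items():
        counts[cell][gene] = len(umis)
    return dict(counts)

reads = [
    ("ACGT", "AAA", "Pou5f1"), ("ACGT", "AAA", "Pou5f1"),  # PCR duplicate
    ("ACGT", "CCC", "Pou5f1"), ("TTGG", "GGG", "Nanog"),
]
print(count_umis(reads))  # → {'ACGT': {'Pou5f1': 2}, 'TTGG': {'Nanog': 1}}
```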
Oguntimein, Gbekeloluwa B.; Rodriguez, Jr., Miguel; Dumitrache, Alexandru; ...
2017-11-09
Here, we developed and prototyped a high-throughput microplate assay to assess anaerobic microorganisms and lignocellulosic biomasses in a rapid, cost-effective screen for consolidated bioprocessing potential. The Clostridium thermocellum parent Δhpt strain deconstructed Avicel to cellobiose and glucose, and generated lactic acid, formic acid, acetic acid and ethanol as fermentation products in titers and ratios similar to larger-scale fermentations, confirming the suitability of a plate-based method for C. thermocellum growth studies. C. thermocellum strain LL1210, with gene deletions in key central metabolic pathways, produced higher ethanol titers in the consolidated bioprocessing (CBP) plate assay for both Avicel and switchgrass fermentations when compared with the Δhpt strain. The prototype microplate assay system will facilitate high-throughput bioprospecting for new lignocellulosic biomass types, genetic variants and new microbial strains for bioethanol production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A Microfluidic Approach for Studying Piezo Channels.
Maneshi, M M; Gottlieb, P A; Hua, S Z
2017-01-01
Microfluidics is an interdisciplinary field intersecting many areas of engineering. A major goal in microfluidics is to combine physics, chemistry, biology, and biotechnology in practical devices that use low volumes of fluid to achieve high-throughput screening. Microfluidic approaches allow the study of cell growth and differentiation under a variety of conditions, including control of fluid flow that generates shear stress. Recently, Piezo1 channels were shown to respond to fluid shear stress and to be crucial for vascular development. This channel is ideal for studying fluid shear stress applied to cells using microfluidic devices. We have developed an approach that allows us to analyze the role of Piezo channels in any given cell and serves as a high-throughput screen for drug discovery. We show that this approach can provide detailed information about the inhibitors of Piezo channels. Copyright © 2017 Elsevier Inc. All rights reserved.
High-throughput mouse genotyping using robotics automation.
Linask, Kaari L; Lo, Cecilia W
2005-02-01
The use of mouse models is rapidly expanding in biomedical research. This has dictated the need for the rapid genotyping of mutant mouse colonies for more efficient utilization of animal holding space. We have established a high-throughput protocol for mouse genotyping using two robotics workstations: a liquid-handling robot to assemble PCR and a microfluidics electrophoresis robot for PCR product analysis. This dual-robotics setup incurs lower start-up costs than a fully automated system while still minimizing human intervention. Essential to this automation scheme is the construction of a database containing customized scripts for programming the robotics workstations. Using these scripts and the robotics systems, multiple combinations of genotyping reactions can be assembled simultaneously, allowing even complex genotyping data to be generated rapidly with consistency and accuracy. A detailed protocol, database, scripts, and additional background information are available at http://dir.nhlbi.nih.gov/labs/ldb-chd/autogene/.
Shabbir, Shagufta H.; Regan, Clinton J.; Anslyn, Eric V.
2009-01-01
A general approach to high-throughput screening of enantiomeric excess (ee) and concentration was developed by using indicator displacement assays (IDAs), and the protocol was then applied to the vicinal diol hydrobenzoin. The method involves the sequential utilization of what we define herein as screening, training, and analysis plates. Several enantioselective boronic acid-based receptors were screened by using 96-well plates, both for their ability to discriminate the enantiomers of hydrobenzoin and to find their optimal pairing with indicators resulting in the largest optical responses. The best receptor/indicator combination was then used to train an artificial neural network to determine concentration and ee. To prove the practicality of the developed protocol, analysis plates were created containing true unknown samples of hydrobenzoin generated by established Sharpless asymmetric dihydroxylation reactions, and the best ligand was correctly identified. PMID:19332790
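The mapping from optical responses to concentration and ee need not be a neural network to be illustrated; a plain linear least-squares calibration shows the same train-then-predict workflow on synthetic plate readings (the response matrix, offset, and samples below are invented for illustration and stand in for the paper's ANN):

```python
import numpy as np

# Synthetic stand-in: assume readings at two wavelengths depend linearly on
# (concentration, ee). A and offset are hypothetical response parameters.
A = np.array([[0.30, 0.10],
              [0.05, 0.20]])
offset = np.array([0.02, 0.01])

train_X = np.array([[1.0, 0.0], [1.0, 1.0], [2.0, 0.0], [2.0, -1.0], [1.5, 0.5]])
train_Y = train_X @ A.T + offset                  # simulated "training plate"

# Fit the inverse mapping reading -> (concentration, ee) by least squares.
Yb = np.hstack([train_Y, np.ones((len(train_Y), 1))])
W, *_ = np.linalg.lstsq(Yb, train_X, rcond=None)

def predict(reading):
    """Estimate [concentration, ee] from a two-wavelength reading."""
    return np.append(reading, 1.0) @ W

unknown = np.array([1.2, 0.3]) @ A.T + offset     # simulated "analysis plate" sample
conc, ee = predict(unknown)
print(round(conc, 3), round(ee, 3))  # → 1.2 0.3
```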
Holst-Jensen, Arne; Spilsberg, Bjørn; Arulandhu, Alfred J; Kok, Esther; Shi, Jianxin; Zel, Jana
2016-07-01
The emergence of high-throughput, massive or next-generation sequencing technologies has created a completely new foundation for molecular analyses. Various selective enrichment processes are commonly applied to facilitate detection of predefined (known) targets. Such approaches, however, inevitably introduce a bias and are prone to miss unknown targets. Here we review the application of high-throughput sequencing technologies and the preparation of fit-for-purpose whole genome shotgun sequencing libraries for the detection and characterization of genetically modified and derived products. The potential impact of these new sequencing technologies for the characterization, breeding selection, risk assessment, and traceability of genetically modified organisms and genetically modified products is yet to be fully acknowledged. The published literature is reviewed, and the prospects for future developments and use of the new sequencing technologies for these purposes are discussed.
Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.
2015-01-01
Whilst next-generation sequencing can report point mutations in fixed tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage, but is restricted to around 50 targets due to size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395
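The dosage call behind such an assay reduces to comparing each probe's read count, normalized to a control probe, between the tumour sample and a normal reference. A simplified sketch with hypothetical counts (not the MLPA-seq algorithm itself):

```python
def dosage_ratio(sample_counts, normal_counts, control="CTRL"):
    """Per-probe dosage relative to a control probe and a normal reference:
    ~1.0 = normal two copies, ~0.5 = heterozygous deletion, >>1 = amplification."""
    return {p: (sample_counts[p] / sample_counts[control]) /
               (normal_counts[p] / normal_counts[control])
            for p in sample_counts if p != control}

# Hypothetical read counts for a tumour sample and a normal reference.
sample = {"BRCA1_ex2": 250, "BRCA1_ex13": 520, "CCNE1": 2100, "CTRL": 500}
normal = {"BRCA1_ex2": 500, "BRCA1_ex13": 510, "CCNE1": 490, "CTRL": 500}

ratios = dosage_ratio(sample, normal)
print(round(ratios["BRCA1_ex2"], 2), round(ratios["CCNE1"], 2))  # → 0.5 4.29
```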
Embedded Hyperchaotic Generators: A Comparative Analysis
NASA Astrophysics Data System (ADS)
Sadoudi, Said; Tanougast, Camel; Azzaz, Mohamad Salah; Dandache, Abbas
In this paper, we present a comparative analysis of FPGA implementation performances, in terms of throughput and resource cost, of five well-known autonomous continuous hyperchaotic systems. The goal of this analysis is to identify the embedded hyperchaotic generator that leads to designs with small logic area cost, satisfactory throughput rates, low power consumption and the low latency required for embedded applications such as secure digital communications between embedded systems. To implement the four-dimensional (4D) chaotic systems, we use a new structural hardware architecture based on a direct VHDL description of the fourth-order Runge-Kutta method (RK-4). The comparative analysis shows that the hyperchaotic Lorenz generator provides attractive performances compared to those of the others. In fact, its hardware implementation requires only 2067 CLB-slices, 36 multipliers and no block RAMs, and achieves a throughput rate of 101.6 Mbps at the output of the FPGA circuit, at a clock frequency of 25.315 MHz, with a low latency time of 316 ns. Consequently, these good implementation performances make the embedded hyperchaotic Lorenz generator the best candidate for embedded communications applications.
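The RK-4 stepping that the VHDL architecture implements in hardware is easy to prototype in software first. A sketch of a classical fourth-order Runge-Kutta step applied to one commonly used 4D hyperchaotic Lorenz-type system (the equations and parameter values are illustrative, not the paper's exact generator):

```python
import numpy as np

def hyper_lorenz(v, a=10.0, b=28.0, c=8.0 / 3.0, d=1.3):
    """An illustrative 4D hyperchaotic Lorenz-type vector field."""
    x, y, z, w = v
    return np.array([a * (y - x) + w,
                     b * x - y - x * z,
                     x * y - c * z,
                     -d * w - y * z])

def rk4_step(f, v, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(v)
    k2 = f(v + 0.5 * h * k1)
    k3 = f(v + 0.5 * h * k2)
    k4 = f(v + h * k3)
    return v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate 1000 steps of h = 0.001 from an arbitrary initial condition.
v = np.array([1.0, 1.0, 1.0, 1.0])
for _ in range(1000):
    v = rk4_step(hyper_lorenz, v, 0.001)
print(np.all(np.isfinite(v)))  # → True
```

A hardware version would replace the floating-point arithmetic with fixed-point operators, but the dataflow of the four k-stages is the same.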
Application of ToxCast High-Throughput Screening and ...
Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.
A CRISPR Cas9 high-throughput genome editing toolkit for kinetoplastids
Beneke, Tom; Makin, Laura; Valli, Jessica; Sunter, Jack
2017-01-01
Clustered regularly interspaced short palindromic repeats (CRISPR), CRISPR-associated gene 9 (Cas9) genome editing is set to revolutionize genetic manipulation of pathogens, including kinetoplastids. CRISPR technology provides the opportunity to develop scalable methods for high-throughput production of mutant phenotypes. Here, we report development of a CRISPR-Cas9 toolkit that allows rapid tagging and gene knockout in diverse kinetoplastid species without requiring the user to perform any DNA cloning. We developed a new protocol for single-guide RNA (sgRNA) delivery using PCR-generated DNA templates which are transcribed in vivo by T7 RNA polymerase and an online resource (LeishGEdit.net) for automated primer design. We produced a set of plasmids that allows easy and scalable generation of DNA constructs for transfections in just a few hours. We show how these tools allow knock-in of fluorescent protein tags, modified biotin ligase BirA*, luciferase, HaloTag and small epitope tags, which can be fused to proteins at the N- or C-terminus, for functional studies of proteins and localization screening. These tools enabled generation of null mutants in a single round of transfection in promastigote form Leishmania major, Leishmania mexicana and bloodstream form Trypanosoma brucei; deleted genes were undetectable in non-clonal populations, enabling for the first time rapid and large-scale knockout screens. PMID:28573017
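The cloning-free sgRNA delivery hinges on a PCR product that places a T7 promoter immediately upstream of the 20-nt guide. A simplified primer-assembly sketch (the scaffold fragment below is a generic S. pyogenes sgRNA scaffold start, and the primer layout is an assumption for illustration, not LeishGEdit's exact design):

```python
T7_PROMOTER = "TAATACGACTCACTATAGG"   # T7 promoter; ends in GG for efficient initiation
SCAFFOLD_5P = "GTTTTAGAGCTAGAAATAGC"  # start of a generic Cas9 sgRNA scaffold

def sgRNA_forward_primer(guide20: str) -> str:
    """Assemble a forward primer: T7 promoter + 20-nt guide + scaffold overlap.
    T7 transcription of the resulting PCR product then starts at the guide."""
    guide20 = guide20.upper()
    if len(guide20) != 20 or set(guide20) - set("ACGT"):
        raise ValueError("guide must be a 20-nt ACGT sequence")
    return T7_PROMOTER + guide20 + SCAFFOLD_5P

primer = sgRNA_forward_primer("ACGTACGTACGTACGTACGT")  # hypothetical guide
print(len(primer))  # → 59
```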
High speed micromachining with high power UV laser
NASA Astrophysics Data System (ADS)
Patel, Rajesh S.; Bovatsek, James M.
2013-03-01
Increasing demand for creating fine features with high accuracy in the manufacturing of mobile electronic devices has fueled growth for lasers in manufacturing. High-power, high-repetition-rate ultraviolet (UV) lasers provide an opportunity to implement a cost-effective, high-quality, high-throughput micromachining process in a 24/7 manufacturing environment. The energy available per pulse and the pulse repetition frequency (PRF) of diode-pumped solid-state (DPSS) nanosecond UV lasers have increased steadily over the years. Efficient use of the available energy from a laser is important for generating accurate fine features at high speed with high quality. To achieve maximum material removal and minimal thermal damage in any laser micromachining application, use of the optimal process parameters, including energy density or fluence (J/cm²), pulse width, and repetition rate, is important. In this study we present a new high-power, high-PRF Quasar® 355-40 laser from Spectra-Physics with TimeShift™ technology for unique software-adjustable pulse width, pulse splitting, and pulse shaping capabilities. The benefits of these features for micromachining include improved throughput and quality. A specific example and results of silicon scribing are described to demonstrate the processing benefits of the Quasar's available power, PRF, and TimeShift technology.
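Fluence, the central process parameter here, is simply pulse energy divided by focused spot area, with pulse energy obtained from average power and repetition rate. A small helper with illustrative numbers (not the Quasar's specification):

```python
import math

def fluence_J_per_cm2(avg_power_W: float, rep_rate_Hz: float,
                      spot_diameter_um: float) -> float:
    """Fluence = energy per pulse / focal spot area.
    Energy per pulse = average power / pulse repetition frequency."""
    energy_J = avg_power_W / rep_rate_Hz
    radius_cm = (spot_diameter_um * 1e-4) / 2.0   # convert um -> cm
    area_cm2 = math.pi * radius_cm ** 2
    return energy_J / area_cm2

# e.g. 40 W at 200 kHz focused to a 20 um spot (illustrative values):
print(round(fluence_J_per_cm2(40.0, 200e3, 20.0), 1))  # → 63.7
```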
Automated processing of zebrafish imaging data: a survey.
Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-09-01
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister
2014-05-01
The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland (FIMM) was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high-throughput screening. The initial focus of the unit was multiwell-plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell-plate-based assays with a wide range of readout possibilities and a focus on ultraminiaturization to allow for affordable screening for academic users. In addition to high-throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed, and high-throughput applications for other types of research, such as genomics, sequencing, and biobanking operations. Importantly, given the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high-throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.
High Throughput Screening For Hazard and Risk of Environmental Contaminants
High throughput toxicity testing provides detailed mechanistic information on the concentration response of environmental contaminants in numerous potential toxicity pathways. High throughput screening (HTS) has several key advantages: (1) expense orders of magnitude less than an...
[Prospects for applications in human health of nanopore-based sequencing].
Audebert, Christophe; Hot, David; Caboche, Ségolène
2018-04-01
High-throughput sequencing has opened up new clinical opportunities, moving towards precision medicine. In oncology, infectious diseases, and human genomics, many applications have been developed in recent years. The introduction of a third generation of nanopore-based sequencing technology, addressing some of the weaknesses of the previous generation, heralds a new revolution. With portability, real-time analysis, long reads, and marginal investment costs, these promising new technologies point to a new paradigm shift. What perspectives do nanopores open up for clinical applications? © 2018 médecine/sciences – Inserm.
Rational Methods for the Selection of Diverse Screening Compounds
Huggins, David J.; Venkitaraman, Ashok R.; Spring, David R.
2016-01-01
Traditionally a pursuit of large pharmaceutical companies, high-throughput screening assays are becoming increasingly common within academic and government laboratories. This shift has been instrumental in enabling projects that have not been commercially viable, such as chemical probe discovery and screening against high risk targets. Once an assay has been prepared and validated, it must be fed with screening compounds. Crafting a successful collection of small molecules for screening poses a significant challenge. An optimized collection will minimize false positives whilst maximizing hit rates of compounds that are amenable to lead generation and optimization. Without due consideration of the relevant protein targets and the downstream screening assays, compound filtering and selection can fail to explore the great extent of chemical diversity and eschew valuable novelty. Herein, we discuss the different factors to be considered and methods that may be employed when assembling a structurally diverse compound screening collection. Rational methods for selecting diverse chemical libraries are essential for their effective use in high-throughput screens. PMID:21261294
Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi
2016-09-01
Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.
Information management systems for pharmacogenomics.
Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko
2002-09-01
The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make the data accessible to the laboratory scientist or the clinician. In this review, these challenges and current information technology solutions associated with the management, storage, and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.
Wang, Meng; Li, Sijin; Zhao, Huimin
2016-01-01
The development of high-throughput phenotyping tools is lagging far behind the rapid advances of genotype generation methods. To bridge this gap, we report a new strategy for design, construction, and fine-tuning of intracellular-metabolite-sensing/regulation gene circuits by repurposing bacterial transcription factors and eukaryotic promoters. As proof of concept, we systematically investigated the design and engineering of bacterial repressor-based xylose-sensing/regulation gene circuits in Saccharomyces cerevisiae. We demonstrated that numerous properties, such as induction ratio and dose-response curve, can be fine-tuned at three different nodes, including repressor expression level, operator position, and operator sequence. By applying these gene circuits, we developed a cell sorting based, rapid and robust high-throughput screening method for xylose transporter engineering and obtained a sugar transporter HXT14 mutant with 6.5-fold improvement in xylose transportation capacity. This strategy should be generally applicable and highly useful for evolutionary engineering of proteins, pathways, and genomes in S. cerevisiae. © 2015 Wiley Periodicals, Inc.
Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection
NASA Astrophysics Data System (ADS)
Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan
2018-03-01
X-ray imaging is an important tool for quality control, since it allows the interior of products to be inspected in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed in which the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high-quality reconstructions of 400 × 400 reconstruction pixels within 200 ms, thereby meeting the high-throughput criteria.
Green, Anthony P; Turner, Nicholas J; O'Reilly, Elaine
2014-09-26
The widespread application of ω-transaminases as biocatalysts for chiral amine synthesis has been hampered by fundamental challenges, including unfavorable equilibrium positions and product inhibition. Herein, an efficient process that allows reactions to proceed in high conversion in the absence of by-product removal using only one equivalent of a diamine donor (ortho-xylylenediamine) is reported. This operationally simple method is compatible with the most widely used (R)- and (S)-selective ω-TAs and is particularly suitable for the conversion of substrates with unfavorable equilibrium positions (e.g., 1-indanone). Significantly, spontaneous polymerization of the isoindole by-product generates colored derivatives, providing a high-throughput screening platform to identify desired ω-TA activity. © 2014 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Weidenhof, B; Reiser, M; Stöwe, K; Maier, W F; Kim, M; Azurdia, J; Gulari, E; Seker, E; Barks, A; Laine, R M
2009-07-08
We describe here the use of liquid-feed flame spray pyrolysis (LF-FSP) to produce high-surface-area, nonporous, mixed-metal oxide nanopowders that were subsequently subjected to high-throughput screening to assess a set of materials for deNOₓ catalysis and hydrocarbon combustion. We were able to easily screen some 40 LF-FSP produced materials. LF-FSP produces nanopowders that very often consist of kinetic rather than thermodynamic phases. Such materials are difficult to access or are completely inaccessible via traditional catalyst preparation methods. Indeed, our studies identified a set of Ce₁₋ₓZrₓO₂ and Al₂O₃-Ce₁₋ₓZrₓO₂ nanopowders that offer surprisingly good activities for both NOₓ reduction and propane/propene oxidation, both in high-throughput screening and in continuous-flow catalytic studies. All of these catalysts offer activities comparable to traditional Pt/Al₂O₃ catalysts but without Pt. Thus, although Pt-free, they are quite active for several extremely important emission control reactions, especially considering that these are only first-generation materials. Indeed, efforts to dope the active catalysts with Pt actually led to lower catalytic activities. Thus the potential exists to completely change the materials used in emission control devices, especially for high-temperature reactions, as these materials have already been exposed to 1500 °C; however, much research must be done before this potential is verified.
NASA Astrophysics Data System (ADS)
Malloy, Matt; Thiel, Brad; Bunday, Benjamin D.; Wurm, Stefan; Jindal, Vibhu; Mukhtar, Maseeh; Quoi, Kathy; Kemen, Thomas; Zeidler, Dirk; Eberle, Anna Lena; Garbowski, Tomasz; Dellemann, Gregor; Peters, Jan Hendrik
2015-09-01
The new device architectures and materials being introduced for sub-10nm manufacturing, combined with the complexity of multiple patterning and the need for improved hotspot detection strategies, have pushed current wafer inspection technologies to their limits. In parallel, gaps in mask inspection capability are growing as new generations of mask technologies are developed to support these sub-10nm wafer manufacturing requirements. In particular, the challenges associated with nanoimprint and extreme ultraviolet (EUV) mask inspection require new strategies that enable fast inspection at high sensitivity. The tradeoffs between sensitivity and throughput for optical and e-beam inspection are well understood. Optical inspection offers the highest throughput and is the current workhorse of the industry for both wafer and mask inspection. E-beam inspection offers the highest sensitivity but has historically lacked the throughput required for widespread adoption in the manufacturing environment. It is unlikely that continued incremental improvements to either technology will meet tomorrow's requirements, and therefore a new inspection technology approach is required; one that combines the high-throughput performance of optical with the high-sensitivity capabilities of e-beam inspection. To support the industry in meeting these challenges SUNY Poly SEMATECH has evaluated disruptive technologies that can meet the requirements for high volume manufacturing (HVM), for both the wafer fab [1] and the mask shop. High-speed massively parallel e-beam defect inspection has been identified as the leading candidate for addressing the key gaps limiting today's patterned defect inspection techniques. As of late 2014 SUNY Poly SEMATECH completed a review, system analysis, and proof of concept evaluation of multiple e-beam technologies for defect inspection. A champion approach has been identified based on a multibeam technology from Carl Zeiss.
This paper includes a discussion of the need for high-speed e-beam inspection and then provides initial imaging results from EUV masks and wafers from 61- and 91-beam demonstration systems. Progress towards high resolution and consistent intentional defect arrays (IDA) is also shown.
Generation of genetically modified mice using CRISPR/Cas9 and haploid embryonic stem cell systems
JIN, Li-Fang; LI, Jin-Song
2016-01-01
With the development of high-throughput sequencing technology in the post-genomic era, researchers have concentrated their efforts on elucidating the relationships between genes and their corresponding functions. Recently, important progress has been achieved in the generation of genetically modified mice based on CRISPR/Cas9 and haploid embryonic stem cell (haESC) approaches, which provide new platforms for gene function analysis, human disease modeling, and gene therapy. Here, we review the CRISPR/Cas9 and haESC technology for the generation of genetically modified mice and discuss the key challenges in the application of these approaches. PMID:27469251
Lifetime Assessment of the NEXT Ion Thruster
NASA Technical Reports Server (NTRS)
VanNoord, Jonathan L.
2010-01-01
Ion thrusters are low thrust, high specific impulse devices with required operational lifetimes on the order of 10,000 to 100,000 hr. The NEXT ion thruster is the latest generation of ion thrusters under development. The NEXT ion thruster currently has a qualification level propellant throughput requirement of 450 kg of xenon, which corresponds to roughly 22,000 hr of operation at the highest throttling point. Currently, a NEXT engineering model ion thruster with prototype model ion optics is undergoing a long duration test to determine wear characteristics and establish propellant throughput capability. The NEXT thruster includes many improvements over previous generations of ion thrusters, but two of its component improvements have a larger effect on thruster lifetime. These include the ion optics with tighter tolerances, a masked region and better gap control, and the discharge cathode keeper material change to graphite. Data from the NEXT 2000 hr wear test, the NEXT long duration test, and further analysis is used to determine the expected lifetime of the NEXT ion thruster. This paper will review the predictions for all of the anticipated failure mechanisms. The mechanisms will include wear of the ion optics and cathode's orifice plate and keeper from the plasma, depletion of low work function material in each cathode's insert, and spalling of material in the discharge chamber leading to arcing. Based on the analysis of the NEXT ion thruster, the first failure mode for operation above a specific impulse of 2000 sec is expected to be the structural failure of the ion optics at 750 kg of propellant throughput, 1.7 times the qualification requirement. An assessment based on mission analyses for operation below a specific impulse of 2000 sec indicates that the NEXT thruster is capable of double the propellant throughput required by these missions.
High Throughput Transcriptomics: From screening to pathways
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
NASA Astrophysics Data System (ADS)
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-12-01
Li-ion batteries are a key technology for addressing the global challenge of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage, and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surfaces and interfaces, doping and metal mixing, and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials through intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.
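A screening flow of the kind proposed, filtering candidates by bulk descriptors before any costlier structure-property checks, can be sketched as a staged filter. The property names, units, and cutoffs below are illustrative assumptions for the sketch, not values taken from the review:

```python
# Staged screening sketch: candidates are filtered by cheap bulk descriptors
# first; only survivors would proceed to more expensive structure-property
# calculations. All property names and cutoffs here are illustrative.
CANDIDATES = [
    {"name": "A", "capacity": 180, "voltage": 3.9, "volume_change": 0.03},
    {"name": "B", "capacity": 120, "voltage": 3.2, "volume_change": 0.02},
    {"name": "C", "capacity": 200, "voltage": 4.1, "volume_change": 0.12},
]

STAGES = [
    ("capacity", lambda m: m["capacity"] >= 150),             # mAh/g
    ("voltage", lambda m: 3.5 <= m["voltage"] <= 4.5),        # V vs. Li/Li+
    ("volume_change", lambda m: m["volume_change"] <= 0.05),  # fractional
]

def screen(candidates):
    """Apply each filter stage in turn and return surviving material names."""
    survivors = list(candidates)
    for _stage_name, keep in STAGES:
        survivors = [m for m in survivors if keep(m)]
    return [m["name"] for m in survivors]
```

Ordering the stages from cheapest to most expensive descriptor is what makes such a flow path practical at high throughput.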
Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P; Panitz, Frank; Bendixen, Christian; Nielsen, Rasmus; Willerslev, Eske
2007-02-14
The invention of the Genome Sequencer 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot be subsequently assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing through the high-throughput Genome Sequencer 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5' tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (mis-assignment rate < 0.4%). Therefore, the method enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5' primer tagging is a useful method by which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products.
The new approach will be of value to a broad range of research areas, such as those of comparative genomics, complete mitochondrial analyses, population genetics, and phylogenetics.
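The source-assignment step described above, tracing each read back to its specimen through its 5' tag, can be sketched as a small demultiplexing routine. The tag sequences and tag length below are hypothetical examples, not the tags used in the study:

```python
# Minimal sketch of 5'-tag demultiplexing: each read is assigned to its
# source specimen by an exact lookup of its leading nucleotides in a tag
# table. Tag sequences and tag length are hypothetical examples.
TAGS = {
    "ACGT": "specimen_1",
    "CTAG": "specimen_2",
    "GTCA": "specimen_3",
}
TAG_LEN = 4

def demultiplex(reads):
    """Group reads by specimen; reads with unknown tags are set aside."""
    groups = {name: [] for name in TAGS.values()}
    unassigned = []
    for read in reads:
        source = TAGS.get(read[:TAG_LEN])
        if source is None:
            unassigned.append(read)
        else:
            groups[source].append(read[TAG_LEN:])  # strip the tag
    return groups, unassigned
```

In practice an error-tolerant match (e.g., allowing one mismatch between tags separated by a sufficient Hamming distance) would replace the exact prefix lookup, which is what makes sequencing anomalies recoverable.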
High Throughput Experimental Materials Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakutayev, Andriy; Perkins, John; Schwarting, Marcus
The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable the discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).
20180311 - High Throughput Transcriptomics: From screening to pathways (SOT 2018)
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
Hoeflinger, Jennifer L; Hoeflinger, Daniel E; Miller, Michael J
2017-01-01
Herein, an open-source method to generate quantitative bacterial growth data from high-throughput microplate assays is described. The bacterial lag time, maximum specific growth rate, doubling time, and delta OD are reported. Our method was validated using carbohydrate utilization by lactobacilli, and visual inspection revealed that 94% of regressions were deemed excellent. Copyright © 2016 Elsevier B.V. All rights reserved.
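The four reported parameters can be estimated from an OD time series along the following lines. This is a minimal sketch, not the published method: it takes the maximum specific growth rate as the steepest log-linear slope over a sliding window, and the window size and absence of baseline correction are illustrative simplifications:

```python
import math

def growth_parameters(times, ods, window=3):
    """Estimate growth parameters from a microplate OD time series (sketch).

    mu_max is the steepest least-squares slope of ln(OD) over a sliding
    window; the lag time is where that tangent line crosses the initial
    ln(OD); delta OD is the total rise above the starting reading.
    """
    ln_od = [math.log(od) for od in ods]
    best_mu, best_i = 0.0, 0
    for i in range(len(times) - window + 1):
        t, y = times[i:i + window], ln_od[i:i + window]
        tbar, ybar = sum(t) / window, sum(y) / window
        slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
                 / sum((ti - tbar) ** 2 for ti in t))
        if slope > best_mu:
            best_mu, best_i = slope, i
    doubling = math.log(2) / best_mu if best_mu > 0 else float("inf")
    # Lag time: extrapolate the mu_max tangent back to the starting ln(OD).
    intercept = ln_od[best_i] - best_mu * times[best_i]
    lag = (ln_od[0] - intercept) / best_mu if best_mu > 0 else 0.0
    return {"mu_max": best_mu, "doubling_time": doubling,
            "lag_time": lag, "delta_od": max(ods) - ods[0]}
```

For a purely exponential culture the sketch recovers the true rate, a doubling time of ln(2)/mu, and zero lag.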
The SOLAR-C Mission: Plan B Payload Concept
NASA Astrophysics Data System (ADS)
Shimizu, T.; Sakao, T.; Katsukawa, Y.; Group, J. S. W.
2012-08-01
The telescope concepts for the SOLAR-C Plan B mission, as of the time of the Hinode-3 meeting, were briefly presented to invite comments from the international solar physics community. The telescope candidates are: 1) a near-IR-visible-UV telescope with a 1.5 m aperture and enhanced spectro-polarimetric capability, 2) a UV/EUV high-throughput spectrometer, and 3) a next-generation X-ray telescope.
Hypoxia-sensitive reporter system for high-throughput screening.
Tsujita, Tadayuki; Kawaguchi, Shin-ichi; Dan, Takashi; Baird, Liam; Miyata, Toshio; Yamamoto, Masayuki
2015-02-01
The induction of anti-hypoxic stress enzymes and proteins has the potential to be a potent therapeutic strategy to prevent the progression of ischemic heart, kidney or brain diseases. To realize this idea, small chemical compounds, which mimic hypoxic conditions by activating the PHD-HIF-α system, have been developed. However, to date, none of these compounds were identified by monitoring the transcriptional activation of hypoxia-inducible factors (HIFs). Thus, to facilitate the discovery of potent inducers of HIF-α, we have developed an effective high-throughput screening (HTS) system to directly monitor the output of HIF-α transcription. We generated a HIF-α-dependent reporter system that responds to hypoxic stimuli in a concentration- and time-dependent manner. This system was developed through multiple optimization steps, resulting in the generation of a construct that consists of the secretion-type luciferase gene (Metridia luciferase, MLuc) under the transcriptional regulation of an enhancer containing 7 copies of 40-bp hypoxia responsive element (HRE) upstream of a mini-TATA promoter. This construct was stably integrated into the human neuroblastoma cell line, SK-N-BE(2)c, to generate a reporter system, named SKN:HRE-MLuc. To improve this system and to increase its suitability for the HTS platform, we incorporated the next generation luciferase, Nano luciferase (NLuc), whose longer half-life provides us with flexibility for the use of this reporter. We thus generated a stably transformed clone with NLuc, named SKN:HRE-NLuc, and found that it showed significantly improved reporter activity compared to SKN:HRE-MLuc. In this study, we have successfully developed the SKN:HRE-NLuc screening system as an efficient platform for future HTS.
High Throughput Determination of Critical Human Dosing Parameters (SOT)
High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data int...
High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e, reverse toxicokinetics or RTK) is used in order to convert high throughput in vitro toxicity screening (HTS) da...
Optimization of high-throughput nanomaterial developmental toxicity testing in zebrafish embryos
Nanomaterial (NM) developmental toxicities are largely unknown. With an extensive variety of NMs available, high-throughput screening methods may be of value for initial characterization of potential hazard. We optimized a zebrafish embryo test as an in vivo high-throughput assay...
A high-throughput assay for enzymatic polyester hydrolysis activity by fluorimetric detection.
Wei, Ren; Oeser, Thorsten; Billig, Susan; Zimmermann, Wolfgang
2012-12-01
A fluorimetric assay for the fast determination of the activity of polyester-hydrolyzing enzymes in a large number of samples has been developed. Terephthalic acid (TPA) is a main product of the enzymatic hydrolysis of polyethylene terephthalate (PET), a synthetic polyester. Terephthalate has been quantified following its conversion to the fluorescent 2-hydroxyterephthalate by an iron autoxidation-mediated generation of free hydroxyl radicals. The assay proved to be robust at different buffer concentrations, reaction times, pH values, and in the presence of proteins. A validation of the assay was performed by analyzing TPA formation from PET films and nanoparticles catalyzed by a polyester hydrolase from Thermobifida fusca KW3 in a 96-well microplate format. The results showed a close correlation (R² = 0.99) with those obtained by a considerably more tedious and time-consuming HPLC method, suggesting the aptness of the fluorimetric assay for a high-throughput screening for polyester hydrolases. The method described in this paper will facilitate the detection and development of biocatalysts for the modification and degradation of synthetic polymers. The fluorimetric assay can be used to quantify the amount of TPA obtained as the final degradation product of the enzymatic hydrolysis of PET. In a microplate format, this assay can be applied for the high-throughput screening of polyester hydrolases. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
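Quantification in such an assay reduces to fitting a calibration line from TPA standards and inverting it for each well's fluorescence reading. The calibration values below are hypothetical, for illustration only:

```python
def fit_standard_curve(concentrations, signals):
    """Least-squares line signal = a * conc + b from calibration standards."""
    n = len(concentrations)
    cbar, sbar = sum(concentrations) / n, sum(signals) / n
    a = (sum((c - cbar) * (s - sbar) for c, s in zip(concentrations, signals))
         / sum((c - cbar) ** 2 for c in concentrations))
    b = sbar - a * cbar
    return a, b

def quantify(signal, a, b):
    """Invert the standard curve to report a TPA concentration."""
    return (signal - b) / a
```

With hypothetical standards at 0, 10, and 20 µM giving signals of 5, 25, and 45 fluorescence units, the fit yields a slope of 2 and intercept of 5, so a well reading 25 units reports 10 µM TPA.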
Drosophila melanogaster as a High-Throughput Model for Host-Microbiota Interactions.
Trinder, Mark; Daisley, Brendan A; Dube, Josh S; Reid, Gregor
2017-01-01
Microbiota research often assumes that differences in abundance and identity of microorganisms have unique influences on host physiology. To test this concept mechanistically, germ-free mice are colonized with microbial communities to assess causation. Due to the cost, infrastructure challenges, and time-consuming nature of germ-free mouse models, an alternative approach is needed to investigate host-microbial interactions. Drosophila melanogaster (fruit flies) can be used as a high throughput in vivo screening model of host-microbiome interactions as they are affordable, convenient, and replicable. D. melanogaster were essential in discovering components of the innate immune response to pathogens. However, axenic D. melanogaster can easily be generated for microbiome studies without the need for ethical considerations. The simplified microbiota structure enables researchers to evaluate permutations of how each microbial species within the microbiota contribute to host phenotypes of interest. This enables the possibility of thorough strain-level analysis of host and microbial properties relevant to physiological outcomes. Moreover, a wide range of mutant D. melanogaster strains can be affordably obtained from public stock centers. Given this, D. melanogaster can be used to identify candidate mechanisms of host-microbe symbioses relevant to pathogen exclusion, innate immunity modulation, diet, xenobiotics, and probiotic/prebiotic properties in a high throughput manner. This perspective comments on the most promising areas of microbiota research that could immediately benefit from using the D. melanogaster model.
Nisisako, Takasi; Ando, Takuya; Hatsuzawa, Takeshi
2012-09-21
This study describes a microfluidic platform with coaxial annular world-to-chip interfaces for high-throughput production of single and compound emulsion droplets, having controlled sizes and internal compositions. The production module consists of two distinct elements: a planar square chip on which many copies of a microfluidic droplet generator (MFDG) are arranged circularly, and a cubic supporting module with coaxial annular channels for supplying fluids evenly to the inlets of the mounted chip, assembled from blocks with cylinders and holes. Three-dimensional flow was simulated to evaluate the distribution of flow velocity in the coaxial multiple annular channels. By coupling a 1.5 cm × 1.5 cm microfluidic chip with 144 parallelized MFDGs and a supporting module with two annular channels, for example, we could produce simple oil-in-water (O/W) emulsion droplets having a mean diameter of 90.7 μm and a coefficient of variation (CV) of 2.2% at a throughput of 180.0 mL h⁻¹. Furthermore, we successfully demonstrated high-throughput production of Janus droplets, double emulsions and triple emulsions, by coupling 1.5 cm × 1.5 cm to 4.5 cm × 4.5 cm microfluidic chips with 32-128 parallelized MFDGs of various geometries and supporting modules with 3-4 annular channels.
Rahi, Praveen; Prakash, Om; Shouche, Yogesh S.
2016-01-01
Matrix-assisted laser desorption/ionization time-of-flight mass-spectrometry (MALDI-TOF MS) based biotyping is an emerging technique for high-throughput and rapid microbial identification. Due to its relatively higher accuracy, comprehensive database of clinically important microorganisms and low cost compared to other microbial identification methods, MALDI-TOF MS has started replacing existing practices prevalent in clinical diagnosis. However, applicability of MALDI-TOF MS in the area of microbial ecology research is still limited, mainly due to the lack of data on non-clinical microorganisms. Intense research activities on cultivation of microbial diversity by conventional as well as by innovative and high-throughput methods have substantially increased the number of microbial species known today. This important area of research is in urgent need of rapid and reliable method(s) for characterization and de-replication of microorganisms from various ecosystems. MALDI-TOF MS based characterization, in our opinion, appears to be the most suitable technique for such studies. Reliability of the MALDI-TOF MS based identification method depends mainly on the accuracy and breadth of reference databases, which need continuous expansion and improvement. In this review, we propose a common strategy to generate a MALDI-TOF MS spectral database and advocate its sharing, and also discuss the role of MALDI-TOF MS based high-throughput microbial identification in microbial ecology studies. PMID:27625644
Lun, Aaron T L; Pagès, Hervé; Smith, Mike L
2018-05-01
Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.
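beachmat itself is a C++ interface, and nothing below is its actual API. As a language-neutral sketch of the core idea, representation-agnostic data access, the Python toy classes here (all hypothetical) show how downstream code written against one accessor works unchanged on dense and sparse backends.

```python
# Toy illustration of representation-agnostic access (not beachmat's API):
# two backends expose the same get(i, j) accessor, so consumer code does
# not need to know how values are stored.
class DenseMatrix:
    def __init__(self, rows):
        self.rows = rows            # list of row lists

    def get(self, i, j):
        return self.rows[i][j]

class SparseMatrix:
    def __init__(self, entries):
        self.entries = dict(entries)  # {(i, j): value}; zeros are implicit

    def get(self, i, j):
        return self.entries.get((i, j), 0.0)

def column_sums(mat, nrow, ncol):
    """Works on any backend exposing get(i, j)."""
    return [sum(mat.get(i, j) for i in range(nrow)) for j in range(ncol)]

dense = DenseMatrix([[1.0, 0.0], [0.0, 2.0]])
sparse = SparseMatrix({(0, 0): 1.0, (1, 1): 2.0})
assert column_sums(dense, 2, 2) == column_sums(sparse, 2, 2) == [1.0, 2.0]
```

The memory/speed trade-off the abstract mentions shows up even here: the sparse backend stores only nonzeros but pays a hash lookup per access, while the dense backend does the opposite.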
Hughes, Stephen R; Butt, Tauseef R; Bartolett, Scott; Riedmuller, Steven B; Farrelly, Philip
2011-08-01
The molecular biological techniques for plasmid-based assembly and cloning of gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. High-throughput integrated robotic molecular biology platforms that have the capacity to rapidly clone and express heterologous gene open reading frames in bacteria and yeast and to screen large numbers of expressed proteins for optimized function are an important technology for improving microbial strains for biofuel production. The process involves the production of full-length complementary DNA libraries as a source of plasmid-based clones to express the desired proteins in active form for determination of their functions. Proteins that were identified by high-throughput screening as having desired characteristics are overexpressed in microbes to enable them to perform functions that will allow more cost-effective and sustainable production of biofuels. Because the plasmid libraries are composed of several thousand unique genes, automation of the process is essential. This review describes the design and implementation of an automated integrated programmable robotic workcell capable of producing complementary DNA libraries, colony picking, isolating plasmid DNA, transforming yeast and bacteria, expressing protein, and performing appropriate functional assays. These operations will allow tailoring microbial strains to use renewable feedstocks for production of biofuels, bioderived chemicals, fertilizers, and other coproducts for profitable and sustainable biorefineries. Published by Elsevier Inc.
Multiprocessor Z-Buffer Architecture for High-Speed, High Complexity Computer Image Generation.
1983-12-01
[Front-matter list-of-figures fragment; recoverable figure titles include: Oversampling; "Poking Through" Effects; Sampling Paths; Triangle Variables; Intelligent Tiling Algorithm; Tiler Functional Blocks; HSD Interface; Tiling Machine Setup; Tiling Machine; Tile Accumulate; Sorting Machine; Effect of Triangle Size on Tiler Throughput Rates; Tiling Machine Setup Stage Performance for Oversample Mode]
Design and evaluation of 1,7-naphthyridones as novel KDM5 inhibitors.
Labadie, Sharada S; Dragovich, Peter S; Cummings, Richard T; Deshmukh, Gauri; Gustafson, Amy; Han, Ning; Harmange, Jean-Christophe; Kiefer, James R; Li, Yue; Liang, Jun; Liederer, Bianca M; Liu, Yichin; Manieri, Wanda; Mao, Weifeng; Murray, Lesley; Ortwine, Daniel F; Trojer, Patrick; VanderPorten, Erica; Vinogradova, Maia; Wen, Li
2016-09-15
Features from a high throughput screening (HTS) hit and a previously reported scaffold were combined to generate 1,7-naphthyridones as novel KDM5 enzyme inhibitors with nanomolar potencies. These molecules exhibited high selectivity over the related KDM4C and KDM2B isoforms. An X-ray co-crystal structure of a representative molecule bound to KDM5A showed that these inhibitors are competitive with the co-substrate (2-oxoglutarate or 2-OG). Copyright © 2016 Elsevier Ltd. All rights reserved.
OLEDs for lighting: new approaches
NASA Astrophysics Data System (ADS)
Duggal, Anil R.; Foust, Donald F.; Nealon, William F.; Heller, Christian M.
2004-02-01
OLED technology has improved to the point where it is now possible to envision developing OLEDs as a low cost solid state light source. In order to realize this, significant advances have to be made in device efficiency, lifetime at high brightness, high throughput fabrication, and the generation of illumination quality white light. In this talk, the requirements for general lighting will be reviewed and various approaches to meeting them will be outlined. Emphasis will be placed on a new monolithic series-connected OLED design architecture that promises scalability without high fabrication cost or design complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jianping; Sandhu, Hardev
1) The success of crop improvement programs depends largely on the extent of genetic variability available. Germplasm collections assemble all the available genetic resources and are critical for long-term crop improvement. The world sugarcane germplasm collection contains enormous genetic variability for various morphological traits, biomass yield components, adaptation, and many quality traits, and prospectively embeds a large number of valuable alleles for biofuel traits such as high biomass yield, quantity and quality of lignocellulose, stress tolerance, and nutrient use efficiency. A germplasm collection is of little value unless it is characterized and utilized for crop improvement. In this project, we phenotypically and genotypically characterized the world sugarcane germplasm collection (two papers on these results have already been published and two more are to be published). These data will be made publicly available for germplasm utilization, specifically in sugarcane and energy cane breeding programs. In addition, we are identifying the alleles contributing to biomass traits in sugarcane germplasm. This part of the project is very challenging due to the large genome and highly polyploid nature of this crop. We first established a high-throughput sugarcane genotyping pipeline in the genome and bioinformatics era (a paper was published in 2016). We identified and modified software for genome-wide association analysis of polyploid species. The results on alleles associated with biomass traits will be published soon, which will help the scientific community understand the genetic makeup of the biomass components of sugarcane. Molecular breeders can develop markers for marker-assisted selection to improve biomass traits. Further, the development and release of new energy cane cultivars through this project not only improved genetic diversity but also improved dry biomass yields and resistance to diseases.
These new cultivars were tested on marginal soils in Florida and showed very promising yield potential, which is important for the successful use of energy cane as a dedicated feedstock for lignocellulosic ethanol production. 2) Multiple techniques were utilized at different stages of the project. For example, for genotyping the whole world germplasm collection, a cheap, widely used SSR marker genotyping platform was chosen because of the large number of samples (over a thousand), even though this technique has low throughput in generating data points. Since the purpose of this genotyping was to form a core collection for subsequent high-throughput genotyping, the SSR results were sufficient to generate the core collection. To genotype the few hundred core collection accessions, a target enrichment sequencing technology was used, which is not only high throughput in generating large numbers of genotyping data points but also targets candidate genes for genotyping. The data generated should be sufficient to identify the alleles contributing to the traits of interest. All the techniques used in this project were effective, although extensive time was invested in establishing the pipeline, specifically in experimental design, data analysis, and comparison of different approaches. 3) This research can benefit the public through polyploid genotyping and the development of a new, cost-efficient genotyping platform.
Roguev, Assen; Xu, Jiewei; Krogan, Nevan
2018-02-01
This protocol describes an optimized high-throughput procedure for generating double deletion mutants in Schizosaccharomyces pombe using the colony replicating robot ROTOR HDA and the PEM (pombe epistasis mapper) system. The method is based on generating high-density colony arrays (1536 colonies per agar plate) and passaging them through a series of antidiploid and mating-type selection (ADS-MTS) and double-mutant selection (DMS) steps. Detailed program parameters for each individual replication step are provided. Using this procedure, batches of 25 or more screens can be routinely performed. © 2018 Cold Spring Harbor Laboratory Press.
A high-throughput core sampling device for the evaluation of maize stalk composition
2012-01-01
Background A major challenge in the identification and development of superior feedstocks for the production of second generation biofuels is the rapid assessment of biomass composition in a large number of samples. Currently, highly accurate and precise robotic analysis systems are available for the evaluation of biomass composition, on a large number of samples, with a variety of pretreatments. However, the lack of an inexpensive and high-throughput process for large scale sampling of biomass resources is still an important limiting factor. Our goal was to develop a simple mechanical maize stalk core sampling device that can be utilized to collect uniform samples of a dimension compatible with robotic processing and analysis, while allowing the collection of hundreds to thousands of samples per day. Results We have developed a core sampling device (CSD) to collect maize stalk samples compatible with robotic processing and analysis. The CSD facilitates the collection of thousands of uniform tissue cores consistent with high-throughput analysis required for breeding, genetics, and production studies. With a single CSD operated by one person with minimal training, more than 1,000 biomass samples were obtained in an eight-hour period. One of the main advantages of using cores is the high level of homogeneity of the samples obtained and the minimal opportunity for sample contamination. In addition, the samples obtained with the CSD can be placed directly into a bath of ice, dry ice, or liquid nitrogen maintaining the composition of the biomass sample for relatively long periods of time. Conclusions The CSD has been demonstrated to successfully produce homogeneous stalk core samples in a repeatable manner with a throughput substantially superior to the currently available sampling methods. Given the variety of maize developmental stages and the diversity of stalk diameter evaluated, it is expected that the CSD will have utility for other bioenergy crops as well. 
PMID:22548834
NASA Astrophysics Data System (ADS)
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán
2016-09-01
Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
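The funnel logic described above, a cheap learned score deciding which candidates earn an expensive quantum-chemistry calculation, can be sketched abstractly. Everything below is hypothetical (the candidate names, scores, and budget are invented); it illustrates the prioritization step only, not the study's actual models.

```python
# Sketch of a screening funnel: rank candidates by a cheap surrogate
# score and pass only the top `budget` on to expensive simulation.
def prioritize(candidates, cheap_score, budget):
    """Return the `budget` highest-scoring candidates."""
    ranked = sorted(candidates, key=cheap_score, reverse=True)
    return ranked[:budget]

# Hypothetical candidates with a fast heuristic score each.
candidates = ["mol_a", "mol_b", "mol_c", "mol_d"]
heuristic = {"mol_a": 0.2, "mol_b": 0.9, "mol_c": 0.5, "mol_d": 0.7}
shortlist = prioritize(candidates, heuristic.get, budget=2)
assert shortlist == ["mol_b", "mol_d"]   # only these get simulated
```

In the study's terms, the surrogate is a supervised model trained on prior simulations, so compute is not wasted on likely poor performers.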
The ChIP-exo Method: Identifying Protein-DNA Interactions with Near Base Pair Precision.
Perreault, Andrea A; Venters, Bryan J
2016-12-23
Chromatin immunoprecipitation (ChIP) is an indispensable tool in the fields of epigenetics and gene regulation that isolates specific protein-DNA interactions. ChIP coupled to high throughput sequencing (ChIP-seq) is commonly used to determine the genomic location of proteins that interact with chromatin. However, ChIP-seq is hampered by relatively low mapping resolution of several hundred base pairs and high background signal. The ChIP-exo method is a refined version of ChIP-seq that substantially improves upon both resolution and noise. The key distinction of the ChIP-exo methodology is the incorporation of lambda exonuclease digestion in the library preparation workflow to effectively footprint the left and right 5' DNA borders of the protein-DNA crosslink site. The ChIP-exo libraries are then subjected to high throughput sequencing. The resulting data can be leveraged to provide unique and ultra-high resolution insights into the functional organization of the genome. Here, we describe the ChIP-exo method that we have optimized and streamlined for mammalian systems and next-generation sequencing-by-synthesis platform.
Analysis of high-throughput biological data using their rank values.
Dembélé, Doulaye
2018-01-01
High-throughput biological technologies are routinely used to generate gene expression profiling or cytogenetics data. To achieve high performance, methods available in the literature have become more specialized and often require high computational resources. Here, we propose a new versatile method based on the data-ordering rank values. We use linear algebra and the Perron-Frobenius theorem, and also extend a method presented earlier for detecting differentially expressed genes to the detection of recurrent copy number aberrations. A result derived from the proposed method is a one-sample Student's t-test based on rank values. The proposed method is, to our knowledge, the only one that applies to both gene expression profiling and cytogenetics data sets. This new method is fast, deterministic, and requires a low computational load. Probabilities are associated with genes to allow a statistically significant subset selection in the data set. Stability scores are also introduced as quality parameters. The performance and comparative analyses were carried out using real data sets. The proposed method can be accessed through an R package available from the CRAN (Comprehensive R Archive Network) website: https://cran.r-project.org/web/packages/fcros .
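The rank-value idea above can be sketched in a few lines: convert each gene's fold change into a rank fraction within each comparison, then apply a one-sample t statistic to test whether a gene's mean rank fraction departs from 0.5. This is a minimal illustration of the general principle, not the fcros implementation, and all fold-change numbers are hypothetical.

```python
import math

def rank_fractions(values):
    """Rank each value among all genes in one comparison, scaled to (0, 1)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    frac = [0.0] * n
    for rank, idx in enumerate(order, start=1):
        frac[idx] = rank / (n + 1)
    return frac

def one_sample_t(xs, mu=0.5):
    """One-sample t statistic for H0: mean(xs) == mu."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)

# Hypothetical fold changes of 5 genes across 4 pairwise comparisons.
fc = [[2.1, 1.9, 1.05, 2.0],   # gene 0: mostly up-regulated
      [1.0, 0.9, 1.1, 1.0],
      [0.5, 0.95, 0.4, 0.5],   # gene 2: mostly down-regulated
      [1.2, 0.8, 1.1, 0.9],
      [1.0, 1.1, 0.9, 1.2]]
fracs = [rank_fractions([fc[g][c] for g in range(5)]) for c in range(4)]
t0 = one_sample_t([fracs[c][0] for c in range(4)])
t2 = one_sample_t([fracs[c][2] for c in range(4)])
assert t0 > 0 > t2   # consistently high ranks push t up, low ranks down
```

Working on rank fractions rather than raw values is what makes this kind of statistic fast, deterministic, and insensitive to the measurement scale.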
NASA Astrophysics Data System (ADS)
Nikapitiya, N. Y. Jagath B.; Nahar, Mun Mun; Moon, Hyejin
2017-12-01
This letter reports two novel electrode design considerations that satisfy two very important aspects of EWOD operation: (1) highly consistent volume of generated droplets and (2) highly improved accuracy in the generated droplet volume. Based on the design principles investigated, two novel designs were proposed: an L-junction electrode design offering high-throughput droplet generation, and a Y-junction electrode design that splits a droplet very quickly while maintaining equal volume in each part. Devices with the novel designs were fabricated and tested, and the results were compared with those of the conventional approach. It is demonstrated that the inaccuracy and inconsistency of droplet volume dispensed in devices with the novel electrode designs are as low as 0.17% and 0.10%, respectively, while those of the conventional approach are 25% and 0.76%, respectively. The dispensing frequency is enhanced from 4 to 9 Hz by using the novel design.
Enrichment analysis in high-throughput genomics - accounting for dependency in the NULL.
Gold, David L; Coombes, Kevin R; Wang, Jing; Mallick, Bani
2007-03-01
Translating the overwhelming amount of data generated in high-throughput genomics experiments into biologically meaningful evidence, which may for example point to a series of biomarkers or hint at a relevant pathway, is a matter of great interest in bioinformatics these days. Genes showing similar experimental profiles, it is hypothesized, share biological mechanisms that if understood could provide clues to the molecular processes leading to pathological events. It is the topic of further study to learn if or how a priori information about the known genes may serve to explain coexpression. One popular method of knowledge discovery in high-throughput genomics experiments, enrichment analysis (EA), seeks to infer if an interesting collection of genes is 'enriched' for a particular set of a priori Gene Ontology Consortium (GO) classes. For the purposes of statistical testing, the conventional methods offered in EA software implicitly assume independence between the GO classes. Genes may be annotated for more than one biological classification, and therefore the resulting test statistics of enrichment between GO classes can be highly dependent if the overlapping gene sets are relatively large. There is a need to formally determine if conventional EA results are robust to the independence assumption. We derive the exact null distribution for testing enrichment of GO classes by relaxing the independence assumption using well-known statistical theory. In applications with publicly available data sets, our test results are similar to the conventional approach which assumes independence. We argue that the independence assumption is not detrimental.
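The conventional per-class test the abstract refers to is the one-sided hypergeometric enrichment test. The sketch below implements it with standard-library combinatorics; the counts are hypothetical. Running this test separately for each GO class is precisely the independence assumption the paper examines, since overlapping classes make the resulting statistics dependent.

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance of seeing at
    least k annotated genes when n genes are drawn from N total genes, of
    which K carry the annotation (one-sided enrichment test)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Hypothetical counts: 20 of 1000 genes carry some GO class, and an
# interesting list of 50 genes happens to contain 5 of them.
p = enrichment_pvalue(N=1000, K=20, n=50, k=5)
assert 0.0 < p < 0.01   # well beyond the expected overlap of 1 gene
```

The expected overlap under the null is n*K/N = 1 gene here, so observing 5 yields a small tail probability; conventional EA tools report such p-values class by class.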
Novel organosilicone materials and patterning techniques for nanoimprint lithography
NASA Astrophysics Data System (ADS)
Pina, Carlos Alberto
Nanoimprint Lithography (NIL) is a high-throughput patterning technique that allows the fabrication of nanostructures with great precision. It has been listed on the International Technology Roadmap for Semiconductors (ITRS) as a candidate technology for future generation Si chip manufacturing. In nanoimprint lithography a resist material, e.g. a thermoplastic polymer, is placed in contact with a mold and then mechanically deformed under an applied load to transfer the nano-features on the mold surface into the resist. The success of NIL relies heavily on the capability of fabricating nanostructures on different types of materials. Thus, a key factor for NIL implementation in industrial settings is the development of advanced materials suitable as the nanoimprint resist. This dissertation focuses on the engineering of new polymer materials suitable as NIL resists. A variety of silicone-based polymer precursors were synthesized and formulated for NIL applications. High-throughput and high-yield nanopatterning was successfully achieved. Furthermore, additional capabilities of the developed materials were explored for a range of NIL applications, such as their use as flexible, UV-transparent stamps and silicon-compatible etching layers. Finally, new strategies were investigated to expand the potential of NIL. High-throughput, non-residual-layer imprinting was achieved with the newly developed resist materials. In addition, several strategies were designed for the precise control of nanoscale size patterned structures with multifunctional resist systems by post-imprinting modification of the pattern size. These developments provide NIL with a new set of tools for a variety of additional important applications.
High-throughput protein analysis integrating bioinformatics and experimental assays
del Val, Coral; Mehrle, Alexander; Falkenhahn, Mechthild; Seiler, Markus; Glatting, Karl-Heinz; Poustka, Annemarie; Suhai, Sandor; Wiemann, Stefan
2004-01-01
The wealth of transcript information that has been made publicly available in recent years requires the development of high-throughput functional genomics and proteomics approaches for its analysis. Such approaches need suitable data integration procedures and a high level of automation in order to gain maximum benefit from the results generated. We have designed an automatic pipeline to analyse annotated open reading frames (ORFs) stemming from full-length cDNAs produced mainly by the German cDNA Consortium. The ORFs are cloned into expression vectors for use in large-scale assays such as the determination of subcellular protein localization or kinase reaction specificity. Additionally, all identified ORFs undergo exhaustive bioinformatic analysis such as similarity searches, protein domain architecture determination and prediction of physicochemical characteristics and secondary structure, using a wide variety of bioinformatic methods in combination with the most up-to-date public databases (e.g. PRINTS, BLOCKS, INTERPRO, PROSITE, SWISS-PROT). Data from experimental results and from the bioinformatic analysis are integrated and stored in a relational database (MS SQL-Server), which makes it possible for researchers to find answers to biological questions easily, thereby speeding up the selection of targets for further analysis. The designed pipeline constitutes a new automatic approach to obtaining and administrating relevant biological data from high-throughput investigations of cDNAs in order to systematically identify and characterize novel genes, as well as to comprehensively describe the function of the encoded proteins. PMID:14762202
High throughput techniques to reveal the molecular physiology and evolution of digestion in spiders.
Fuzita, Felipe J; Pinkse, Martijn W H; Patane, José S L; Verhaert, Peter D E M; Lopes, Adriana R
2016-09-07
Spiders are known for their predatory efficiency and for their high capacity of digesting relatively large prey. They do this by combining both extracorporeal and intracellular digestion. Whereas many high throughput ("-omics") techniques focus on biomolecules in spider venom, so far this approach has not yet been applied to investigate the protein composition of spider midgut diverticula (MD) and digestive fluid (DF). We here report on our investigations of both the MD and DF of the spider Nephilingis (Nephilengys) cruentata through the use of next generation sequencing and shotgun proteomics. This shows that the DF is composed of a variety of hydrolases including peptidases, carbohydrases, lipases and nucleases, as well as of toxins and regulatory proteins. We detect 25 astacins in the DF. Phylogenetic analysis of the corresponding transcript(s) in Arachnida suggests that astacins have acquired an unprecedented role for extracorporeal digestion in Araneae, with different orthologs used by each family. The results of a comparative study of spiders in distinct physiological conditions allow us to propose some digestion mechanisms in this interesting animal taxon. Together, the high throughput data demonstrate that the DF is a secretion originating from the MD. We identified enzymes involved in the extracellular and intracellular phases of digestion. In addition, data analyses show a large gene duplication event in the evolution of the digestive process in Araneae, mainly of astacin genes. We were also able to identify proteins expressed and translated in the digestive system, which until now had been exclusively associated with venom glands.
Recent advances in quantitative high throughput and high content data analysis.
Moutsatsos, Ioannis K; Parker, Christian N
2016-01-01
High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as in the analysis of cell-by-cell data sets derived from image or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use of these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.