High Throughput PBTK: Open-Source Data and Tools for ...
Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy
SPIM-fluid: open source light-sheet based platform for high-throughput imaging
Gualda, Emilio J.; Pereira, Hugo; Vale, Tiago; Estrada, Marta Falcão; Brito, Catarina; Moreno, Nuno
2015-01-01
Light sheet fluorescence microscopy has recently emerged as the technique of choice for obtaining high-quality 3D images of whole organisms/embryos with low photodamage and fast acquisition rates. Here we present an open source unified implementation based on Arduino and Micromanager, which is capable of operating light sheet microscopes for automated 3D high-throughput imaging of three-dimensional cell cultures and model organisms such as zebrafish, oriented toward large-scale drug screening. PMID:26601007
Inter-Individual Variability in High-Throughput Risk Prioritization of Environmental Chemicals (Sot)
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have...
We incorporate inter-individual variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which hav...
High-throughput screening, predictive modeling and computational embryology - Abstract
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...
An image analysis toolbox for high-throughput C. elegans assays
Wählby, Carolina; Kamentsky, Lee; Liu, Zihan H.; Riklin-Raviv, Tammy; Conery, Annie L.; O’Rourke, Eyleen J.; Sokolnicki, Katherine L.; Visvikis, Orane; Ljosa, Vebjorn; Irazoqui, Javier E.; Golland, Polina; Ruvkun, Gary; Ausubel, Frederick M.; Carpenter, Anne E.
2012-01-01
We present a toolbox for high-throughput screening of image-based Caenorhabditis elegans phenotypes. The image analysis algorithms measure morphological phenotypes in individual worms and are effective for a variety of assays and imaging systems. This WormToolbox is available via the open-source CellProfiler project and enables objective scoring of whole-animal high-throughput image-based assays of C. elegans for the study of diverse biological pathways relevant to human disease. PMID:22522656
High-throughput screening, predictive modeling and computational embryology
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to profile thousands of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition to projects worldwide,...
Niland, Courtney N.; Jankowsky, Eckhard; Harris, Michael E.
2016-01-01
Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High Throughput Sequencing Kinetics (HTS-Kin) uses high-throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigate the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we find that high-throughput sequencing and experimental reproducibility contribute their own sources of error, and these are the main sources of imprecision in the quantified results when otherwise-optimized guidelines are followed. PMID:27296633
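The internal-competition logic behind HTS-Kin can be sketched numerically: when many substrates react in the same tube, the ratio of log-fractions remaining gives each substrate's rate constant relative to a reference substrate. A minimal sketch under that assumption (function name and read counts are illustrative, not from the paper):

```python
import math

def relative_rate_constant(reads_t0, reads_t, ref_t0, ref_t):
    """Relative rate constant k_i/k_ref from sequencing read counts,
    using internal competition: k_i/k_ref = ln(f_i) / ln(f_ref),
    where f is the fraction of substrate remaining at time t."""
    f_i = reads_t / reads_t0
    f_ref = ref_t / ref_t0
    return math.log(f_i) / math.log(f_ref)

# A substrate half-consumed while the reference drops to one quarter
# reacts at half the reference rate.
k_rel = relative_rate_constant(1000, 500, 1000, 250)
```

Because every substrate is normalized to the same internal reference, pipetting and timing errors cancel out of the ratio, which is what makes the competition format robust.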
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
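At its core, automated colony counting reduces to connected-component labeling of a thresholded image. A minimal pure-Python sketch of that idea on a toy binary grid (this is an illustration of the principle, not the actual Cell Colony Edge macro or CellProfiler pipeline):

```python
def count_colonies(grid):
    """Count 4-connected regions of 1s in a binary image grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]  # flood-fill one colony
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

plate = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
n = count_colonies(plate)
```

Real tools add segmentation, edge detection, and per-object measurements (area, intensity) on top of this labeling step.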
A novel assay for monoacylglycerol hydrolysis suitable for high-throughput screening.
Brengdahl, Johan; Fowler, Christopher J
2006-12-01
A simple assay for monoacylglycerol hydrolysis suitable for high-throughput screening is described. The assay uses [(3)H]2-oleoylglycerol as substrate, with the tritium label in the glycerol part of the molecule and the use of phenyl sepharose gel to separate the hydrolyzed product ([(3)H]glycerol) from substrate. Using cytosolic fractions derived from rat cerebella as a source of hydrolytic activity, the assay gives the appropriate pH profile and sensitivity to inhibition with compounds known to inhibit hydrolysis of this substrate. The assay could also be adapted to a 96-well plate format, using C6 cells as the source of hydrolytic activity. Thus the assay is simple and appropriate for high-throughput screening of inhibitors of monoacylglycerol hydrolysis.
Thousands of chemicals have been profiled by high-throughput screening (HTS) programs such as ToxCast and Tox21; these chemicals are tested in part because most of them have limited or no data on hazard, exposure, or toxicokinetics (TK). While HTS generates in vitro bioactivity d...
Wu, Jianglai; Tang, Anson H. L.; Mok, Aaron T. Y.; Yan, Wenwei; Chan, Godfrey C. F.; Wong, Kenneth K. Y.; Tsia, Kevin K.
2017-01-01
Apart from spatial resolution enhancement, scaling the temporal resolution, and equivalently the imaging throughput, of fluorescence microscopy is of equal importance in advancing cell biology and clinical diagnostics. Yet this attribute has mostly been overlooked because of the inherent speed limitation of existing imaging strategies. To address the challenge, we employ an all-optical laser-scanning mechanism, enabled by an array of reconfigurable spatiotemporally encoded virtual sources, to demonstrate ultrafast fluorescence microscopy at a line-scan rate as high as 8 MHz. We show that this technique enables high-throughput single-cell microfluidic fluorescence imaging at 75,000 cells/second and high-speed cellular 2D dynamical imaging at 3,000 frames per second, outperforming state-of-the-art high-speed cameras and gold-standard laser-scanning strategies. Together with its wide compatibility with existing imaging modalities, this technology could empower new forms of high-throughput, high-speed biological fluorescence microscopy that were previously out of reach. PMID:28966855
Image Harvest: an open-source platform for high-throughput plant image processing and analysis
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
2016-01-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
Detecting adulterants in milk powder using high-throughput Raman chemical imaging
USDA-ARS's Scientific Manuscript database
This study used a line-scan high-throughput Raman imaging system to authenticate milk powder. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source. The system was used to acquire hyperspectral Raman images in a wavenumber range of 103–2881 cm-1 from the skim milk...
USDA-ARS's Scientific Manuscript database
Milk is a vulnerable target for economically motivated adulteration. In this study, a line-scan high-throughput Raman imaging system was used to authenticate milk powder. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source. The system was used to acquire hypersp...
Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome
2014-04-25
In this paper we describe the current state of high-throughput virtual screening. We present a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one million compounds on the Jaguar Cray XK6 supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the pre-docking file preparation and post-docking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
High-radiance LDP source for mask inspection and beam line applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Teramoto, Yusuke; Santos, Bárbara; Mertens, Guido; Kops, Ralf; Kops, Margarete; von Wezyk, Alexander; Bergmann, Klaus; Yabuta, Hironobu; Nagano, Akihisa; Ashizawa, Noritaka; Taniguchi, Yuta; Yamatani, Daiki; Shirai, Takahiro; Kasama, Kunihiko
2017-04-01
High-throughput actinic mask inspection tools are needed as EUVL enters the volume-production phase. One of the key technologies needed to realize such inspection tools is a high-radiance EUV source, with a target radiance as high as 100 W/mm2/sr. Ushio is developing laser-assisted discharge-produced plasma (LDP) sources. Ushio's LDP source provides sufficient radiance as well as cleanliness, stability, and reliability. Radiance behind the debris mitigation system was confirmed to be 120 W/mm2/sr at 9 kHz, and peak radiance at the plasma was increased to over 200 W/mm2/sr in recent development, which supports high-throughput, high-precision mask inspection in current and future technology nodes. One of the unique features of Ushio's LDP source is cleanliness. Cleanliness evaluation using both grazing-incidence Ru mirrors and normal-incidence Mo/Si mirrors showed no considerable damage to the mirrors other than smooth sputtering of the surface at a pace of a few nm per Gpulse. To prove system reliability, several long-term tests were performed. Data recorded during the tests were analyzed to assess two-dimensional radiance stability. In addition, several operating parameters were monitored to determine which of them contribute to radiance stability. The latest model, which features a large opening angle, was recently developed so that the tool can utilize a large number of debris-free photons behind the debris shield. The model was designed for both beam line applications and high-throughput mask inspection. At the time of publication, the first product is expected to be in use at the customer site.
2013-03-01
High-Throughput Mosquito and Fly Bioassay. Authors: … A. Allan, Todd W. Walker, Christopher J. Geden, Jerome A. Hogsette, and Kenneth J. Linthicum. Source: Journal of the American Mosquito Control Association.
Condor-COPASI: high-throughput computing for biochemical networks
2012-01-01
Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945
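The pattern Condor-COPASI exploits — transparently splitting a large analysis into independent chunks, running them on a pool, and gathering the results — can be sketched with Python's standard library (the worker function and pool type here are illustrative stand-ins, not Condor's actual scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(param):
    # Stand-in for one computationally expensive model simulation.
    return param * param

def run_scan(params, workers=4):
    """Fan a parameter scan out to a worker pool; results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, params))

results = run_scan(range(6))
```

Because each simulation is independent, the split is embarrassingly parallel; a Condor pool applies the same idea across many machines rather than threads.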
Shapaval, V; Møretrø, T; Wold Åsli, A; Suso, H P; Schmitt, J; Lillehaug, D; Kohler, A
2017-05-01
Microbiological source tracking (MST) for the food industry is a rapidly growing area of research and technology development. In this paper, a new library-independent approach for MST is presented. It is based on high-throughput liquid microcultivation and FTIR spectroscopy. In this approach, FTIR spectra obtained from micro-organisms isolated along the production line and from a product are compared to each other. We tested and evaluated the new source tracking approach by simulating a source tracking situation. In this simulation study, a selection of 20 spoilage mould strains from a total of six genera (Alternaria, Aspergillus, Mucor, Paecilomyces, Peyronellaea and Phoma) was used. The simulation showed that 80-100% of the sources could be correctly identified at the genus/species level. When performing source tracking simulations, the FTIR identification diverged for a Phoma glomerata strain in the reference collection. When the strain was re-identified by sequencing, it turned out to be Peyronellaea arachidicola. The results demonstrate that the proposed approach is a versatile tool for identifying sources of microbial contamination. Thus, it has high potential for routine control in the food industry owing to its low cost and short analysis time. Source tracking of fungal contamination in the food industry is an important aspect of food safety. Currently, all available methods are time consuming and require a reference library, which may limit the accuracy of identification. In this study, we report for the first time a library-independent FTIR spectroscopic approach for MST of fungal contamination along the food production line. It combines high-throughput microcultivation with FTIR spectroscopy and is specific at the genus and species level. Such an approach is therefore of great importance for food safety control in the food industry. © 2016 The Society for Applied Microbiology.
High-throughput screening of filamentous fungi using nanoliter-range droplet-based microfluidics
NASA Astrophysics Data System (ADS)
Beneyton, Thomas; Wijaya, I. Putu Mahendra; Postros, Prexilia; Najah, Majdi; Leblond, Pascal; Couvent, Angélique; Mayot, Estelle; Griffiths, Andrew D.; Drevelle, Antoine
2016-06-01
Filamentous fungi are an extremely important source of industrial enzymes because of their capacity to secrete large quantities of proteins. Currently, functional screening of fungi is associated with low throughput and high costs, which severely limits the discovery of novel enzymatic activities and better production strains. Here, we describe a nanoliter-range droplet-based microfluidic system specially adapted for high-throughput screening (HTS) of large filamentous fungi libraries for secreted enzyme activities. The platform allowed (i) compartmentalization of single spores in ~10 nl droplets, (ii) germination and mycelium growth and (iii) high-throughput sorting of fungi based on enzymatic activity. A 10^4-clone UV-mutated library of Aspergillus niger was screened based on α-amylase activity in just 90 minutes. Active clones were enriched 196-fold after a single round of microfluidic HTS. The platform is a powerful tool for the development of new production strains, with a low cost, space and time footprint, and should bring enormous benefit for improving the viability of biotechnological processes.
Acquisition of gamma camera and physiological data by computer.
Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H
1986-11-01
We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high-speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.
Gene Ontology annotations at SGD: new data sources and annotation methods
Hong, Eurie L.; Balakrishnan, Rama; Dong, Qing; Christie, Karen R.; Park, Julie; Binkley, Gail; Costanzo, Maria C.; Dwight, Selina S.; Engel, Stacia R.; Fisk, Dianna G.; Hirschman, Jodi E.; Hitz, Benjamin C.; Krieger, Cynthia J.; Livstone, Michael S.; Miyasato, Stuart R.; Nash, Robert S.; Oughtred, Rose; Skrzypek, Marek S.; Weng, Shuai; Wong, Edith D.; Zhu, Kathy K.; Dolinski, Kara; Botstein, David; Cherry, J. Michael
2008-01-01
The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org/) collects and organizes biological information about the chromosomal features and gene products of the budding yeast Saccharomyces cerevisiae. Although published data from traditional experimental methods are the primary sources of evidence supporting Gene Ontology (GO) annotations for a gene product, high-throughput experiments and computational predictions can also provide valuable insights in the absence of an extensive body of literature. Therefore, GO annotations available at SGD now include high-throughput data as well as computational predictions provided by the GO Annotation Project (GOA UniProt; http://www.ebi.ac.uk/GOA/). Because the annotation method used to assign GO annotations varies by data source, GO resources at SGD have been modified to distinguish data sources and annotation methods. In addition to providing information for genes that have not been experimentally characterized, GO annotations from independent sources can be compared to those made by SGD to help keep the literature-based GO annotations current. PMID:17982175
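Distinguishing annotation sources and methods, as SGD does, amounts to carrying evidence metadata with every annotation and filtering on it. A hypothetical sketch (record fields, gene names, and method labels are invented for illustration, not SGD's actual schema):

```python
from collections import namedtuple

# One GO annotation with its provenance attached.
Annotation = namedtuple("Annotation", "gene go_term source method")

annotations = [
    Annotation("YFG1", "GO:0006412", "SGD", "literature"),
    Annotation("YFG1", "GO:0005737", "GOA UniProt", "computational"),
    Annotation("YFG2", "GO:0016070", "SGD", "high-throughput"),
]

def by_method(records, method):
    """Keep only annotations assigned by a given annotation method."""
    return [a for a in records if a.method == method]

manual = by_method(annotations, "literature")
```

Keeping provenance alongside each annotation is what lets literature-based, high-throughput, and computational assignments be compared or kept current independently.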
Cao, K F; Zhang, H H; Han, H H; Song, Y; Bai, X L; Sun, H
2016-05-01
In this study, we comprehensively investigated the effect of dietary protein source on the gut microbiome of weaned piglets using high-throughput 16S rRNA gene-based Illumina MiSeq sequencing. A total of 48 healthy weaned piglets were allocated randomly to four treatments with 12 piglets in each group. The weaned piglets were fed diets containing soybean meal (SBM), cottonseed meal (CSM), SBM and CSM (SC) or fish meal (FM). Intestinal content samples were taken from five segments of the small intestine. DNA was extracted from the samples and the V3-V4 regions of the 16S rRNA gene were amplified. The microbiota of the small intestinal contents were very complex, including more than 4000 operational taxonomic units belonging to 32 different phyla. Four bacterial populations (i.e. Firmicutes, Proteobacteria, Bacteroidetes and Acidobacteria) were the most abundant bacterial groups. The genera Lactobacillus and Clostridium were found in slightly higher proportions in the groups with added CSM compared to the other groups. The proportion of reads assigned to the genus Escherichia/Shigella was much higher in the FM group. In conclusion, dietary protein source had significant effects on the small intestinal microbiome of weaned piglets, with a potentially large impact on its metabolic capabilities and intestinal health. In this study, we successfully identified the microbiomes in the small intestinal contents of weaned piglets fed different protein source diets using high-throughput sequencing. The findings provide evidence for selecting an appropriate protein source in production settings. © 2016 The Society for Applied Microbiology.
Orchestrating high-throughput genomic analysis with Bioconductor
Huber, Wolfgang; Carey, Vincent J.; Gentleman, Robert; Anders, Simon; Carlson, Marc; Carvalho, Benilton S.; Bravo, Hector Corrada; Davis, Sean; Gatto, Laurent; Girke, Thomas; Gottardo, Raphael; Hahne, Florian; Hansen, Kasper D.; Irizarry, Rafael A.; Lawrence, Michael; Love, Michael I.; MacDonald, James; Obenchain, Valerie; Oleś, Andrzej K.; Pagès, Hervé; Reyes, Alejandro; Shannon, Paul; Smyth, Gordon K.; Tenenbaum, Dan; Waldron, Levi; Morgan, Martin
2015-01-01
Bioconductor is an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology. The project aims to enable interdisciplinary research, collaboration and rapid development of scientific software. Based on the statistical programming language R, Bioconductor comprises 934 interoperable packages contributed by a large, diverse community of scientists. Packages cover a range of bioinformatic and statistical applications. They undergo formal initial review and continuous automated testing. We present an overview for prospective users and contributors. PMID:25633503
Using high-throughput literature mining to support read-across predictions of toxicity (SOT)
Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing ...
High-throughput literature mining to support read-across predictions of toxicity (ASCCT meeting)
Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing ...
Read-across predictions require high quality measured data for source analogues. These data are typically retrieved from structured databases, but biomedical literature data are often untapped because current literature mining approaches are resource intensive. Our high-throughpu...
Klukas, Christian; Chen, Dijun; Pape, Jean-Michel
2014-01-01
High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input data and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment that contains 33 maize (Zea mays 'Fernandez') plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy, up to 0.98 and 0.95, respectively. In summary, IAP provides a rich set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818
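The validation step reported above, checking computed digital traits against manual measurements, is a plain correlation computation. A minimal sketch with toy numbers (not the actual maize data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

digital_volume = [1.0, 2.1, 2.9, 4.2]   # trait computed from images
manual_volume = [1.1, 2.0, 3.1, 4.0]    # trait measured by hand
r = pearson(digital_volume, manual_volume)
```

Values near 1.0, like the 0.98 and 0.95 reported for IAP, indicate the image-derived traits track the manual measurements closely.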
CrossCheck: an open-source web tool for high-throughput screen data analysis.
Najafov, Jamil; Najafov, Ayaz
2017-07-19
Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
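Conceptually, cross-referencing a user's gene list against published datasets is set intersection over a symbol-indexed database. A toy sketch of that operation (dataset names and gene symbols are invented, not CrossCheck's actual contents):

```python
# A tiny stand-in for a database of published screen hit lists.
datasets = {
    "RNAi_screen_A": {"TP53", "MYC", "RIPK1"},
    "phosphoproteome_B": {"AKT1", "RIPK1", "GSK3B"},
}

def cross_check(user_genes, db):
    """Return, per dataset, the user's genes that appear in it."""
    query = set(user_genes)
    return {name: sorted(query & genes) for name, genes in db.items() if query & genes}

hits = cross_check(["RIPK1", "AKT1", "BRCA2"], datasets)
```

Genes recurring across independent datasets (here RIPK1) are the natural starting points for hypothesis generation.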
Annotare--a tool for annotating high-throughput biomedical investigations and resulting data.
Shankar, Ravi; Parkinson, Helen; Burdett, Tony; Hastings, Emma; Liu, Junmin; Miller, Michael; Srinivasa, Rashmi; White, Joseph; Brazma, Alvis; Sherlock, Gavin; Stoeckert, Christian J; Ball, Catherine A
2010-10-01
Computational methods in molecular biology will increasingly depend on standards-based annotations that describe biological experiments in an unambiguous manner. Annotare is a software tool that enables biologists to easily annotate their high-throughput experiments, biomaterials and data in a standards-compliant way that facilitates meaningful search and analysis. Annotare is available from http://code.google.com/p/annotare/ under the terms of the open-source MIT License (http://www.opensource.org/licenses/mit-license.php). It has been tested on both Mac and Windows.
NASA Astrophysics Data System (ADS)
Lawton, Zachary E.; Traub, Angelica; Fatigante, William L.; Mancias, Jose; O'Leary, Adam E.; Hall, Seth E.; Wieland, Jamie R.; Oberacher, Herbert; Gizzi, Michael C.; Mulligan, Christopher C.
2017-06-01
Forensic evidentiary backlogs are indicative of the growing need for cost-effective, high-throughput instrumental methods. One such emerging technology that shows high promise in meeting this demand while also allowing on-site forensic investigation is portable mass spectrometric (MS) instrumentation, particularly that which enables the coupling to ambient ionization techniques. While the benefits of rapid, on-site screening of contraband can be anticipated, the inherent legal implications of field-collected data necessitate that the analytical performance of the technology employed be commensurate with accepted techniques. To this end, comprehensive analytical validation studies are required before broad incorporation by forensic practitioners can be considered, and are the focus of this work. Pertinent performance characteristics such as throughput, selectivity, accuracy/precision, method robustness, and ruggedness have been investigated. Reliability in the form of false positive/negative response rates is also assessed, examining the effect of variables such as user training and experience level. To provide flexibility toward broad chemical evidence analysis, a suite of rapidly-interchangeable ion sources has been developed and characterized through the analysis of common illicit chemicals and emerging threats like substituted phenethylamines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong; Xu, Chao
2016-04-01
The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
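The jitter-harvesting idea can be caricatured in software: sample a periodic signal near its transition point, so that Gaussian edge jitter decides the captured bit. This is a conceptual sketch only (the real design samples a ring oscillator in FPGA fabric); all parameters here are made up, and a von Neumann corrector stands in for the paper's entropy conditioning.

```python
import random

def jitter_bits(n, period=1.0, jitter=0.05, sample_phase=0.5, seed=1):
    """Toy jitter harvesting: sample a square wave whose transition point
    wanders with Gaussian jitter; the sampled level is the raw bit."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n):
        edge = period / 2 + rng.gauss(0, jitter)  # jittered transition point
        bits.append(1 if sample_phase < edge else 0)
    return bits

def von_neumann(bits):
    """Classic debiasing: keep the first bit of each 01/10 pair, drop 00/11."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

bits = jitter_bits(2000)
clean = von_neumann(bits)
```

Sampling exactly at the mean edge position makes the raw stream roughly balanced; in hardware, the multi-phase scheme exists precisely to keep the sampling point inside the jitter window.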
Learning from Heterogeneous Data Sources: An Application in Spatial Proteomics
Breckels, Lisa M.; Holden, Sean B.; Wojnar, David; Mulvey, Claire M.; Christoforou, Andy; Groen, Arnoud; Trotter, Matthew W. B.; Kohlbacher, Oliver; Lilley, Kathryn S.; Gatto, Laurent
2016-01-01
Sub-cellular localisation of proteins is an essential post-translational regulatory mechanism that can be assayed using high-throughput mass spectrometry (MS). These MS-based spatial proteomics experiments enable us to pinpoint the sub-cellular distribution of thousands of proteins in a specific system under controlled conditions. Recent advances in high-throughput MS methods have yielded a plethora of experimental spatial proteomics data for the cell biology community. Yet, there are many third-party data sources, such as immunofluorescence microscopy or protein annotations and sequences, which represent a rich and vast source of complementary information. We present a unique transfer learning classification framework that utilises a nearest-neighbour or support vector machine system, to integrate heterogeneous data sources to considerably improve on the quantity and quality of sub-cellular protein assignment. We demonstrate the utility of our algorithms through evaluation of five experimental datasets, from four different species in conjunction with four different auxiliary data sources to classify proteins to tens of sub-cellular compartments with high generalisation accuracy. We further apply the method to an experiment on pluripotent mouse embryonic stem cells to classify a set of previously unknown proteins, and validate our findings against a recent high resolution map of the mouse stem cell proteome. The methodology is distributed as part of the open-source Bioconductor pRoloc suite for spatial proteomics data analysis. PMID:27175778
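The transfer-learning idea of weighting a primary (MS) space against an auxiliary data source can be sketched with a weighted k-nearest-neighbour vote. This is a toy analogue of the approach, not the pRoloc implementation; the organelle labels, feature values, and the mixing parameter `theta` are all hypothetical.

```python
import math

def knn_votes(X, y, query, k):
    """Class vote counts among the k nearest neighbours of `query`."""
    nearest = sorted(range(len(X)), key=lambda i: math.dist(X[i], query))[:k]
    votes = {}
    for i in nearest:
        votes[y[i]] = votes.get(y[i], 0) + 1
    return votes

def transfer_knn(primary_X, aux_X, y, q_primary, q_aux, k=3, theta=0.7):
    """Weight votes from the primary (MS) space against an auxiliary
    (e.g. annotation-derived) space with a mixing parameter theta."""
    v1 = knn_votes(primary_X, y, q_primary, k)
    v2 = knn_votes(aux_X, y, q_aux, k)
    return max(set(v1) | set(v2),
               key=lambda c: theta * v1.get(c, 0) + (1 - theta) * v2.get(c, 0))

# Hypothetical two-organelle toy data: an MS profile plus one auxiliary feature
primary_X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]]
aux_X = [[1.0], [0.9], [0.1], [0.0]]
labels = ["cytosol", "cytosol", "mitochondrion", "mitochondrion"]
call = transfer_knn(primary_X, aux_X, labels, [0.2, 0.2], [0.95])
```

In the published framework, the per-class weights are themselves learned by cross-validation rather than fixed as here.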
Quality control methodology for high-throughput protein-protein interaction screening.
Vazquez, Alexei; Rual, Jean-François; Venkatesan, Kavitha
2011-01-01
Protein-protein interactions are key to many aspects of the cell, including its cytoskeletal structure, the signaling processes in which it is involved, or its metabolism. Failure to form protein complexes or signaling cascades may sometimes translate into pathologic conditions such as cancer or neurodegenerative diseases. The set of all protein interactions between the proteins encoded by an organism constitutes its protein interaction network, representing a scaffold for biological function. Knowing the protein interaction network of an organism, combined with other sources of biological information, can unravel fundamental biological circuits and may help better understand the molecular basis of human diseases. The protein interaction network of an organism can be mapped by combining data obtained from both low-throughput screens, i.e., "one gene at a time" experiments and high-throughput screens, i.e., screens designed to interrogate large sets of proteins at once. In either case, quality controls are required to deal with the inherent imperfect nature of experimental assays. In this chapter, we discuss experimental and statistical methodologies to quantify error rates in high-throughput protein-protein interactions screens.
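A common statistical device in such quality control is to intersect the screen's hit list with curated positive and negative reference sets of protein pairs. The sketch below shows that calculation under made-up data; the pair names and reference sets are hypothetical, and real pipelines additionally model sampling uncertainty.

```python
def screen_error_rates(hits, gold_pos, gold_neg):
    """Estimate FPR/FNR of a screen by intersecting its hit list with
    positive and negative reference sets of protein pairs."""
    hits = set(hits)
    tp, fn = len(hits & gold_pos), len(gold_pos - hits)
    fp, tn = len(hits & gold_neg), len(gold_neg - hits)
    fpr = fp / (fp + tn) if fp + tn else float("nan")
    fnr = fn / (fn + tp) if fn + tp else float("nan")
    return fpr, fnr

# Hypothetical screen: one true interaction recovered, one spurious hit
fpr, fnr = screen_error_rates(
    hits={("A", "B"), ("E", "F")},
    gold_pos={("A", "B"), ("C", "D")},
    gold_neg={("E", "F"), ("G", "H")},
)
```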
Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B
2009-03-01
The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.
Bahrami-Samani, Emad; Vo, Dat T.; de Araujo, Patricia Rosa; Vogel, Christine; Smith, Andrew D.; Penalva, Luiz O. F.; Uren, Philip J.
2014-01-01
Co- and post-transcriptional regulation of gene expression is complex and multi-faceted, spanning the complete RNA lifecycle from genesis to decay. High-throughput profiling of the constituent events and processes is achieved through a range of technologies that continue to expand and evolve. Fully leveraging the resulting data is non-trivial, and requires the use of computational methods and tools carefully crafted for specific data sources and often intended to probe particular biological processes. Drawing upon databases of information pre-compiled by other researchers can further elevate analyses. Within this review, we describe the major co- and post-transcriptional events in the RNA lifecycle that are amenable to high-throughput profiling. We place specific emphasis on the analysis of the resulting data, in particular the computational tools and resources available, as well as looking towards future challenges that remain to be addressed. PMID:25515586
Ching, Travers; Zhu, Xun; Garmire, Lana X
2018-04-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox-proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer node provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
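The training objective that couples a neural network to survival data is the Cox negative log partial likelihood: each observed event's risk score is compared against everyone still at risk at that time. A minimal sketch of that loss (ties between event times are ignored for brevity; this is not the Cox-nnet code itself):

```python
import math

def cox_neg_log_partial_likelihood(scores, times, events):
    """Negative log partial likelihood of Cox regression: each observed
    event's score is compared against everyone still at risk (ties ignored)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    nll = 0.0
    for pos, i in enumerate(order):
        if events[i]:
            risk_set = order[pos:]  # individuals with time >= times[i]
            log_denom = math.log(sum(math.exp(scores[j]) for j in risk_set))
            nll -= scores[i] - log_denom
    return nll

# Two individuals with equal scores and one event: the loss reduces to log(2)
nll = cox_neg_log_partial_likelihood([0.0, 0.0], [1.0, 2.0], [1, 0])
```

In a framework like the one described, `scores` would be the network's output layer and this quantity would be minimized by gradient descent.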
Ethoscopes: An open platform for high-throughput ethomics.
Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J; French, Alice S; Jamasb, Arian R; Gilestro, Giorgio F
2017-10-01
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.
Novel Acoustic Loading of a Mass Spectrometer: Toward Next-Generation High-Throughput MS Screening.
Sinclair, Ian; Stearns, Rick; Pringle, Steven; Wingfield, Jonathan; Datwani, Sammy; Hall, Eric; Ghislain, Luke; Majlof, Lars; Bachman, Martin
2016-02-01
High-throughput, direct measurement of substrate-to-product conversion by label-free detection, without the need for engineered substrates or secondary assays, could be considered the "holy grail" of drug discovery screening. Mass spectrometry (MS) has the potential to be part of this ultimate screening solution, but is constrained by the limitations of existing MS sample introduction modes that cannot meet the throughput requirements of high-throughput screening (HTS). Here we report data from a prototype system (Echo-MS) that uses acoustic droplet ejection (ADE) to transfer femtoliter-scale droplets in a rapid, precise, and accurate fashion directly into the MS. The acoustic source can load samples into the MS from a microtiter plate at a rate of up to three samples per second. The resulting MS signal displays a very sharp attack profile and ions are detected within 50 ms of activation of the acoustic transducer. Additionally, we show that the system is capable of generating multiply charged ion species from simple peptides and large proteins. The combination of high speed and low sample volume has significant potential within not only drug discovery, but also other areas of the industry. © 2015 Society for Laboratory Automation and Screening.
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate a considerable amount of data, which often requires bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease predisposing variants in the next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merge, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as the Open Source software (https://github.com/pmadanecki/htdp).
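The core operation HTDP exposes through its GUI, filtering character-delimited rows on a column condition, can be sketched in a few lines. This is a toy analogue, not HTDP code (HTDP is Java); the header row in the BED-like sample below is added purely for illustration, since real BED files have none.

```python
import csv
import io

def filter_delimited(text, delimiter, column, predicate):
    """Keep rows of character-delimited data whose named column passes
    the predicate -- a toy analogue of HTDP-style column filtering."""
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [row for row in reader if predicate(row[column])]

# BED-like data with an illustrative header line
bed_like = "chrom\tstart\tend\nchr1\t100\t200\nchr2\t5000\t5100\n"
rows = filter_delimited(bed_like, "\t", "start", lambda v: int(v) >= 1000)
```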
Asker, Dalal
2018-09-30
Carotenoids are valuable natural colorants that exhibit numerous health-promoting properties, and thus are widely used in the food, feed, pharmaceutical and nutraceutical industries. In this study, we isolated and identified novel microbial sources that produce high-value carotenoids using high throughput screening (HTS). A library of 701 pigmented microbial strains, including marine bacteria and red yeasts, was constructed. Carotenoid profiling using HPLC-DAD-MS methods showed 88 marine bacterial strains with potential for the production of high-value carotenoids including astaxanthin (28 strains), zeaxanthin (21 strains), lutein (1 strain) and canthaxanthin (2 strains). A comprehensive 16S rRNA gene-based phylogenetic analysis revealed that these strains can be classified into 30 species belonging to five bacterial classes (Flavobacteriia, α-Proteobacteria, γ-Proteobacteria, Actinobacteria and Bacilli). Importantly, we discovered novel producers of zeaxanthin and lutein, and a high diversity in both carotenoids and producing microbial strains, which are promising and highly selective biotechnological sources for high-value carotenoids. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alexander, Kristen; Hampton, Meredith; Lopez, Rene; Desimone, Joseph
2009-03-01
When a pair of noble metal nanoparticles are brought close together, the plasmonic properties of the pair (known as a ``dimer'') give rise to intense electric field enhancements in the interstitial gap. These fields present a simple yet exquisitely sensitive system for performing single molecule surface-enhanced Raman spectroscopy (SM-SERS). Problems associated with current fabrication methods of SERS-active substrates include reproducibility issues, high cost of production and low throughput. In this study, we present a novel method for the high throughput fabrication of high quality SERS substrates. Using a polymer templating technique followed by the placement of thiolated nanoparticles through meniscus force deposition, we are able to fabricate large arrays of identical, uniformly spaced dimers in a quick, reproducible manner. Subsequent theoretical and experimental studies have confirmed the strong dependence of the SERS enhancement on both substrate geometry (e.g. dimer size, shape and gap size) and the polarization of the excitation source.
NASA Astrophysics Data System (ADS)
Alexander, Kristen; Lopez, Rene; Hampton, Meredith; Desimone, Joseph
2008-10-01
When a pair of noble metal nanoparticles are brought close together, the plasmonic properties of the pair (known as a ``dimer'') give rise to intense electric field enhancements in the interstitial gap. These fields present a simple yet exquisitely sensitive system for performing single molecule surface-enhanced Raman spectroscopy (SM-SERS). Problems associated with current fabrication methods of SERS-active substrates include reproducibility issues, high cost of production and low throughput. In this study, we present a novel method for the high throughput fabrication of high quality SERS substrates. Using a polymer templating technique followed by the placement of thiolated nanoparticles through meniscus force deposition, we are able to fabricate large arrays of identical, uniformly spaced dimers in a quick, reproducible manner. Subsequent theoretical and experimental studies have confirmed the strong dependence of the SERS enhancement on both substrate geometry (e.g. dimer size, shape and gap size) and the polarization of the excitation source.
Optical element for full spectral purity from IR-generated EUV light sources
NASA Astrophysics Data System (ADS)
van den Boogaard, A. J. R.; Louis, E.; van Goor, F. A.; Bijkerk, F.
2009-03-01
Laser produced plasma (LPP) sources are generally considered attractive for high power EUV production in next generation lithography equipment. Such plasmas are most efficiently excited by the relatively long, infrared wavelengths of CO2 lasers, but a significant part of the rotational-vibrational excitation lines of the CO2 radiation will be backscattered by the plasma's critical density surface and consequently will be present as parasitic radiation in the spectrum of such sources. Since most optical elements in the EUV collecting and imaging train have a high reflection coefficient for IR radiation, undesirable heating phenomena at the resist level are likely to occur. In this study a completely new principle is employed to obtain full separation of EUV and IR radiation from the source by a single optical component. While the application of a transmission filter would come at the expense of EUV throughput, this technique potentially enables wavelength separation without losing reflectance compared to a conventional Mo/Si multilayer coated element. As a result this method provides full spectral purity from the source without loss in EUV throughput. Detailed calculations on the principle of operation are presented.
Sources of PCR-induced distortions in high-throughput sequencing data sets
Kebschull, Justus M.; Zador, Anthony M.
2015-01-01
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
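The stochasticity the authors identify has a classical model: PCR as a Galton-Watson branching process in which each molecule duplicates with some per-cycle efficiency, so single-molecule inputs drift apart in final copy number. The simulation below is a sketch of that model under assumed parameters (15 cycles, 80% efficiency), not the paper's own analysis.

```python
import random

def pcr_copies(cycles, efficiency=0.8, start=1, rng=None):
    """Galton-Watson model of PCR: each molecule is duplicated with
    probability `efficiency` per cycle."""
    rng = rng or random.Random()
    n = start
    for _ in range(cycles):
        n += sum(1 for _ in range(n) if rng.random() < efficiency)
    return n

# Amplify 200 independent single-molecule templates for 15 cycles
rng = random.Random(0)
counts = [pcr_copies(15, rng=rng) for _ in range(200)]
```

The expected yield is start × (1 + efficiency)^cycles (about 6,700 here), but the realized counts spread widely because the outcome of the first few cycles is frozen into all later ones, which is exactly why low-input libraries show skewed sequence representation.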
Nanosurveyor: a framework for real-time data processing
Daurer, Benedikt J.; Krishnan, Hari; Perciano, Talita; ...
2017-01-31
Background: The ever-improving brightness of accelerator-based sources is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality. Results: Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments through efficient kernels and load-balanced workflows that are scalable by design. We describe the streamlined processing pipeline of ptychography data analysis. Conclusions: The pipeline provides throughput, compression, and resolution as well as rapid feedback to the microscope operators.
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
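The parallelism MUMmerGPU exploits is per-query: every read is matched against one pre-built reference index independently, so thousands of reads map naturally onto thousands of GPU threads. The sketch below illustrates that structure with a simple k-mer index standing in for MUMmer's suffix tree; the reference string, reads, and k are made up, and no GPU code is attempted here.

```python
def index_reference(ref, k):
    """k-mer -> positions index, standing in for MUMmer's suffix tree."""
    idx = {}
    for i in range(len(ref) - k + 1):
        idx.setdefault(ref[i:i + k], []).append(i)
    return idx

def place_reads(ref, reads, k):
    """Each read is placed independently -- the per-query data parallelism
    that a GPU implementation maps onto thousands of threads."""
    idx = index_reference(ref, k)
    return {r: [p for p in idx.get(r[:k], []) if ref.startswith(r, p)]
            for r in reads}

# Hypothetical reference and reads (each read must be at least k long)
ref = "ACGTACGTTAGC"
placements = place_reads(ref, ["ACGTT", "TAGC", "GGGG"], k=4)
```

Because the index is built once and only read afterwards, it can be shared across all threads, which is what makes the suffix tree on the graphics card effective.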
Optima HD Imax: Molecular Implant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tieger, D. R.; Splinter, P. R.; Hsieh, T. J.
2008-11-03
Molecular implantation offers semiconductor device manufacturers multiple advantages over traditional high current ion implanters. The dose multiplication due to implanting more than one atom per molecule and the transport of beams at higher energies relative to the effective particle energies result in significant throughput enhancements without risk of energy contamination. The Optima HD Imax is introduced with molecular implant capability and the ability to reach up to 4.2 keV effective ¹¹B from octadecaborane (B₁₈H₂₂). The ion source and beamline are optimized for molecular species ionization and transport. The beamline is coupled to the Optima HD mechanically scanned endstation. The use of spot beam technology with ionized molecules maximizes the throughput potential and produces uniform implants with fast setup time and with superior angle control. The implanter architecture is designed to run multiple molecular species; for example, in addition to B₁₈H₂₂ the system is capable of implanting carbon molecules for strain engineering and shallow junction engineering. Source lifetime data and typical operating conditions are described both for high dose, memory applications such as dual poly gate as well as lower energy implants for source drain extension and contact implants. Throughputs have been achieved in excess of 50 wafers per hour at doses up to 1×10¹⁶ ions/cm² and for energies as low as 1 keV.
Crop 3D-a LiDAR based platform for 3D high-throughput crop phenotyping.
Guo, Qinghua; Wu, Fangfang; Pang, Shuxin; Zhao, Xiaoqian; Chen, Linhai; Liu, Jin; Xue, Baolin; Xu, Guangcai; Li, Le; Jing, Haichun; Chu, Chengcai
2018-03-01
With a growing population and shrinking arable land, breeding has been considered an effective way to solve the food crisis. As an important part of breeding, high-throughput phenotyping can accelerate the breeding process effectively. Light detection and ranging (LiDAR) is an active remote sensing technology that is capable of acquiring three-dimensional (3D) data accurately, and has a great potential in crop phenotyping. Given that crop phenotyping based on LiDAR technology is not common in China, we developed a high-throughput crop phenotyping platform, named Crop 3D, which integrated a LiDAR sensor, high-resolution camera, thermal camera and hyperspectral imager. Compared with traditional crop phenotyping techniques, Crop 3D can acquire multi-source phenotypic data in the whole crop growing period and extract plant height, plant width, leaf length, leaf width, leaf area, leaf inclination angle and other parameters for plant biology and genomics analysis. In this paper, we described the designs, functions and testing results of the Crop 3D platform, and briefly discussed the potential applications and future development of the platform in phenotyping. We concluded that platforms integrating LiDAR and traditional remote sensing techniques might be the future trend of crop high-throughput phenotyping.
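One of the simplest traits extracted from such LiDAR data, plant height, is typically computed from z-percentiles of the point cloud rather than the raw min/max, so that stray returns do not dominate. The sketch below is a generic illustration with a synthetic cloud, not the Crop 3D algorithm; the percentile choices are assumptions.

```python
def plant_height(points, ground_pct=1, top_pct=99):
    """Height as the spread between low and high z-percentiles of a
    point cloud, which is robust to stray LiDAR returns."""
    z = sorted(p[2] for p in points)
    lo = z[int(len(z) * ground_pct / 100)]
    hi = z[min(int(len(z) * top_pct / 100), len(z) - 1)]
    return hi - lo

# Synthetic canopy from 0 to 2 m plus one spurious return at 10 m
cloud = [(0.0, 0.0, 2.0 * i / 99) for i in range(100)] + [(0.0, 0.0, 10.0)]
height = plant_height(cloud)
```

A naive max-minus-min estimate on the same cloud would report 10 m; the percentile version stays near the true 2 m canopy height.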
Improved traffic operations through real-time data collection and control.
DOT National Transportation Integrated Search
2016-05-01
Intersections are a major source of delay in urban networks, and reservation-based intersection control for autonomous vehicles has great potential to improve intersection throughput. However, despite the high flexibility in reservations, existin...
Pathway Profiling and Tissue Modeling of Developmental Toxicity
High-throughput and high-content screening (HTS-HCS) studies are providing a rich source of data that can be applied to in vitro profiling of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition t...
Virtual Embryo: Systems Modeling in Developmental Toxicity
High-throughput and high-content screening (HTS-HCS) studies are providing a rich source of data that can be applied to in vitro profiling of chemical compounds for biological activity and potential toxicity. EPA’s ToxCast™ project, and the broader Tox21 consortium, in addition t...
Quality Control for Ambient Sampling of PCDD/PCDF from Open Combustion Sources
Both long duration (> 6 h) and high temperature (up to 139 °C) sampling efforts were conducted using ambient air sampling methods to determine if either high volume throughput or higher than ambient sampling temperatures resulted in loss of target polychlorinated dibenzodioxins/d...
Short-read, high-throughput sequencing technology for STR genotyping
Bornman, Daniel M.; Hester, Mark E.; Schuetter, Jared M.; Kasoji, Manjula D.; Minard-Smith, Angela; Barden, Curt A.; Nelson, Scott C.; Godbold, Gene D.; Baker, Christine H.; Yang, Boyu; Walther, Jacquelyn E.; Tornes, Ivan E.; Yan, Pearlly S.; Rodriguez, Benjamin; Bundschuh, Ralf; Dickens, Michael L.; Young, Brian A.; Faith, Seth A.
2013-01-01
DNA-based methods for human identification principally rely upon genotyping of short tandem repeat (STR) loci. Electrophoretic-based techniques for variable-length classification of STRs are universally utilized, but are limited in that they have relatively low throughput and do not yield nucleotide sequence information. High-throughput sequencing technology may provide a more powerful instrument for human identification, but is not currently validated for forensic casework. Here, we present a systematic method to perform high-throughput genotyping analysis of the Combined DNA Index System (CODIS) STR loci using short-read (150 bp) massively parallel sequencing technology. Open source reference alignment tools were optimized to evaluate PCR-amplified STR loci using a custom designed STR genome reference. Evaluation of this approach demonstrated that the 13 CODIS STR loci and amelogenin (AMEL) locus could be accurately called from individual and mixture samples. Sensitivity analysis showed that as few as 18,500 reads, aligned to an in silico referenced genome, were required to genotype an individual (>99% confidence) for the CODIS loci. The power of this technology was further demonstrated by identification of variant alleles containing single nucleotide polymorphisms (SNPs) and the development of quantitative measurements (reads) for resolving mixed samples. PMID:25621315
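The central idea of sequence-based STR genotyping, that an allele is defined by its repeat-unit count rather than its electrophoretic fragment length, can be sketched in a few lines. This is an illustrative toy, not the authors' alignment-based pipeline; the `count_repeats` helper and the TH01-like AATG motif example are assumptions for demonstration:

```python
import re

def count_repeats(read: str, motif: str) -> int:
    """Return the longest uninterrupted run of `motif` within `read`."""
    runs = re.findall(f"(?:{motif})+", read)
    return max((len(r) // len(motif) for r in runs), default=0)

# Toy read spanning a TH01-like locus with a 4-bp AATG repeat.
read = "CCTT" + "AATG" * 7 + "GGAA"
print(count_repeats(read, "AATG"))  # 7
```

A real caller must additionally handle sequencing errors, interrupted repeats, and SNP-containing variant alleles, which is why the authors align reads against a custom STR reference rather than scanning raw reads.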
Hadoop and friends - first experience at CERN with a new platform for high throughput analysis steps
NASA Astrophysics Data System (ADS)
Duellmann, D.; Surdy, K.; Menichetti, L.; Toebbicke, R.
2017-10-01
The statistical analysis of infrastructure metrics comes with several specific challenges, including the fairly large volume of unstructured metrics from a large set of independent data sources. Hadoop and Spark provide an ideal environment in particular for the first steps of skimming rapidly through hundreds of TB of low relevance data to find and extract the much smaller data volume that is relevant for statistical analysis and modelling. This presentation will describe the new Hadoop service at CERN and the use of several of its components for high throughput data aggregation and ad-hoc pattern searches. We will describe the hardware setup used, the service structure with a small set of decoupled clusters and the first experience with co-hosting different applications and performing software upgrades. We will further detail the common infrastructure used for data extraction and preparation from continuous monitoring and database input sources.
Raspberry Pi-powered imaging for plant phenotyping.
Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A
2018-03-01
Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
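A trait-extraction step of the kind performed downstream with open-source tools can be reduced to a toy sketch: threshold a grayscale image and count plant pixels as a proxy for shoot area. This is a minimal stand-in and not PlantCV's actual API; `shoot_area` and the threshold value are hypothetical:

```python
def shoot_area(gray, threshold=128):
    """Count 'plant' pixels in a grayscale image (a list of pixel rows)
    by simple thresholding -- a toy stand-in for a segmentation step."""
    return sum(1 for row in gray for px in row if px > threshold)

# A 3x3 toy image: three bright (plant) pixels against a dark background.
img = [[0, 200, 210], [0, 0, 190], [0, 0, 0]]
print(shoot_area(img))  # 3
```

In practice the area in pixels would be converted to physical units via a calibration target imaged at the same camera distance.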
Analysis of High-Throughput ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Zangar, Richard C.
Our research group develops analytical methods and software for the high-throughput analysis of quantitative enzyme-linked immunosorbent assay (ELISA) microarrays. ELISA microarrays differ from DNA microarrays in several fundamental aspects and most algorithms for analysis of DNA microarray data are not applicable to ELISA microarrays. In this review, we provide an overview of the steps involved in ELISA microarray data analysis and how the statistically sound algorithms we have developed provide an integrated software suite to address the needs of each data-processing step. The algorithms discussed are available in a set of open-source software tools (http://www.pnl.gov/statistics/ProMAT).
Tschiersch, Henning; Junker, Astrid; Meyer, Rhonda C; Altmann, Thomas
2017-01-01
Automated plant phenotyping has been established as a powerful new tool in studying plant growth, development and response to various types of biotic or abiotic stressors. Respective facilities mainly apply non-invasive imaging-based methods, which enable the continuous quantification of the dynamics of plant growth and physiology during developmental progression. However, especially for plants of larger size, integrative, automated and high-throughput measurements of complex physiological parameters such as photosystem II efficiency determined through kinetic chlorophyll fluorescence analysis remain a challenge. We present the technical installations and the establishment of experimental procedures that allow the integrated high-throughput imaging of all commonly determined PSII parameters for small and large plants, using kinetic chlorophyll fluorescence imaging systems (FluorCam, PSI) integrated into automated phenotyping facilities (Scanalyzer, LemnaTec). Besides determination of the maximum PSII efficiency, we focused on implementation of high-throughput-amenable protocols recording PSII operating efficiency (Φ_PSII). Using the presented setup, this parameter is shown to be reproducibly measured in differently sized plants despite the corresponding variation in distance between plants and light source, which caused small differences in incident light intensity. Values of Φ_PSII obtained with the automated chlorophyll fluorescence imaging setup correlated very well with conventionally determined data using a spot-measuring chlorophyll fluorometer. The established high-throughput operating protocols enable the screening of up to 1080 small and 184 large plants per hour, respectively. The application of the implemented high-throughput protocols is demonstrated in screening experiments performed with large Arabidopsis and maize populations assessing natural variation in PSII efficiency.
The incorporation of imaging systems suitable for kinetic chlorophyll fluorescence analysis leads to a substantial extension of the feature spectrum that can be assessed in the presented high-throughput automated plant phenotyping platforms, thus enabling the simultaneous assessment of plant architectural and biomass-related traits and their relations to physiological features such as PSII operating efficiency. The implemented high-throughput protocols are applicable to a broad spectrum of model and crop plants of different sizes (up to 1.80 m height) and architectures. A deeper understanding of the relation of plant architecture, biomass formation and photosynthetic efficiency has great potential with respect to crop and yield improvement strategies.
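The PSII operating efficiency reported above has a standard definition (the Genty parameter): Φ_PSII = (Fm' − F) / Fm', where F is the steady-state fluorescence under actinic light and Fm' the maximal fluorescence after a saturating pulse. A minimal sketch of that calculation; the fluorescence values are invented for illustration:

```python
def phi_psii(fm_prime: float, f_steady: float) -> float:
    """PSII operating efficiency: (Fm' - F) / Fm'."""
    return (fm_prime - f_steady) / fm_prime

# Invented per-pixel (or per-plant) fluorescence readouts.
print(phi_psii(1200.0, 480.0))  # 0.6
```

In an imaging setup this formula is applied per pixel, then averaged over the segmented plant area to give the per-plant value compared against the spot fluorometer.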
Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data
Ching, Travers; Zhu, Xun
2018-01-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves predictive accuracy equal to or better than other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Survival Forests and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet. PMID:29634719
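At the core of a Cox-model-based network is the Cox partial likelihood, with the usual linear predictor replaced by the network's output score. A pure-Python sketch of a Breslow-style negative log partial likelihood over arbitrary risk scores; this is a simplified stand-in for the paper's loss (ties and regularization ignored), and the function name is an assumption:

```python
import math

def neg_log_partial_likelihood(scores, times, events):
    """Breslow-style negative log Cox partial likelihood.
    scores: per-patient risk scores (a network output in Cox-nnet's case);
    times:  follow-up times; events: 1 = event observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    nll, risk_sum = 0.0, 0.0
    # Walk from the latest time backwards so risk_sum accumulates
    # exp(score) over the risk set {j : t_j >= t_i}.
    for i in reversed(order):
        risk_sum += math.exp(scores[i])
        if events[i]:
            nll -= scores[i] - math.log(risk_sum)
    return nll

# Two patients with equal scores and both events: loss = log 2.
print(neg_log_partial_likelihood([0.0, 0.0], [1.0, 2.0], [1, 1]))
```

Training then amounts to minimizing this quantity with respect to the parameters that produce `scores`, exactly as with any other differentiable loss.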
High throughput optical scanner
Basiji, David A.; van den Engh, Gerrit J.
2001-01-01
A scanning apparatus is provided to obtain automated, rapid and sensitive scanning of substrate fluorescence, optical density or phosphorescence. The scanner uses a constant path length optical train, which enables the combination of a moving beam for high speed scanning with phase-sensitive detection for noise reduction, comprising a light source, a scanning mirror to receive light from the light source and sweep it across a steering mirror, a steering mirror to receive light from the scanning mirror and reflect it to the substrate, whereby it is swept across the substrate along a scan arc, and a photodetector to receive emitted or scattered light from the substrate, wherein the optical path length from the light source to the photodetector is substantially constant throughout the sweep across the substrate. The optical train can further include a waveguide or mirror to collect emitted or scattered light from the substrate and direct it to the photodetector. For phase-sensitive detection the light source is intensity modulated and the detector is connected to phase-sensitive detection electronics. A scanner using a substrate translator is also provided. For two dimensional imaging the substrate is translated in one dimension while the scanning mirror scans the beam in a second dimension. For a high throughput scanner, stacks of substrates are loaded onto a conveyor belt from a tray feeder.
Mason, Annaliese S; Zhang, Jing; Tollenaere, Reece; Vasquez Teuber, Paula; Dalton-Morgan, Jessica; Hu, Liyong; Yan, Guijun; Edwards, David; Redden, Robert; Batley, Jacqueline
2015-09-01
Germplasm collections provide an extremely valuable resource for breeders and researchers. However, misclassification of accessions by species often hinders the effective use of these collections. We propose that use of high-throughput genotyping tools can provide a fast, efficient and cost-effective way of confirming species in germplasm collections, as well as providing valuable genetic diversity data. We genotyped 180 Brassicaceae samples sourced from the Australian Grains Genebank using the recently released Illumina Infinium Brassica 60K SNP array. Of these, 76 were provided on the basis of suspected misclassification and another 104 were sourced independently from the germplasm collection. Presence of the A- and C-genomes combined with principal component analysis clearly separated Brassica rapa, B. oleracea, B. napus, B. carinata and B. juncea samples into distinct species groups. Several lines were further validated using chromosome counts. Overall, 18% of samples (32/180) were misclassified on the basis of species. Within these 180 samples, 23/76 (30%) supplied on the basis of suspected misclassification were misclassified, and 9/104 (9%) of the samples randomly sourced from the Australian Grains Genebank were misclassified. Surprisingly, several individuals were also found to be the product of interspecific hybridization events. The SNP (single nucleotide polymorphism) array proved effective at confirming species, and provided useful information related to genetic diversity. As similar genomic resources become available for different crops, high-throughput molecular genotyping will offer an efficient and cost-effective method to screen germplasm collections worldwide, facilitating more effective use of these valuable resources by breeders and researchers. © 2015 John Wiley & Sons Ltd.
Identifying populations sensitive to environmental chemicals by simulating toxicokinetic variability
We incorporate inter-individual variability, including variability across demographic subgroups, into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of...
Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P; Panitz, Frank; Bendixen, Christian; Nielsen, Rasmus; Willerslev, Eske
2007-02-14
The invention of the Genome Sequence 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot be subsequently assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing through the high-throughput Genome Sequence 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5' tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (mis-assignment rate < 0.4%). Therefore, the method enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5' primer tagging is a useful method by which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products.
The new approach will be of value to a broad range of research areas, such as those of comparative genomics, complete mitochondrial analyses, population genetics, and phylogenetics.
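The tag-assignment step amounts to a lookup on the first bases of each read. The sketch below is illustrative only: the tag sequences and the `demultiplex` helper are invented, and a real implementation must also tolerate sequencing errors within the tag (the "sequencing anomalies" the authors account for):

```python
def demultiplex(reads, tag_to_sample, tag_len=4):
    """Assign each read to its source specimen via its 5' nucleotide tag.
    Reads with an unrecognized tag are binned as 'unassigned'."""
    bins = {}
    for read in reads:
        sample = tag_to_sample.get(read[:tag_len], "unassigned")
        bins.setdefault(sample, []).append(read[tag_len:])  # strip the tag
    return bins

tags = {"CACA": "specimen_A", "TGTG": "specimen_B"}
reads = ["CACAGGATTC", "TGTGGGATTC", "AAAAGGATTC"]
print({k: len(v) for k, v in demultiplex(reads, tags).items()})
# {'specimen_A': 1, 'specimen_B': 1, 'unassigned': 1}
```

The observed 5'-nucleotide bias suggests that, in practice, tag sets should be balanced across starting nucleotides so that per-specimen read yields stay comparable.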
Hassig, Christian A; Zeng, Fu-Yue; Kung, Paul; Kiankarimi, Mehrak; Kim, Sylvia; Diaz, Paul W; Zhai, Dayong; Welsh, Kate; Morshedian, Shana; Su, Ying; O'Keefe, Barry; Newman, David J; Rusman, Yudi; Kaur, Harneet; Salomon, Christine E; Brown, Susan G; Baire, Beeraiah; Michel, Andrew R; Hoye, Thomas R; Francis, Subhashree; Georg, Gunda I; Walters, Michael A; Divlianska, Daniela B; Roth, Gregory P; Wright, Amy E; Reed, John C
2014-09-01
Antiapoptotic Bcl-2 family proteins are validated cancer targets composed of six related proteins. From a drug discovery perspective, these are challenging targets that exert their cellular functions through protein-protein interactions (PPIs). Although several isoform-selective inhibitors have been developed using structure-based design or high-throughput screening (HTS) of synthetic chemical libraries, no large-scale screen of natural product collections has been reported. A competitive displacement fluorescence polarization (FP) screen of nearly 150,000 natural product extracts was conducted against all six antiapoptotic Bcl-2 family proteins using fluorochrome-conjugated peptide ligands that mimic functionally relevant PPIs. The screens were conducted in 1536-well format and displayed satisfactory overall HTS statistics, with Z'-factor values ranging from 0.72 to 0.83 and a hit confirmation rate between 16% and 64%. Confirmed active extracts were orthogonally tested in a luminescent assay for caspase-3/7 activation in tumor cells. Active extracts were resupplied, and effort toward the isolation of pure active components was initiated through iterative bioassay-guided fractionation. Several previously described altertoxins were isolated from a microbial source, and the pure compounds demonstrate activity in both Bcl-2 FP and caspase cellular assays. The studies demonstrate the feasibility of ultra-high-throughput screening using natural product sources and highlight some of the challenges associated with this approach. © 2014 Society for Laboratory Automation and Screening.
Hassig, Christian A.; Zeng, Fu-Yue; Kung, Paul; Kiankarimi, Mehrak; Kim, Sylvia; Diaz, Paul W.; Zhai, Dayong; Welsh, Kate; Morshedian, Shana; Su, Ying; O'Keefe, Barry; Newman, David J.; Rusman, Yudi; Kaur, Harneet; Salomon, Christine E.; Brown, Susan G.; Baire, Beeraiah; Michel, Andrew R.; Hoye, Thomas R.; Francis, Subhashree; Georg, Gunda I.; Walters, Michael A.; Divlianska, Daniela B.; Roth, Gregory P.; Wright, Amy E.; Reed, John C.
2015-01-01
Anti-apoptotic Bcl-2 family proteins are validated cancer targets comprised of six related proteins. From a drug discovery perspective, these are challenging targets that exert their cellular functions through protein-protein interactions (PPIs). While several isoform-selective inhibitors have been developed using structure-based design or high throughput screening (HTS) of synthetic chemical libraries, no large scale screen of natural product collections has been reported. A competitive displacement fluorescence polarization (FP) screen of nearly 150,000 natural product extracts was conducted against all six anti-apoptotic Bcl-2 family proteins using fluorochrome-conjugated peptide ligands that mimic functionally-relevant PPIs. The screens were conducted in 1,536-well format and displayed satisfactory overall HTS statistics, with Z’-factor values ranging from 0.72 to 0.83, and a hit confirmation rate between 16-64%. Confirmed active extracts were orthogonally tested in a luminescent assay for caspase-3/7 activation in tumor cells. Active extracts were resupplied and effort toward the isolation of pure active components was initiated through iterative bioassay-guided fractionation. Several previously described altertoxins were isolated from a microbial source and the pure compounds demonstrate activity in both Bcl-2 FP and caspase cellular assays. The studies demonstrate the feasibility of ultra high throughput screening using natural product sources and highlight some of the challenges associated with this approach. PMID:24870016
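The Z'-factor quoted in both records has a standard definition: Z' = 1 − 3(σ_pos + σ_neg) / |μ_pos − μ_neg|, computed from positive- and negative-control wells, with values above ~0.5 conventionally indicating an assay suitable for HTS. A minimal sketch with invented control readouts:

```python
import statistics as st

def z_prime(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1 - 3 * (st.stdev(pos) + st.stdev(neg)) / abs(st.mean(pos) - st.mean(neg))

pos = [100, 102, 98, 101, 99]  # invented positive-control FP signals
neg = [10, 12, 8, 11, 9]       # invented negative-control FP signals
print(round(z_prime(pos, neg), 2))  # 0.89
```

Wide control separation relative to control noise is what allows single-point screening of crude extracts, whose intrinsic variability is already higher than that of pure compounds.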
Cheng, Sy-Chyi; Huang, Min-Zong; Wu, Li-Chieh; Chou, Chih-Chiang; Cheng, Chu-Nian; Jhang, Siou-Sian; Shiea, Jentaie
2012-07-17
Interfacing thin layer chromatography (TLC) with ambient mass spectrometry (AMS) has been an important area of analytical chemistry because of its capability to rapidly separate and characterize the chemical compounds. In this study, we have developed a high-throughput TLC-AMS system using building blocks to deal, deliver, and collect the TLC plate through an electrospray-assisted laser desorption ionization (ELDI) source. This is the first demonstration of the use of building blocks to construct and test the TLC-MS interfacing system. With the advantages of being readily available, cheap, reusable, and extremely easy to modify without consuming any material or reagent, the use of building blocks to develop the TLC-AMS interface is undoubtedly a green methodology. The TLC plate delivery system consists of a storage box, plate dealing component, conveyer, light sensor, and plate collecting box. During a TLC-AMS analysis, the TLC plate was sent to the conveyer from a stack of TLC plates placed in the storage box. As the TLC plate passed through the ELDI source, the chemical compounds separated on the plate would be desorbed by laser desorption and subsequently postionized by electrospray ionization. The samples, including a mixture of synthetic dyes and extracts of pharmaceutical drugs, were analyzed to demonstrate the capability of this TLC-ELDI/MS system for high-throughput analysis.
Gupta, Surya; De Puysseleyr, Veronic; Van der Heyden, José; Maddelein, Davy; Lemmens, Irma; Lievens, Sam; Degroeve, Sven; Tavernier, Jan; Martens, Lennart
2017-05-01
Protein-protein interaction (PPI) studies have dramatically expanded our knowledge about cellular behaviour and development in different conditions. A multitude of high-throughput PPI techniques have been developed to achieve proteome-scale coverage for PPI studies, including the microarray-based Mammalian Protein-Protein Interaction Trap (MAPPIT) system. Because such high-throughput techniques typically report thousands of interactions, managing and analysing the large amounts of acquired data is a challenge. We have therefore built the MAPPIT cell microArray Protein Protein Interaction-Data management & Analysis Tool (MAPPI-DAT) as an automated data management and analysis tool for MAPPIT cell microarray experiments. MAPPI-DAT stores the experimental data and metadata in a systematic and structured way, automates data analysis and interpretation, and enables the meta-analysis of MAPPIT cell microarray data across all stored experiments. MAPPI-DAT is developed in Python, using R for data analysis and MySQL as the data management system. MAPPI-DAT is cross-platform and can be run on Microsoft Windows, Linux and OS X/macOS. The source code and a Microsoft Windows executable are freely available under the permissive Apache2 open source license at https://github.com/compomics/MAPPI-DAT. jan.tavernier@vib-ugent.be or lennart.martens@vib-ugent.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Wei; Shabbir, Faizan; Gong, Chao
2015-04-13
We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer, as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
A bioinformatics roadmap for the human vaccines project.
Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C
2017-06-01
Biomedical research has become a data intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high throughput, high complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.
A compact imaging spectroscopic system for biomolecular detections on plasmonic chips.
Lo, Shu-Cheng; Lin, En-Hung; Wei, Pei-Kuen; Tsai, Wan-Shao
2016-10-17
In this study, we demonstrate a compact imaging spectroscopic system for high-throughput detection of biomolecular interactions on plasmonic chips, based on a curved grating as the key element for light diffraction and focusing. Both the curved grating and the plasmonic chips are fabricated on flexible plastic substrates using a gas-assisted thermal-embossing method. A fiber-coupled broadband light source and a camera are included in the system. Spectral resolution within 1 nm is achieved in sensing environmental index solutions and protein bindings. The detected sensitivities of the plasmonic chip are comparable with those of a commercial spectrometer. An extra one-dimensional scanning stage enables high-throughput detection of protein binding on a designed plasmonic chip consisting of several nanoslit arrays with different periods. The detected resonance wavelengths match well with the grating equation under an air environment. Wavelength shifts between 1 and 9 nm are detected for antigens of various concentrations binding with antibodies. A simple, mass-producible and cost-effective imaging spectroscopic system has thus been demonstrated for real-time, label-free, highly sensitive and high-throughput screening of biomolecular interactions.
Budavari, Tamas; Langmead, Ben; Wheelan, Sarah J.; Salzberg, Steven L.; Szalay, Alexander S.
2015-01-01
When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc's reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license. PMID:25780763
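The sort-and-reduce prioritization pattern is easy to state independently of the GPU: sort the candidate positions produced by seed lookups, run-length-reduce equal positions into counts, and keep the highest-count candidates for full alignment scoring. A CPU-side Python sketch of that pattern, not Arioc's actual kernels, with invented names and data:

```python
from itertools import groupby

def prioritize_locations(seed_hits, top_k=3):
    """Rank candidate genome locations by seed-hit count.
    seed_hits: candidate positions emitted by seed lookups; sorting then
    run-length reduction mirrors the parallel sort/reduce idea."""
    hits = sorted(seed_hits)                                   # "sort" stage
    counts = [(pos, sum(1 for _ in grp))                       # "reduce" stage
              for pos, grp in groupby(hits)]
    return sorted(counts, key=lambda pc: -pc[1])[:top_k]

print(prioritize_locations([5017, 993, 5017, 5017, 993, 72]))
# [(5017, 3), (993, 2), (72, 1)]
```

On a GPU both stages map onto well-studied primitives (radix sort and segmented reduction), which is what makes the formulation attractive for throughput.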
Accelerating Adverse Outcome Pathway Development Using Publicly Available Data Sources
The adverse outcome pathway (AOP) concept links molecular perturbations with organism and population-level outcomes to support high-throughput toxicity testing. International efforts are underway to define AOPs and store the information supporting these AOPs in a central knowledg...
Virtual Embryo: Systems Modeling in Developmental Toxicity
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to chemical profiling to address sensitivity and specificity of molecular targets, biological pathways, cellular and developmental processes. EPA’s ToxCast project is testing 960 uniq...
Breeding for improved potato nutrition: High amylose starch potatoes show promise as fiber source
USDA-ARS?s Scientific Manuscript database
Potato starch is composed of approximately 75% amylopectin and 25% amylose. We are interested in breeding for higher amylose content, which would increase the fiber content of potato and decrease glycemic index. In order to make progress in a breeding program, we have developed a high throughput ass...
Sun, Huaju; Chang, Qing; Liu, Long; Chai, Kungang; Lin, Guangyan; Huo, Qingling; Zhao, Zhenxia; Zhao, Zhongxing
2017-11-22
Several novel peptides with high ACE-I inhibitory activity were successfully screened from sericin hydrolysate (SH) for the first time by coupling in silico and in vitro approaches. Most screening processes for ACE-I inhibitory peptides are achieved through high-throughput in silico simulation followed by in vitro verification. QSAR-model predictions indicated that six chosen SH peptides exhibited moderately high ACE-I inhibitory activity (log IC50 values: 1.63-2.34). Two tripeptides among the six were further selected for analysis of the ACE-I inhibition mechanism; Lineweaver-Burk plots indicated that they behave as competitive ACE-I inhibitors. The C-terminal residues of short-chain peptides that contain more H-bond acceptor groups can readily form hydrogen bonds with ACE-I, conferring higher ACE-I inhibitory activity. Overall, sericin protein is a strong source of ACE-I inhibitors and could be deemed a promising agent for antihypertension applications.
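Competitive inhibition, which the Lineweaver-Burk analysis diagnoses, has a standard rate law: the apparent Km is scaled by (1 + [I]/Ki) while Vmax is unchanged, so double-reciprocal lines at different [I] intersect on the 1/v axis. A small numeric sketch with invented kinetic constants (not values from the paper):

```python
def rate_competitive(s, vmax, km, i, ki):
    """Michaelis-Menten rate with a competitive inhibitor:
    v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S]).
    Apparent Km rises with [I]; Vmax stays the same."""
    return vmax * s / (km * (1 + i / ki) + s)

v_no_inhibitor = rate_competitive(50.0, 10.0, 25.0, 0.0, 5.0)
v_inhibited = rate_competitive(50.0, 10.0, 25.0, 10.0, 5.0)
print(round(v_no_inhibitor, 2), round(v_inhibited, 2))  # 6.67 4.0
```

Because Vmax is unaffected, sufficiently high substrate concentration overcomes the inhibition, the kinetic signature the Lineweaver-Burk plots in the study would have shown.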
Kebschull, Moritz; Fittler, Melanie Julia; Demmer, Ryan T; Papapanou, Panos N
2017-01-01
Today, -omics analyses, including the systematic cataloging of messenger RNA and microRNA sequences or DNA methylation patterns in a cell population, organ, or tissue sample, allow for an unbiased, comprehensive genome-level analysis of complex diseases, offering a large advantage over earlier "candidate" gene or pathway analyses. A primary goal in the analysis of these high-throughput assays is the detection of those features among several thousand that differ between different groups of samples. In the context of oral biology, our group has successfully utilized -omics technology to identify key molecules and pathways in different diagnostic entities of periodontal disease. A major issue when inferring biological information from high-throughput -omics studies is the fact that the sheer volume of high-dimensional data generated by contemporary technology is not appropriately analyzed using common statistical methods employed in the biomedical sciences. In this chapter, we outline a robust and well-accepted bioinformatics workflow for the initial analysis of -omics data generated using microarrays or next-generation sequencing technology using open-source tools. Starting with quality control measures and necessary preprocessing steps for data originating from different -omics technologies, we next outline a differential expression analysis pipeline that can be used for data from both microarray and sequencing experiments, and offers the possibility to account for random or fixed effects. Finally, we present an overview of the possibilities for a functional analysis of the obtained data.
GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit
Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik
2013-01-01
Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed at massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is open-source, free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358
Improving Data Transfer Throughput with Direct Search Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar
2016-01-01
Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists in using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and lightweight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that when compared to user default settings our direct search methods can achieve up to 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source endpoints, a common scenario at high performance computing facilities.
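The idea of tuning the stream count by direct search can be sketched in a few lines. The compass-search loop below and the synthetic throughput model are illustrative assumptions, not the authors' GridFTP/Globus implementation; in practice `measure(n)` would time a real transfer with n parallel streams:

```python
def direct_search_streams(measure, n_init=1, n_min=1, n_max=64):
    """Compass (direct) search over the number of parallel streams.

    `measure(n)` returns observed throughput with n streams; no model,
    gradient, or historical data is required.
    """
    n = n_init
    best = measure(n)
    step = 4
    while step >= 1:
        moved = False
        for cand in (n + step, n - step):
            if n_min <= cand <= n_max:
                t = measure(cand)
                if t > best:
                    n, best, moved = cand, t, True
                    break
        if not moved:
            step //= 2  # contract the step, as direct search methods do
    return n, best

# Illustrative throughput model: gains from parallelism saturate,
# then per-stream overhead dominates (purely synthetic numbers).
def mock_throughput(n):
    return min(n * 1.2, 10.0) - 0.05 * n

n_opt, t_opt = direct_search_streams(mock_throughput)
```

The search only compares measured throughputs, so it needs neither an analytical model nor historical data, which is the property the paper exploits.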
HTSstation: A Web Application and Open-Access Libraries for High-Throughput Sequencing Data Analysis
David, Fabrice P. A.; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J.; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
2014-01-01
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. Besides, our programming framework empowers developers with the possibility to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch. PMID:24475057
Novel selection methods for DNA-encoded chemical libraries
Chan, Alix I.; McGregor, Lynn M.; Liu, David R.
2015-01-01
Driven by the need for new compounds to serve as biological probes and leads for therapeutic development and the growing accessibility of DNA technologies including high-throughput sequencing, many academic and industrial groups have begun to use DNA-encoded chemical libraries as a source of bioactive small molecules. In this review, we describe the technologies that have enabled the selection of compounds with desired activities from these libraries. These methods exploit the sensitivity of in vitro selection coupled with DNA amplification to overcome some of the limitations and costs associated with conventional screening methods. In addition, we highlight newer techniques with the potential to be applied to the high-throughput evaluation of DNA-encoded chemical libraries. PMID:25723146
A recently developed, commercially available, open-air, surface sampling ion source for mass spectrometers provides individual analyses in several seconds. To realize its full throughput potential, an autosampler and field sample carrier were designed and built. The autosampler ...
Computational toxicology and in silico modeling of embryogenesis
High-throughput screening (HTS) is providing a rich source of in vitro data for predictive toxicology. ToxCast™ HTS data presently covers 1060 broad-use chemicals and captures >650 in vitro features for diverse biochemical and receptor binding activities, multiplexed reporter gen...
To address this need, new tools have been created for characterizing, simulating, and evaluating chemical biokinetics. Physiologically-based pharmacokinetic (PBPK) models provide estimates of chemical exposures that produce potentially hazardous tissue concentrations, while tissu...
Pipeline for illumination correction of images for high-throughput microscopy.
Singh, S; Bray, M-A; Jones, T R; Carpenter, A E
2014-12-01
The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
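A retrospective flavor of such correction can be sketched as follows: estimate the illumination function as the normalized per-pixel mean over many images of the screen, then divide each image by it. This is a deliberately simplified stand-in for the CellProfiler-based pipeline described above (plain nested lists, no smoothing or robust averaging step):

```python
def estimate_illumination(images):
    """Estimate the illumination function as the per-pixel mean across
    many images, normalized to a mean of 1 (retrospective correction)."""
    h, w = len(images[0]), len(images[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for img in images:
        for r in range(h):
            for c in range(w):
                acc[r][c] += img[r][c]
    n = len(images)
    flat = [acc[r][c] / n for r in range(h) for c in range(w)]
    mean = sum(flat) / len(flat)
    # Normalizing to mean 1 preserves the overall intensity scale.
    return [[(acc[r][c] / n) / mean for c in range(w)] for r in range(h)]

def correct(img, illum):
    """Divide out the estimated illumination function."""
    return [[img[r][c] / illum[r][c] for c in range(len(img[0]))]
            for r in range(len(img))]
```

Dividing by an illumination function whose mean is 1 flattens the field without changing the average fluorescence intensity, so downstream intensity-based readouts remain comparable.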
Yi, Ming; Zhao, Yongmei; Jia, Li; He, Mei; Kebebew, Electron; Stephens, Robert M.
2014-01-01
To apply exome-seq-derived variants in the clinical setting, there is an urgent need to identify the best variant caller(s) from a large collection of available options. We used an Illumina exome-seq dataset as a benchmark, with two validation scenarios (family pedigree information and SNP array data for the same samples) permitting global high-throughput cross-validation, to evaluate the quality of SNP calls derived from several popular variant discovery tools from both the open-source and commercial communities, using a set of designated quality metrics. To the best of our knowledge, this is the first large-scale performance comparison of exome-seq variant discovery tools using high-throughput validation with both Mendelian inheritance checking and SNP array data. This approach allows us to gain insights into the accuracy of SNP calling in an unprecedented way, whereas previously reported comparison studies have only assessed concordance among tools without directly assessing the quality of the derived SNPs. More importantly, the main purpose of our study was to establish a reusable procedure that applies high-throughput validation to compare the quality of SNP discovery tools with a focus on exome-seq, and that can be used to compare any forthcoming tool(s) of interest. PMID:24831545
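The Mendelian-inheritance side of such high-throughput validation reduces to a simple consistency test per trio genotype. The helpers below are a hypothetical sketch for autosomal biallelic sites (phasing, de novo mutations, and missing calls are ignored):

```python
def mendelian_consistent(child, mother, father):
    """True if the child's genotype can be explained by inheriting one
    allele from each parent (autosomal, biallelic site).

    Genotypes are pairs of allele strings, e.g. ('A', 'G').
    """
    a, b = child
    return ((a in mother and b in father) or
            (b in mother and a in father))

def concordance(trio_calls):
    """Fraction of trio genotype calls passing the Mendelian check,
    usable as a quality metric when comparing variant callers."""
    ok = sum(mendelian_consistent(c, m, f) for c, m, f in trio_calls)
    return ok / len(trio_calls)
```

Ranking callers by this concordance (alongside SNP-array agreement) gives a caller-agnostic quality signal that requires no gold-standard truth set.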
Pan, Yuchen; Sackmann, Eric K; Wypisniak, Karolina; Hornsby, Michael; Datwani, Sammy S; Herr, Amy E
2016-12-23
High-quality immunoreagents enhance the performance and reproducibility of immunoassays and, in turn, the quality of both biological and clinical measurements. High quality recombinant immunoreagents are generated using antibody-phage display. One metric of antibody quality - the binding affinity - is quantified through the dissociation constant (KD) of each recombinant antibody and the target antigen. To characterize the KD of recombinant antibodies and target antigen, we introduce affinity electrophoretic mobility shift assays (EMSAs) in a high-throughput format suitable for small volume samples. A microfluidic card comprised of free-standing polyacrylamide gel (fsPAG) separation lanes supports 384 concurrent EMSAs in 30 s using a single power source. Sample is dispensed onto the microfluidic EMSA card by acoustic droplet ejection (ADE), which reduces EMSA variability compared to sample dispensing using manual or pin tools. The KD for each of a six-member fragment antigen-binding fragment library is reported using ~25-fold less sample mass and ~5-fold less time than conventional heterogeneous assays. Given the form factor and performance of this micro- and mesofluidic workflow, we have developed a sample-sparing, high-throughput, solution-phase alternative for biomolecular affinity characterization.
High-throughput and functional SNP detection assays for oleic and linolenic acids in soybean
USDA-ARS?s Scientific Manuscript database
Soybean is a primary source of vegetable oil, accounting for 53% of the total vegetable oil consumption in the USA in 2013. Soybean oil with high oleic acid and low linolenic acid content is desired, because it not only improves the oxidative stability of the oil, but also reduces the amount of unde...
Source-to-Dose Modeling of Phthalates: Lessons for Prioritization
Globally there is a need to characterize potential risk to human health and the environment that arises from the manufacture and use of tens of thousands of chemicals. The US EPA is developing methods for using computational chemistry, high-throughput screening, and toxicogenomi...
An Automated, High-Throughput System for GISAXS and GIWAXS Measurements of Thin Films
NASA Astrophysics Data System (ADS)
Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander
Grazing incidence small-angle X-ray scattering (GISAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn, and measures each according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility, and will be available to users via a web portal that facilitates highly parallelized analysis.
Fast multiclonal clusterization of V(D)J recombinations from high-throughput sequencing.
Giraud, Mathieu; Salson, Mikaël; Duez, Marc; Villenet, Céline; Quief, Sabine; Caillault, Aurélie; Grardel, Nathalie; Roumier, Christophe; Preudhomme, Claude; Figeac, Martin
2014-05-28
V(D)J recombinations in lymphocytes are essential for immunological diversity. They are also useful markers of pathologies. In leukemia, they are used to quantify the minimal residual disease during patient follow-up. However, the full breadth of lymphocyte diversity is not fully understood. We propose new algorithms that process high-throughput sequencing (HTS) data to extract unnamed V(D)J junctions and gather them into clones for quantification. This analysis is based on a seed heuristic and is fast and scalable because, in the first phase, no alignment is performed with germline database sequences. The algorithms were applied to TRγ HTS data from a patient with acute lymphoblastic leukemia, and to data simulating hypermutations. Our methods identified the main clone, as well as additional clones that were not identified with standard protocols. The proposed algorithms provide new insight into the analysis of high-throughput sequencing data for leukemia and into the quantitative assessment of any immunological profile. The methods described here are implemented in a C++ open-source program called Vidjil.
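As a toy illustration of the alignment-free, seed-based idea (not Vidjil's actual spaced-seed and junction-window algorithm, which is implemented in C++), reads sharing any exact k-mer seed can be merged into clones with a single union-find pass:

```python
def seed_clusters(reads, k=10):
    """Cluster reads without alignment: reads sharing any exact k-mer
    seed are merged into one clone (union-find over seed hits)."""
    parent = list(range(len(reads)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # k-mer -> first read index carrying it
    for idx, read in enumerate(reads):
        for p in range(len(read) - k + 1):
            kmer = read[p:p + k]
            if kmer in seen:
                union(idx, seen[kmer])
            else:
                seen[kmer] = idx

    clones = {}
    for idx in range(len(reads)):
        clones.setdefault(find(idx), []).append(idx)
    return sorted(clones.values())
```

Because no germline alignment is performed, the cost is linear in total read length, which is what makes this family of heuristics fast and scalable.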
Rioualen, Claire; Da Costa, Quentin; Chetrit, Bernard; Charafe-Jauffret, Emmanuelle; Ginestier, Christophe
2017-01-01
High-throughput RNAi screenings (HTS) allow quantifying the impact of the deletion of each gene on any particular function, from virus-host interactions to cell differentiation. However, fewer functional analysis tools have been developed specifically for RNAi analyses. HTS-Net, a network-based analysis program, was developed to identify gene regulatory modules impacted in high-throughput screenings by integrating transcription factor-target gene interaction data (regulome) and protein-protein interaction networks (interactome) on top of screening z-scores. HTS-Net produces exhaustive HTML reports for results navigation and exploration. HTS-Net is a new pipeline for RNA interference screening analyses that performs better than simple gene ranking by z-scores, re-prioritizing genes and placing them in their biological context, as shown by the three studies that we reanalyzed. Formatted input data for the three studied datasets, source code, and a web site for testing the system are available from the companion web site at http://htsnet.marseille.inserm.fr/. We also compared our program with existing algorithms (CARD and hotnet2). PMID:28949986
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
High-throughput literature mining to support read-across ...
Building scientific confidence in the development and evaluation of read-across remains an ongoing challenge. Approaches include establishing systematic frameworks to identify sources of uncertainty and ways to address them. One source of uncertainty is related to characterizing biological similarity. Many research efforts are underway, such as structuring mechanistic data in adverse outcome pathways and investigating the utility of high-throughput (HT)/high-content (HC) screening data. A largely untapped resource for read-across to date is the biomedical literature. This information has the potential to support read-across by facilitating the identification of valid source analogues with similar biological and toxicological profiles, as well as providing the mechanistic understanding for any prediction made. A key challenge in using the biomedical literature is to convert and translate its unstructured form into a computable format that can be linked to chemical structure. We developed a novel text-mining strategy to represent literature information for read-across. Keywords were used to organize literature into toxicity signatures at the chemical level. These signatures were integrated with HT in vitro data and curated chemical structures. A rule-based algorithm assessed the strength of the literature relationship, providing a mechanism to rank and visualize the signature as literature ToxPIs (LitToxPIs). LitToxPIs were developed for over 6,000 chemicals for a varie...
Nile Red Detection of Bacterial Hydrocarbons and Ketones in a High-Throughput Format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinzon, NM; Aukema, KG; Gralnick, JA
A method for use in high-throughput screening of bacteria for the production of long-chain hydrocarbons and ketones by monitoring fluorescent light emission in the presence of Nile red is described. Nile red has previously been used to screen for polyhydroxybutyrate (PHB) and fatty acid esters, but this is the first report of screening for recombinant bacteria making hydrocarbons or ketones. The microtiter plate assay was evaluated using wild-type and recombinant strains of Shewanella oneidensis and Escherichia coli expressing the enzyme OleA, previously shown to initiate hydrocarbon biosynthesis. The strains expressing exogenous Stenotrophomonas maltophilia oleA, with increased levels of ketone production, as determined by gas chromatography-mass spectrometry, were distinguished with Nile red fluorescence. Confocal microscopy images of S. oneidensis oleA-expressing strains stained with Nile red were consistent with a membrane localization of the ketones. This differed from Nile red staining of bacterial PHB or algal lipid droplets that showed intracellular inclusion bodies. These results demonstrated the applicability of Nile red in a high-throughput technique for the detection of bacterial hydrocarbons and ketones. IMPORTANCE In recent years, there has been renewed interest in advanced biofuel sources such as bacterial hydrocarbon production. Previous studies used solvent extraction of bacterial cultures followed by gas chromatography-mass spectrometry (GC-MS) to detect and quantify ketones and hydrocarbons (Beller HR, Goh EB, Keasling JD, Appl. Environ. Microbiol. 76: 1212-1223, 2010; Sukovich DJ, Seffernick JL, Richman JE, Gralnick JA, Wackett LP, Appl. Environ. Microbiol. 76: 3850-3862, 2010). While these analyses are powerful and accurate, their labor-intensive nature makes them intractable to high-throughput screening; therefore, methods for rapid identification of bacterial strains that are overproducing hydrocarbons are needed.
The use of high-throughput evaluation of bacterial and algal hydrophobic molecule production via Nile red fluorescence from lipids and esters was extended in this study to include hydrocarbons and ketones. This work demonstrated accurate, high-throughput detection of high-level bacterial long-chain ketone and hydrocarbon production by screening for increased fluorescence of the hydrophobic dye Nile red.
USDA Potato Small RNA Database
USDA-ARS?s Scientific Manuscript database
Small RNAs (sRNAs) are now understood to be involved in gene regulation, function and development. High throughput sequencing (HTS) of sRNAs generates large data sets for analyzing the abundance, source and roles for specific sRNAs. These sRNAs result from transcript degradation as well as specific ...
VIRTUAL EMBRYO: SYSTEMS MODELING IN DEVELOPMENTAL TOXICITY - Symposium: SOT 2012
High-throughput screening (HTS) studies are providing a rich source of data that can be applied to in vitro profiling of chemical compounds for biological activity and potential toxicity. Chemical profiling in ToxCast covered 965 drugs-chemicals in over 500 diverse assays testing...
Chang, Yun-Chorng; Lu, Sih-Chen; Chung, Hsin-Chan; Wang, Shih-Ming; Tsai, Tzung-Da; Guo, Tzung-Fang
2013-01-01
Various infra-red and planar chiral metamaterials were fabricated using the modified Nanospherical-Lens Lithography. When the light source is replaced with a hand-held ultraviolet lamp, its asymmetric light emission pattern produces elliptical-shaped photoresist holes after passing through the spheres. The long axis of the ellipse is parallel to the lamp direction. The fabricated ellipse arrays exhibit localized surface plasmon resonance in the mid-infra-red and are ideal platforms for surface enhanced infra-red absorption (SEIRA). We also demonstrate a way to design and fabricate complicated patterns by tuning parameters in each exposure step. This method is both high-throughput and low-cost, making it a powerful tool for future infra-red metamaterials applications. PMID:24284941
Mining high-throughput experimental data to link gene and function
Blaby-Haas, Crysten E.; de Crécy-Lagard, Valérie
2011-01-01
Nearly 2200 genomes encoding some 6 million proteins have now been sequenced. Around 40% of these proteins are of unknown function even when function is loosely and minimally defined as “belonging to a superfamily”. In addition to in silico methods, the swelling stream of high-throughput experimental data can give valuable clues for linking these “unknowns” with precise biological roles. The goal is to develop integrative data-mining platforms that allow the scientific community at large to access and utilize this rich source of experimental knowledge. To this end, we review recent advances in generating whole-genome experimental datasets, where this data can be accessed, and how it can be used to drive prediction of gene function. PMID:21310501
Schnoes, Alexandra M.; Ream, David C.; Thorman, Alexander W.; Babbitt, Patricia C.; Friedberg, Iddo
2013-01-01
The ongoing functional annotation of proteins relies upon the work of curators to capture experimental findings from scientific literature and apply them to protein sequence and structure data. However, with the increasing use of high-throughput experimental assays, a small number of experimental studies dominate the functional protein annotations collected in databases. Here, we investigate just how prevalent is the “few articles - many proteins” phenomenon. We examine the experimentally validated annotation of proteins provided by several groups in the GO Consortium, and show that the distribution of proteins per published study is exponential, with 0.14% of articles providing the source of annotations for 25% of the proteins in the UniProt-GOA compilation. Since each of the dominant articles describes the use of an assay that can find only one function or a small group of functions, this leads to substantial biases in what we know about the function of many proteins. Mass spectrometry, microscopy and RNAi experiments dominate high-throughput experiments. Consequently, the functional information derived from these experiments is mostly of the subcellular location of proteins, and of the participation of proteins in embryonic developmental pathways. For some organisms, the information provided by different studies overlaps by a large amount. We also show that the information provided by high-throughput experiments is less specific than that provided by low-throughput experiments. Given the experimental techniques available, certain biases in protein function annotation due to high-throughput experiments are unavoidable. Knowing that these biases exist and understanding their characteristics and extent is important for database curators, developers of function annotation programs, and anyone who uses protein function annotation data to plan experiments. PMID:23737737
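The "few articles - many proteins" skew reported above can be quantified with a small helper that returns the smallest fraction of articles accounting for a target share of annotations (illustrative code, not from the paper):

```python
def article_concentration(proteins_per_article, share=0.25):
    """Smallest fraction of articles that together account for at least
    `share` of all annotated proteins, sorting articles by yield."""
    counts = sorted(proteins_per_article, reverse=True)
    total = sum(counts)
    need = share * total
    acc = 0
    for i, c in enumerate(counts, start=1):
        acc += c
        if acc >= need:
            return i / len(counts)
    return 1.0
```

Applied to UniProt-GOA-style per-article counts, a return value of 0.0014 at share=0.25 would correspond to the paper's headline figure of 0.14% of articles sourcing 25% of protein annotations.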
NASA Astrophysics Data System (ADS)
Lin, W.; Noormets, A.; domec, J.; King, J. S.; Sun, G.; McNulty, S.
2012-12-01
Wood stable isotope ratios (δ13C and δ18O) offer insight into water source and plant water use efficiency (WUE), which in turn provide a glimpse of potential plant responses to changing climate, particularly rainfall patterns. The synthetic pathways of cell wall deposition in wood rings differ in their discrimination ratios between the light and heavy isotopes, and α-cellulose is broadly seen as the best indicator of plant water status due to its local and temporal fixation and to its high abundance within the wood. To use the effects of recent severe droughts on the WUE of loblolly pine (Pinus taeda) throughout the Southeastern USA as a harbinger of future changes, an effort has been undertaken to sample the entire range of the species and to sample the isotopic composition in a consistent manner. To accommodate the large number of samples required by this analysis, we have developed a new high-throughput method for α-cellulose extraction, which is the rate-limiting step in such an endeavor. Although an entire family of methods has been developed and performs well, their throughput in a typical research lab setting is limited to 16-75 samples per week with intensive labor input. The resin exclusion step in conifers is particularly time-consuming. We have combined recent advances in α-cellulose extraction from plant ecology and wood science, including a high-throughput extraction device developed in the Potsdam Dendro Lab and a simple chemical-based resin exclusion method. Transferring the entire extraction process to a multiport-based system allows throughputs of up to several hundred samples in two weeks, while minimizing labor requirements to 2-3 days per batch of samples.
Doyle, Conor J; Gleeson, David; O'Toole, Paul W; Cotter, Paul D
2017-01-15
In pasture-based systems, changes in dairy herd habitat due to seasonality result in the exposure of animals to different environmental niches. These niches contain distinct microbial communities that may be transferred to raw milk, with potentially important food quality and safety implications for milk producers. It is postulated that the extent to which these microorganisms are transferred could be limited by the inclusion of a teat preparation step prior to milking. High-throughput sequencing on a variety of microbial niches on farms was used to study the patterns of microbial movement through the dairy production chain and, in the process, to investigate the impact of seasonal housing and the inclusion/exclusion of a teat preparation regime on the raw milk microbiota from the same herd over two sampling periods, i.e., indoor and outdoor. Beta diversity and network analyses showed that environmental and milk microbiotas separated depending on whether they were sourced from an indoor or outdoor environment. Within these respective habitats, similarities between the milk microbiota and that of teat swab samples and, to a lesser extent, fecal samples were apparent. Indeed, SourceTracker identified the teat surface as the most significant source of contamination, with herd feces being the next most prevalent source of contamination. In milk from cows grazing outdoors, teat preparation significantly increased the numbers of total bacteria present. In summary, sequence-based microbiota analysis identified possible sources of raw milk contamination and highlighted the influence of environment and farm management practices on the raw milk microbiota. The composition of the raw milk microbiota is an important consideration from both a spoilage perspective and a food safety perspective and has implications for milk targeted for direct consumption and for downstream processing.
Factors that influence contamination have been examined previously, primarily through the use of culture-based techniques. We describe here the extensive application of high-throughput DNA sequencing technologies to study the relationship between the milk production environment and the raw milk microbiota. The results show that the environment in which the herd was kept was the primary driver of the composition of the milk microbiota. Copyright © 2016 American Society for Microbiology.
Mackie, Amanda; Paley, Suzanne; Keseler, Ingrid M; Shearer, Alexander; Paulsen, Ian T; Karp, Peter D
2014-03-01
The sets of compounds that can support growth of an organism are defined by the presence of transporters and metabolic pathways that convert nutrient sources into cellular components and energy for growth. A collection of known nutrient sources can therefore serve both as an impetus for investigating new metabolic pathways and transporters and as a reference for computational modeling of known metabolic pathways. To establish such a collection for Escherichia coli K-12, we have integrated data on the growth or nongrowth of E. coli K-12 obtained from published observations using a variety of individual media and from high-throughput phenotype microarrays into the EcoCyc database. The assembled collection revealed a substantial number of discrepancies between the high-throughput data sets, which we investigated where possible using low-throughput growth assays on soft agar and in liquid culture. We also integrated six data sets describing 16,119 observations of the growth of single-gene knockout mutants of E. coli K-12 into EcoCyc, which are relevant to antimicrobial drug design, provide clues regarding the roles of genes of unknown function, and are useful for validating metabolic models. To make this information easily accessible to EcoCyc users, we developed software for capturing, querying, and visualizing cellular growth assays and gene essentiality data.
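The discrepancy screen described above reduces to comparing per-nutrient growth calls across datasets and flagging conflicts for low-throughput follow-up. A minimal sketch; the dataset names and growth calls below are hypothetical, not drawn from EcoCyc.

```python
# Hypothetical growth calls (nutrient -> grew?) from two high-throughput sources
biolog_calls = {"glucose": True, "citrate": False, "melibiose": True}
plate_calls = {"glucose": True, "citrate": True, "melibiose": True}

def discrepancies(calls_a, calls_b):
    """Nutrients scored by both datasets but with conflicting growth calls,
    i.e. candidates for low-throughput follow-up assays."""
    shared = calls_a.keys() & calls_b.keys()
    return sorted(n for n in shared if calls_a[n] != calls_b[n])

print(discrepancies(biolog_calls, plate_calls))  # ['citrate']
```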
Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz; Strapagiel, Dominik
2017-11-03
High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup.
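Reference curve-based targeted genotyping ultimately assigns each sample to the nearest reference melting curve. A toy sketch of that final step only: real HRM software first normalizes fluorescence and temperature-aligns the curves, and the genotype labels and curve values here are invented.

```python
def call_genotype(sample, references):
    """Assign the genotype whose reference curve has the smallest
    mean squared difference from the sample's (normalized) melting curve.

    sample:     sequence of fluorescence values at fixed temperature points
    references: genotype name -> reference curve at the same points
    """
    def msd(ref):
        return sum((s - r) ** 2 for s, r in zip(sample, ref)) / len(sample)

    genotype = min(references, key=lambda g: msd(references[g]))
    return genotype, msd(references[genotype])

# Hypothetical normalized curves for two homozygous variants
refs = {"AA": [1.0, 0.8, 0.2, 0.0], "GG": [1.0, 0.9, 0.6, 0.1]}
print(call_genotype([1.0, 0.81, 0.22, 0.0], refs))
```

The residual distance is worth reporting alongside the call: samples far from every reference are exactly the ambiguous cases the authors recommend selecting for validation by sequencing.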
Predicting SVOC Emissions into Air and Foods in Support of High-Throughput Exposure Assessment
The release of semi-volatile organic compounds (SVOCs) from consumer articles may be a critical human exposure pathway. In addition, the migration of SVOCs from food packaging materials into foods may also be a dominant source of exposure for some chemicals. Here we describe re...
High Throughput Modeling of Indoor Exposures to Chemicals (SOT)
Risk due to chemical exposure is a function of both chemical hazard and exposure. Proximate sources of exposure due to the presence of a chemical in consumer products (i.e. near-field exposure) are identified as key drivers of exposure and yet are not well quantified or understo...
Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang
2014-03-05
Background: RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. Results: We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity for parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module “miRNA identification” includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module “mRNA identification” includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module “Target screening” provides expression profiling analyses and graphic visualization. The module “Self-testing” offers directory setup, sample management, and a check for third-party package dependency. Integration of other GUIs, including Bowtie, miRDeep2, and miRspring, extends the program’s functionality. Conclusions: eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory. PMID:24593312
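The adapter-removal and read-counting steps handled by the "miRNA identification" module can be illustrated with a deliberately simplified, pure-Python sketch. eRNA itself delegates these steps to external tools such as Bowtie; the exact-match counting below is a toy stand-in for alignment, and the reads and reference names are invented.

```python
def trim_adapter(read, adapter):
    """Drop the 3' adapter and everything after it, if present in the read."""
    i = read.find(adapter)
    return read if i == -1 else read[:i]

def count_reads(reads, references):
    """Toy exact-match read counting against reference sequences
    (real pipelines align with Bowtie and tolerate mismatches)."""
    counts = {name: 0 for name in references}
    for read in reads:
        for name, seq in references.items():
            if read == seq:
                counts[name] += 1
    return counts

raw = ["ACGTTGGAATTC", "ACGTTGGAATTC", "GGGG"]
reads = [trim_adapter(r, "TGGAATTC") for r in raw]
print(count_reads(reads, {"mir-1": "ACGT", "mir-2": "TTTT"}))  # {'mir-1': 2, 'mir-2': 0}
```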
Xi-cam: a versatile interface for data visualization and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandolfi, Ronald J.; Allan, Daniel B.; Arenholz, Elke
2018-05-31
Xi-cam is an extensible platform for data management, analysis and visualization. Xi-cam aims to provide a flexible and extensible approach to synchrotron data treatment as a solution to rising demands for high-volume/high-throughput processing pipelines. The core of Xi-cam is an extensible plugin-based graphical user interface platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/WAXS/GISAXS/GIWAXS, tomography and NEXAFS data. With Xi-cam's 'advanced' mode, data processing steps are designed as a graph-based workflow, which can be executed live, locally or remotely. Remote execution utilizes high-performance computing or de-localized resources, allowing for the effective reduction of high-throughput data. Xi-cam's plugin-based architecture targets cross-facility and cross-technique collaborative development, in support of multi-modal analysis. Xi-cam is open-source and cross-platform, and available for download on GitHub.
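A graph-based workflow of the kind Xi-cam's "advanced" mode describes can be sketched with the standard-library topological sorter. The step names and operations below are hypothetical illustrations, not Xi-cam's actual API.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def run_workflow(steps, deps, data):
    """Run processing steps in dependency order.

    steps: step name -> function taking the accumulated results dict
    deps:  step name -> set of upstream step names it depends on
    """
    results = {"input": data}
    # static_order() yields nodes so every dependency comes before its users
    for name in TopologicalSorter(deps).static_order():
        if name in steps:  # skip the raw "input" node, which has no function
            results[name] = steps[name](results)
    return results

# Hypothetical two-step reduction pipeline
steps = {
    "normalize": lambda r: [x / 10 for x in r["input"]],
    "total": lambda r: sum(r["normalize"]),
}
deps = {"normalize": {"input"}, "total": {"normalize"}}
print(run_workflow(steps, deps, [2, 3, 5])["total"])
```

Because execution order comes from the dependency graph rather than from code layout, the same description can be dispatched to a local or a remote executor, which is the design property the abstract highlights.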
Liu, X-L; Liu, H-N; Tan, P-H
2017-08-01
Resonant Raman spectroscopy requires that the wavelength of the laser used be close to that of an electronic transition. A tunable laser source and a triple spectrometer are usually necessary for resonant Raman profile measurements. However, such a system is complex and has low signal throughput, which limits its wide application by the scientific community. Here, a tunable micro-Raman spectroscopy system based on a supercontinuum laser, transmission grating, tunable filters, and a single-stage spectrometer is introduced to measure the resonant Raman profile. The supercontinuum laser in combination with the transmission grating makes a tunable excitation source with a sub-nanometer bandwidth. Such a system exhibits continuous excitation tunability and high signal throughput. Its good performance and flexible tunability are verified by resonant Raman profile measurements of twisted bilayer graphene, demonstrating its potential for wide application in resonant Raman spectroscopy.
High throughput laser processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harley, Gabriel; Pass, Thomas; Cousins, Peter John
A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.
web cellHTS2: a web-application for the analysis of high-throughput screening data.
Pelz, Oliver; Gilsdorf, Moritz; Boutros, Michael
2010-04-12
The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.
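The normalization and hit-ranking that such packages provide can be illustrated with a robust z-score sketch. Median/MAD scoring is only one of the normalization methods cellHTS2 supports, and the well IDs, values, and threshold below are invented for illustration.

```python
import statistics

def robust_z(values):
    """Median/MAD-based z-scores: robust to the strong hits themselves,
    unlike mean/SD scoring where outliers inflate the spread."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return [(v - med) / (1.4826 * mad) for v in values]

def ranked_hits(wells, values, threshold=2.0):
    """Wells whose absolute robust z-score passes the cutoff, strongest first."""
    scored = zip(wells, robust_z(values))
    hits = [(w, z) for w, z in scored if abs(z) >= threshold]
    return sorted(hits, key=lambda wz: -abs(wz[1]))

# One clear hit (E1) among hypothetical raw well intensities
print(ranked_hits(["A1", "B1", "C1", "D1", "E1"], [10, 11, 9, 10, 30]))
```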
Increased collection efficiency of LIFI high intensity electrodeless light source
NASA Astrophysics Data System (ADS)
Hafidi, Abdeslam; DeVincentis, Marc; Duelli, Markus; Gilliard, Richard
2008-02-01
Recently, RF driven electrodeless high intensity light sources have been implemented successfully in the projection display systems for HDTV and videowall applications. This paper presents advances made in the RF waveguide and electric field concentrator structures with the purpose of reducing effective arc size and increasing light collection. In addition, new optical designs are described that further improve system efficiency. The results of this work demonstrate that projection system light throughput is increased relative to previous implementations and performance is optimized for home theater and other front projector applications that maintain multi-year lifetime without re-lamping, complete spectral range, fast start times and high levels of dynamic contrast due to dimming flexibility in the light source system.
Mining high-throughput experimental data to link gene and function.
Blaby-Haas, Crysten E; de Crécy-Lagard, Valérie
2011-04-01
Nearly 2200 genomes that encode around 6 million proteins have now been sequenced. Around 40% of these proteins are of unknown function, even when function is loosely and minimally defined as 'belonging to a superfamily'. In addition to in silico methods, the swelling stream of high-throughput experimental data can give valuable clues for linking these unknowns with precise biological roles. The goal is to develop integrative data-mining platforms that allow the scientific community at large to access and utilize this rich source of experimental knowledge. To this end, we review recent advances in generating whole-genome experimental datasets, where this data can be accessed, and how it can be used to drive prediction of gene function. Copyright © 2011 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aderogba, S.; Meacham, J.M.; Degertekin, F.L.
2005-05-16
Ultrasonic electrospray ionization (ESI) for high-throughput mass spectrometry is demonstrated using a silicon micromachined microarray. The device uses a micromachined ultrasonic atomizer operating in the 900 kHz-2.5 MHz range for droplet generation and a metal electrode in the fluid cavity for ionization. Since the atomization and ionization processes are separated, the ultrasonic ESI source shows the potential for operation at low voltages with a wide range of solvents, in contrast with conventional capillary ESI technology. This is demonstrated using the ultrasonic ESI microarray to obtain the mass spectrum of a 10 μM reserpine sample on a time-of-flight mass spectrometer with a 197:1 signal-to-noise ratio at an ionization potential of 200 V.
Engelmann, Brett W
2017-01-01
The Src Homology 2 (SH2) domain family primarily recognizes phosphorylated tyrosine (pY) containing peptide motifs. The relative affinity preferences among competing SH2 domains for phosphopeptide ligands define "specificity space," and underpin many functional pY-mediated interactions within signaling networks. The degree of promiscuity exhibited and the dynamic range of affinities supported by individual domains or phosphopeptides is best resolved by a carefully executed and controlled quantitative high-throughput experiment. Here, I describe the fabrication and application of a cellulose-peptide conjugate microarray (CPCMA) platform to the quantitative analysis of SH2 domain specificity space. Included herein are instructions for optimal experimental design with special attention paid to common sources of systematic error, phosphopeptide SPOT synthesis, microarray fabrication, analyte titrations, data capture, and analysis.
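The analyte titrations mentioned above are typically reduced to a dissociation constant by fitting a single-site binding isotherm. A minimal grid-search sketch: real analyses use nonlinear least squares (e.g. scipy.optimize.curve_fit), and the concentrations and candidate Kd grid below are invented.

```python
def fraction_bound(conc, kd):
    """Single-site binding isotherm: fraction of domain bound
    at free ligand concentration conc (same units as kd)."""
    return conc / (kd + conc)

def fit_kd(concs, signals, kd_grid):
    """Pick the Kd on a candidate grid that minimizes squared error
    between observed signals and the isotherm (pure-Python sketch)."""
    def sse(kd):
        return sum((s - fraction_bound(c, kd)) ** 2
                   for c, s in zip(concs, signals))
    return min(kd_grid, key=sse)

# Noise-free synthetic titration generated with a true Kd of 1.0
concs = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
signals = [c / (1.0 + c) for c in concs]
print(fit_kd(concs, signals, [0.25, 0.5, 1.0, 2.0, 4.0]))  # 1.0
```

Running every domain-peptide pair through the same fit is what turns the microarray readout into the quantitative "specificity space" the chapter describes.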
Bladergroen, Marco R.; van der Burgt, Yuri E. M.
2015-01-01
For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071
Multiplexed high resolution soft x-ray RIXS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, Y.-D.; Voronov, D.; Warwick, T.
2016-07-27
High-resolution Resonant Inelastic X-ray Scattering (RIXS) is a technique that allows us to probe the electronic excitations of complex materials with unprecedented precision. However, the RIXS process has a low cross section, compounded by the fact that the optical spectrometers used to analyze the scattered photons can only collect a small solid angle and overall have a small efficiency. Here we present a method to significantly increase the throughput of RIXS systems by energy multiplexing, so that a complete RIXS map of scattered intensity versus photon energy in and photon energy out can be recorded simultaneously. This parallel acquisition scheme should provide a gain in throughput of over 100. A system based on this principle, QERLIN, is under construction at the Advanced Light Source (ALS).
Dreyer, Florian S; Cantone, Martina; Eberhardt, Martin; Jaitly, Tanushree; Walter, Lisa; Wittmann, Jürgen; Gupta, Shailendra K; Khan, Faiz M; Wolkenhauer, Olaf; Pützer, Brigitte M; Jäck, Hans-Martin; Heinzerling, Lucie; Vera, Julio
2018-06-01
Cellular phenotypes are established and controlled by complex and precisely orchestrated molecular networks. In cancer, mutations and dysregulations of multiple molecular factors perturb the regulation of these networks and lead to malignant transformation. High-throughput technologies are a valuable source of information to establish the complex molecular relationships behind the emergence of malignancy, but full exploitation of this massive amount of data requires bioinformatics tools that rely on network-based analyses. In this report we present the Virtual Melanoma Cell, an online tool developed to facilitate the mining and interpretation of high-throughput data on melanoma by biomedical researchers. The platform is based on a comprehensive, manually generated and expert-validated regulatory map composed of signaling pathways important in malignant melanoma. The Virtual Melanoma Cell is a tool designed to accept, visualize and analyze user-generated datasets. It is available at: https://www.vcells.net/melanoma. To illustrate the utilization of the web platform and the regulatory map, we have analyzed a large publicly available dataset accounting for anti-PD1 immunotherapy treatment of malignant melanoma patients. Copyright © 2018 Elsevier B.V. All rights reserved.
Characterization of an atmospheric pressure air plasma source for polymer surface modification
NASA Astrophysics Data System (ADS)
Yang, Shujun; Tang, Jiansheng
2013-10-01
An atmospheric pressure air plasma source was generated through dielectric barrier discharge (DBD). It was used to modify polyethylene terephthalate (PET) surfaces with very high throughput. An equivalent circuit model was used to calculate the peak average electron density. The emission spectrum from the plasma was taken and the main peaks in the spectrum were identified. The ozone density in the downstream plasma region was estimated by absorption spectroscopy. This work was supported by NSF and ARC-ODU.
LIQUID: an open-source software tool for identifying lipids in LC-MS/MS-based lipidomics data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyle, Jennifer E.; Crowell, Kevin L.; Casey, Cameron P.
2017-01-31
We introduce an open-source software, LIQUID, for semi-automated processing and visualization of LC-MS/MS based lipidomics data. LIQUID provides users with the capability to process high throughput data and contains a customizable target library and scoring model per project needs. The graphical user interface provides visualization of multiple lines of spectral evidence for each lipid identification, allowing rapid examination of data for making confident identifications of lipid molecular species.
Read-across is a technique used to fill data gaps within chemical safety assessments. It is based on the premise that chemicals with similar structures are likely to have similar biological activities. Known information on the property of a chemical (source) is used to make a pre...
Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander
2015-01-01
Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC).
DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such it enables plant scientists to spend more time on science rather than on technology. All stored and computed data is easily accessible to the public and broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.
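Trait extraction of the kind DIRT automates starts from a segmented (binary) root image. A deliberately toy sketch of two such traits; DIRT's actual trait definitions and image processing are far richer, and the mask below is invented.

```python
def root_traits(mask):
    """Two toy RSA traits from a binary root mask (rows = increasing depth):
    root pixel density, and rooting depth as the deepest row that
    contains at least one root pixel (0 if the mask is empty)."""
    flat = [px for row in mask for px in row]
    density = sum(flat) / len(flat)
    depth = max((i for i, row in enumerate(mask) if any(row)), default=-1) + 1
    return {"density": density, "depth": depth}

# 3x3 hypothetical mask: roots in the top two rows only
print(root_traits([[0, 1, 0],
                   [0, 1, 1],
                   [0, 0, 0]]))
```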
Devailly, Guillaume; Mantsoki, Anna; Joshi, Anagha
2016-11-01
Better protocols and decreasing costs have made high-throughput sequencing experiments accessible even to small experimental laboratories. However, comparing one or a few experiments generated by an individual lab to the vast amount of relevant data freely available in the public domain might be limited due to lack of bioinformatics expertise. Though several tools, including genome browsers, allow such comparison at a single-gene level, they do not provide a genome-wide view. We developed Heat*seq, a web tool that allows genome-scale comparison of high-throughput experiments (chromatin immunoprecipitation followed by sequencing, RNA-sequencing and Cap Analysis of Gene Expression) provided by a user to the data in the public domain. Heat*seq currently contains over 12,000 experiments across diverse tissues and cell types in human, mouse and drosophila. Heat*seq displays interactive correlation heatmaps, with an ability to dynamically subset datasets to contextualize user experiments. High quality figures and tables are produced and can be downloaded in multiple formats. Web application: http://www.heatstarseq.roslin.ed.ac.uk/ Source code: https://github.com/gdevailly Contact: Guillaume.Devailly@roslin.ed.ac.uk or Anagha.Joshi@roslin.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
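The correlation heatmaps at the core of such tools are built from all-vs-all correlations of per-experiment signal vectors (e.g. binned coverage or expression values). A minimal Pearson sketch; the experiment names and vectors below are invented.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(experiments):
    """All-vs-all Pearson correlations between named signal vectors,
    the raw material for a clustered correlation heatmap."""
    names = sorted(experiments)
    return {(a, b): pearson(experiments[a], experiments[b])
            for a in names for b in names}

m = correlation_matrix({"user": [1, 2, 3, 4],
                        "pub1": [2, 4, 6, 8],
                        "pub2": [4, 3, 2, 1]})
print(m[("pub1", "user")], m[("pub2", "user")])
```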
Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G
2013-01-01
Research objective To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. 
Conclusions: End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into a standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
Strategic and Operational Plan for Integrating Transcriptomics ...
Plans for incorporating high throughput transcriptomics into the current high throughput screening activities at NCCT; the details are in the attached slide presentation, given at the OECD meeting on June 23, 2016.
High-Throughput Experimental Approach Capabilities | Materials Science |
NREL's high-throughput experimental capabilities target materials in the upper right sector (highlighted in yellow), and include (...,Te) and oxysulfide sputtering as well as the Combi-5 chamber for nitride and oxynitride sputtering. We also have several non...
Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart
2010-03-01
MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.
High count-rate study of two TES x-ray microcalorimeters with different transition temperatures
NASA Astrophysics Data System (ADS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.; Sadleir, John E.; Smith, Stephen J.; Wassell, Edward J.
2017-10-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures (Tc). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate, where the thermal conductance of the microcalorimeter is dependent upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different Tc values had thermal decay times that differed by approximately one order of magnitude. In our earlier report, we achieved energy resolutions of 1.6 and 2.3 eV at 6 keV from the lower and higher Tc devices, respectively, using a standard analysis method based on optimal filtering in the low flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the so-called event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96% throughput for 6 keV x-rays at 1025 Hz per pixel with the higher Tc (faster) device, and 5.8 eV FWHM with 97% throughput with the lower Tc (slower) device at 722 Hz.
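The event-grade idea described above can be sketched in a few lines: each pulse is graded by its temporal isolation from neighboring pulses, and each grade can then be processed with a filter length suited to it, trading resolution against throughput. The grade names, gap thresholds, and function below are illustrative assumptions, not the paper's actual parameters.

```python
def grade_events(arrival_times, high_gap=50e-3, mid_gap=10e-3):
    """Assign a grade to each pulse based on the time to its nearest neighbor.

    Pulses far from both neighbors ("high") can use a long optimal filter;
    crowded pulses ("low") get a shorter filter or are discarded, trading
    energy resolution for throughput. Gaps are in seconds (illustrative).
    """
    times = sorted(arrival_times)
    grades = []
    for i, t in enumerate(times):
        prev_gap = t - times[i - 1] if i > 0 else float("inf")
        next_gap = times[i + 1] - t if i < len(times) - 1 else float("inf")
        gap = min(prev_gap, next_gap)
        if gap >= high_gap:
            grades.append("high")
        elif gap >= mid_gap:
            grades.append("mid")
        else:
            grades.append("low")
    return grades

# Two isolated pulses and a pile-up pair in the middle
print(grade_events([0.0, 0.2, 0.205, 0.5]))  # → ['high', 'low', 'low', 'high']
```

In a real pipeline the grade would select among pre-computed optimal filters of different lengths rather than simply labeling the pulse.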
2007-12-14
contained dried residues from a collection of terrestrial plants, marine invertebrates, and various fungi. NCI plate numbers, sources of extracts, and ... plants), while Fig. 3B displays results from row G of the same plate. In these examples, wells B3, B5, B9, G9, and G12 were selected for further ...
Sources of extracts:
Plate no.   Source              Extraction solvent
96110120    Terrestrial plants  Water
96110125    Terrestrial plants  CH3OH-CH2Cl2
12000707    Marine
High throughput solar cell ablation system
Harley, Gabriel; Pass, Thomas; Cousins, Peter John; Viatella, John
2014-10-14
A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.
High throughput solar cell ablation system
Harley, Gabriel; Pass, Thomas; Cousins, Peter John; Viatella, John
2012-09-11
A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.
Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
Computational methods for evaluation of cell-based data assessment--Bioconductor.
Le Meur, Nolwenn
2013-02-01
Recent advances in miniaturization and automation of technologies have enabled high-throughput cell-based assay screening, bringing along new challenges in data analysis. Automation, standardization, and reproducibility have become requirements for quantitative research. The Bioconductor community has worked in that direction, proposing several R packages to handle high-throughput data, including flow cytometry (FCM) experiments. Altogether, these packages cover the main steps of an FCM analysis workflow, that is, data management, quality assessment, normalization, outlier detection, automated gating, cluster labeling, and feature extraction. Additionally, the open-source philosophy of R and Bioconductor, which offers room for new development, continuously drives research and improvement of these analysis methods, especially in the fields of clustering and data mining. This review presents the principal FCM packages currently available in R and Bioconductor, their advantages, and their limits. Copyright © 2012 Elsevier Ltd. All rights reserved.
High sensitive and throughput screening of Aflatoxin using MALDI-TOF-TOF-PSD-MS/MS
USDA-ARS?s Scientific Manuscript database
We have achieved sensitive and efficient detection of aflatoxin B1 (AFB1) through matrix-assisted laser desorption/ionization time-of-flight-time-of-flight mass spectrometry (MALDI-TOF-TOF) and post-source decay (PSD) tandem mass spectrometry (MS/MS) using an acetic acid – α-cyano-4-hydroxycinnamic a...
An inexpensive autosampler for a DART/TOFMS provides mass spectra from analytes absorbed on 76 cotton swab and wipe samples in 7.5 min. A field sample carrier simplifies sample collection and provides swabs nearly ready for analysis to the lab. Applications of the high throughput pr...
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-based clones to expres...
Lee, Si Hoon; Lindquist, Nathan C.; Wittenberg, Nathan J.; Jordan, Luke R.; Oh, Sang-Hyun
2012-01-01
With recent advances in high-throughput proteomics and systems biology, there is a growing demand for new instruments that can precisely quantify a wide range of receptor-ligand binding kinetics in a high-throughput fashion. Here we demonstrate a surface plasmon resonance (SPR) imaging spectroscopy instrument capable of extracting binding kinetics and affinities from 50 parallel microfluidic channels simultaneously. The instrument utilizes large-area (~cm2) metallic nanohole arrays as SPR sensing substrates and combines a broadband light source, a high-resolution imaging spectrometer and a low-noise CCD camera to extract spectral information from every channel in real time with a refractive index resolution of 7.7 × 10−6. To demonstrate the utility of our instrument for quantifying a wide range of biomolecular interactions, each parallel microfluidic channel is coated with a biomimetic supported lipid membrane containing ganglioside (GM1) receptors. The binding kinetics of cholera toxin b (CTX-b) to GM1 are then measured in a single experiment from 50 channels. By combining the highly parallel microfluidic device with large-area periodic nanohole array chips, our SPR imaging spectrometer system enables high-throughput, label-free, real-time SPR biosensing, and its full-spectral imaging capability combined with nanohole arrays could enable integration of SPR imaging with concurrent surface-enhanced Raman spectroscopy. PMID:22895607
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orton, Daniel J.; Tfaily, Malak M.; Moore, Ronald J.
To better understand disease conditions and environmental perturbations, multi-omic studies (i.e. proteomic, lipidomic, metabolomic, etc. analyses) are vastly increasing in popularity. In a multi-omic study, a single sample is typically extracted in multiple ways and numerous analyses are performed using different instruments. Thus, one sample becomes many analyses, making high throughput and reproducible evaluations a necessity. One way to address the numerous samples and varying instrumental conditions is to utilize a flow injection analysis (FIA) system for rapid sample injection. While some FIA systems have been created to address these challenges, many have limitations such as high consumable costs, low pressure capabilities, limited pressure monitoring and fixed flow rates. To address these limitations, we created an automated, customizable FIA system capable of operating at diverse flow rates (~50 nL/min to 500 µL/min) to accommodate low- and high-flow instrument sources. This system can also operate at varying analytical throughputs from 24 to 1200 samples per day to enable different MS analysis approaches. Applications ranging from native protein analyses to molecular library construction were performed using the FIA system. The results from these studies showed a highly robust platform, providing consistent performance over many days without carryover as long as washing buffers specific to each molecular analysis were utilized.
NASA Astrophysics Data System (ADS)
Loisel, G.; Lake, P.; Gard, P.; Dunham, G.; Nielsen-Weber, L.; Wu, M.; Norris, E.
2016-11-01
At Sandia National Laboratories, the x-ray generator Manson source model 5 was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and line intensity is increased for the three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.
Fluorescence lifetime plate reader: Resolution and precision meet high-throughput
Petersen, Karl J.; Peterson, Kurt C.; Muretta, Joseph M.; Higgins, Sutton E.; Gillispie, Gregory D.; Thomas, David D.
2014-01-01
We describe a nanosecond time-resolved fluorescence spectrometer that acquires fluorescence decay waveforms from each well of a 384-well microplate in 3 min with signal-to-noise exceeding 400 using direct waveform recording. The instrument combines high-energy pulsed laser sources (5–10 kHz repetition rate) with a photomultiplier and a high-speed digitizer (1 GHz) to record a fluorescence decay waveform after each pulse. Waveforms acquired from rhodamine or 5-((2-aminoethyl)amino)naphthalene-1-sulfonic acid dyes in a 384-well plate gave lifetime measurements 5- to 25-fold more precise than the simultaneous intensity measurements. Lifetimes as short as 0.04 ns were acquired by interleaving with an effective sample rate of 5 GHz. Lifetime measurements resolved mixtures of single-exponential dyes with better than 1% accuracy. The fluorescence lifetime plate reader enables multiple-well fluorescence lifetime measurements with an acquisition time of 0.5 s per well, suitable for high-throughput fluorescence lifetime screening applications. PMID:25430092
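Recovering a lifetime from a recorded decay waveform reduces, in the single-exponential case, to fitting I(t) = I0·exp(-t/tau). A minimal log-linear least-squares sketch on synthetic data follows; the function name, sampling, and noiseless data are illustrative assumptions, not the instrument's actual analysis code.

```python
import math

def fit_lifetime(times_ns, intensities):
    """Least-squares fit of ln(I) = ln(I0) - t/tau; returns tau in ns.

    Works for noiseless single-exponential decays; real waveforms need
    weighting or nonlinear fitting because the log transform skews noise.
    """
    ys = [math.log(i) for i in intensities]
    n = len(times_ns)
    mx = sum(times_ns) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times_ns, ys)) / \
        sum((x - mx) ** 2 for x in times_ns)
    return -1.0 / slope

# Synthetic 4 ns decay sampled at 1 GHz (1 ns per point)
tau_true = 4.0
ts = [float(i) for i in range(32)]
data = [1000.0 * math.exp(-t / tau_true) for t in ts]
print(round(fit_lifetime(ts, data), 3))  # → 4.0
```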
Neutron reflecting supermirror structure
Wood, J.L.
1992-12-01
An improved neutron reflecting supermirror structure comprising a plurality of stacked sets of bilayers of neutron reflecting materials. The improved neutron reflecting supermirror structure is adapted to provide extremely good performance at high incidence angles, i.e. up to four times the critical angle of standard neutron mirror structures. The reflection of neutrons striking the supermirror structure at a high critical angle provides enhanced neutron throughput, and hence more efficient and economical use of neutron sources. 2 figs.
Neutron reflecting supermirror structure
Wood, James L.
1992-01-01
An improved neutron reflecting supermirror structure comprising a plurality of stacked sets of bilayers of neutron reflecting materials. The improved neutron reflecting supermirror structure is adapted to provide extremely good performance at high incidence angles, i.e. up to four times the critical angle of standard neutron mirror structures. The reflection of neutrons striking the supermirror structure at a high critical angle provides enhanced neutron throughput, and hence more efficient and economical use of neutron sources.
Soulard, Patricia; McLaughlin, Meg; Stevens, Jessica; Connolly, Brendan; Coli, Rocco; Wang, Leyu; Moore, Jennifer; Kuo, Ming-Shang T; LaMarr, William A; Ozbal, Can C; Bhat, B Ganesh
2008-10-03
Several recent reports suggest that stearoyl-CoA desaturase 1 (SCD1), the rate-limiting enzyme in monounsaturated fatty acid synthesis, plays an important role in regulating lipid homeostasis and lipid oxidation in metabolically active tissues. As several manifestations of type 2 diabetes and related metabolic disorders are associated with alterations in intracellular lipid partitioning, pharmacological manipulation of SCD1 activity might be of benefit in the treatment of these disease states. In an effort to identify small molecule inhibitors of SCD1, we have developed a mass spectrometry-based high-throughput screening (HTS) assay using deuterium-labeled stearoyl-CoA substrate and induced rat liver microsomes. The methodology developed allows the use of a nonradioactive substrate, which avoids interference by the endogenous SCD1 substrate and/or product that exist in the non-purified enzyme source. Throughput of the assay was up to twenty 384-well assay plates per day. The assay was linear with protein concentration and time, and was saturable for stearoyl-CoA substrate (Km = 10.5 microM). The assay was highly reproducible, with an average Z' value of 0.6. Conjugated linoleic acid and sterculic acid, known inhibitors of SCD1, exhibited IC50 values of 0.88 and 0.12 microM, respectively. High-throughput mass spectrometry screening of over 1.7 million compounds in compressed format demonstrated that the enzyme target is druggable. A total of 2515 hits were identified (0.1% hit rate), and 346 were confirmed active (>40% inhibition of total SCD activity at 20 microM; 14% confirmation rate). Of the confirmed hits, 172 had IC50 values of <10 microM, including 111 <1 microM and 48 <100 nM. A large number of potent drug-like (MW<450) hits representing six different chemical series were identified. The application of mass spectrometry to high-throughput screening permitted the development of a high-quality screening protocol for an otherwise intractable target, SCD1.
Further medicinal chemistry and characterization of SCD inhibitors should lead to the development of reagents to treat metabolic disorders.
Incorporating High-Throughput Exposure Predictions with ...
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to the oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast™ efforts expand (i.e., Phase II) beyond food-use pesticides towards a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated by the U.S. EPA ExpoCast™ program using probabilistic modeling and Bayesian approaches. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. This HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study, whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, three or 13 chemicals possessed AERs <1 or <100, respectively. Diverse bioactivities across a range of assays and concentrations were also noted across the wider chemical space.
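The activity-to-exposure ratio underlying this ranking strategy is simply the oral equivalent dose divided by the predicted exposure, with the smallest AERs prioritized first. A minimal sketch with invented numbers (the chemical names and values are illustrative only, not from the study):

```python
def rank_by_aer(oeds, exposures):
    """Compute AER = OED / exposure for each chemical, sorted ascending.

    A small AER means bioactive doses are close to (or below) the estimated
    exposure, so the chemical ranks high for risk-based follow-up.
    """
    aers = {chem: oeds[chem] / exposures[chem] for chem in oeds}
    return sorted(aers.items(), key=lambda kv: kv[1])

# Illustrative OEDs and exposure predictions, both in mg/kg/day
oeds = {"chem_A": 0.5, "chem_B": 10.0, "chem_C": 2.0}
exposures = {"chem_A": 1.0, "chem_B": 0.001, "chem_C": 0.05}
for chem, aer in rank_by_aer(oeds, exposures):
    print(chem, aer)  # chem_A first: its AER < 1 flags it for priority
```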
Validation of high-throughput single cell analysis methodology.
Devonshire, Alison S; Baradez, Marc-Olivier; Morley, Gary; Marshall, Damian; Foy, Carole A
2014-05-01
High-throughput quantitative polymerase chain reaction (qPCR) approaches enable profiling of multiple genes in single cells, bringing new insights to complex biological processes and offering opportunities for single cell-based monitoring of cancer cells and stem cell-based therapies. However, workflows with well-defined sources of variation are required for clinical diagnostics and testing of tissue-engineered products. In a study of neural stem cell lines, we investigated the performance of lysis, reverse transcription (RT), preamplification (PA), and nanofluidic qPCR steps at the single cell level in terms of efficiency, precision, and limit of detection. We compared protocols using a separate lysis buffer with cell capture directly in RT-PA reagent. The two methods were found to have similar lysis efficiencies, whereas the direct RT-PA approach showed improved precision. Digital PCR was used to relate preamplified template copy numbers to Cq values and reveal where low-quality signals may affect the analysis. We investigated the impact of calibration and data normalization strategies as a means of minimizing the impact of inter-experimental variation on gene expression values and found that both approaches can improve data comparability. This study provides validation and guidance for the application of high-throughput qPCR workflows for gene expression profiling of single cells. Copyright © 2014 Elsevier Inc. All rights reserved.
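Relating template copy numbers to Cq values, as done with digital PCR above, amounts to fitting a standard curve Cq = slope·log10(copies) + intercept, from which the amplification efficiency follows as E = 10^(-1/slope) - 1. A minimal sketch on a synthetic dilution series (the data and function name are illustrative, not from the study):

```python
import math

def fit_standard_curve(copies, cqs):
    """Linear fit of Cq vs log10(copies); returns (slope, intercept, efficiency)."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(cqs) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, cqs)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

# Synthetic dilution series: Cq drops 3.3 cycles per 10-fold increase in copies,
# corresponding to roughly 100% amplification efficiency
copies = [1e2, 1e3, 1e4, 1e5]
cqs = [33.2, 29.9, 26.6, 23.3]
slope, intercept, eff = fit_standard_curve(copies, cqs)
print(round(slope, 2), round(eff, 2))
```

Samples whose Cq falls far off this curve (or beyond its dynamic range) are the low-quality signals the study flags as potentially affecting the analysis.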
Lu, Xin; Zhang, Xu-Xiang; Wang, Zhu; Huang, Kailong; Wang, Yuan; Liang, Weigang; Tan, Yunfei; Liu, Bo; Tang, Junying
2015-01-01
This study used 454 pyrosequencing, Illumina high-throughput sequencing and metagenomic analysis to investigate bacterial pathogens and their potential virulence in a sewage treatment plant (STP) applying both conventional and advanced treatment processes. Pyrosequencing and Illumina sequencing consistently demonstrated that the Arcobacter genus occupied over 43.42% of the total abundance of potential pathogens in the STP. At the species level, the potential pathogens Arcobacter butzleri, Aeromonas hydrophila and Klebsiella pneumoniae dominated in raw sewage, which was also confirmed by quantitative real-time PCR. Illumina sequencing also revealed the prevalence of various types of pathogenicity islands and virulence proteins in the STP. Most of the potential pathogens and virulence factors were eliminated in the STP, and the removal efficiency mainly depended on the oxidation ditch. Compared with sand filtration, magnetic resin seemed to achieve higher removals of most of the potential pathogens and virulence factors. However, the presence of residual A. butzleri in the final effluent still deserves concern. The findings indicate that sewage acts as an important source of environmental pathogens, but STPs can effectively control their spread in the environment. Joint use of the high-throughput sequencing technologies is considered a reliable method for a deep and comprehensive overview of environmental bacterial virulence. PMID:25938416
Efthymiou, Anastasia; Shaltouki, Atossa; Steiner, Joseph P; Jha, Balendu; Heman-Ackah, Sabrina M; Swistowski, Andrzej; Zeng, Xianmin; Rao, Mahendra S; Malik, Nasir
2014-01-01
Rapid and effective drug discovery for neurodegenerative disease is currently impeded by an inability to source primary neural cells for high-throughput and phenotypic screens. This limitation can be addressed through the use of pluripotent stem cells (PSCs), which can be derived from patient-specific samples and differentiated to neural cells for use in identifying novel compounds for the treatment of neurodegenerative diseases. We have developed an efficient protocol to culture pure populations of neurons, as confirmed by gene expression analysis, in the 96-well format necessary for screens. These differentiated neurons were subjected to viability assays to illustrate their potential in future high-throughput screens. We have also shown that organelles such as nuclei and mitochondria could be live-labeled and visualized through fluorescence, suggesting that we should be able to monitor subcellular phenotypic changes. Neurons derived from a green fluorescent protein-expressing reporter line of PSCs were live-imaged to assess markers of neuronal maturation such as neurite length and co-cultured with astrocytes to demonstrate further maturation. These studies confirm that PSC-derived neurons can be used effectively in viability and functional assays and pave the way for high-throughput screens on neurons derived from patients with neurodegenerative disorders.
Rello, Luis; Aramendía, Maite; Belarra, Miguel A; Resano, Martín
2015-01-01
Dried blood spots (DBS) have become a clinical specimen especially well suited to home-based collection protocols. In this work, high-resolution continuum source graphite furnace atomic absorption spectrometry is evaluated for the direct monitoring of Pb in DBS, both as a quantitative tool and as a screening method. The development of the screening model is based on establishing an unreliability region around the threshold limits, 100 or 50 µg l(-1). More than 500 samples were analyzed to validate the model. The screening method demonstrated high sensitivity (the rate of true positives detected was always higher than 95%), an excellent LOD (1 µg l(-1)) and high throughput (10 min per sample).
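A screening model with an unreliability region classifies each result as negative, positive, or inconclusive according to whether it falls clearly below, clearly above, or within a band around the threshold. The band width, function name, and labels below are illustrative assumptions, not the paper's validated parameters:

```python
def screen_pb(value_ugL, threshold_ugL=100.0, band_fraction=0.2):
    """Classify a Pb result against a threshold with an unreliability region.

    Results inside +/- band_fraction of the threshold cannot be reliably
    called by the fast screen and are flagged for confirmatory analysis.
    """
    lo = threshold_ugL * (1.0 - band_fraction)
    hi = threshold_ugL * (1.0 + band_fraction)
    if value_ugL < lo:
        return "negative"
    if value_ugL > hi:
        return "positive"
    return "inconclusive: confirm quantitatively"

print(screen_pb(40.0))   # → negative
print(screen_pb(150.0))  # → positive
print(screen_pb(95.0))   # → inconclusive: confirm quantitatively
```

Widening the band lowers the false-call rate of the screen at the cost of sending more samples to the slower quantitative method.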
The planning and establishment of a sample preparation laboratory for drug discovery
Dufresne, Claude
2000-01-01
Nature has always been a productive source of new drugs. With the advent of high-throughput screening, it has now become possible to rapidly screen large sample collections. In addition to seeking greater diversity from natural product sources (micro-organisms, plants, etc.), fractionation of the crude extracts prior to screening is becoming a more important part of our efforts. As sample preparation protocols become more involved, automation can help to achieve and maintain a desired sample throughput. To address the needs of our screening program, two robotic systems were designed. The first system processes crude extracts all the way to 96-well plates, containing solutions suitable for screening in biological and biochemical assays. The system can dissolve crude extracts, fractionate them on solid-phase extraction cartridges, dry and weigh each fraction, re-dissolve them to a known concentration, and prepare mother plates. The second system replicates mother plates into a number of daughter plates. PMID:18924691
Automated glycopeptide analysis—review of current state and future directions
Dallas, David C.; Martin, William F.; Hua, Serenus
2013-01-01
Glycosylation of proteins is involved in immune defense, cell–cell adhesion, cellular recognition and pathogen binding and is one of the most common and complex post-translational modifications. Science is still struggling to assign detailed mechanisms and functions to this form of conjugation. Even the structural analysis of glycoproteins—glycoproteomics—remains in its infancy due to the scarcity of high-throughput analytical platforms capable of determining glycopeptide composition and structure, especially platforms for complex biological mixtures. Glycopeptide composition and structure can be determined with high mass-accuracy mass spectrometry, particularly when combined with chromatographic separation, but the sheer volume of generated data necessitates computational software for interpretation. This review discusses the current state of glycopeptide assignment software—advances made to date and issues that remain to be addressed. The various software and algorithms developed so far provide important insights into glycoproteomics. However, there is currently no freely available software that can analyze spectral data in batch and unambiguously determine glycopeptide compositions for N- and O-linked glycopeptides from relevant biological sources such as human milk and serum. Few programs are capable of aiding in structural determination of the glycan component. To significantly advance the field of glycoproteomics, analytical software and algorithms are required that: (i) solve for both N- and O-linked glycopeptide compositions, structures and glycosites in biological mixtures; (ii) are high-throughput and process data in batches; (iii) can interpret mass spectral data from a variety of sources and (iv) are open source and freely available. PMID:22843980
A high throughput mechanical screening device for cartilage tissue engineering.
Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L
2014-06-27
Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments of engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.
'PACLIMS': a component LIM system for high-throughput functional genomic analysis.
Donofrio, Nicole; Rajagopalon, Ravi; Brown, Douglas; Diener, Stephen; Windham, Donald; Nolin, Shelly; Floyd, Anna; Mitchell, Thomas; Galadima, Natalia; Tucker, Sara; Orbach, Marc J; Patel, Gayatri; Farman, Mark; Pampanwar, Vishal; Soderlund, Cari; Lee, Yong-Hwan; Dean, Ralph A
2005-04-12
Recent advances in sequencing techniques leading to cost reduction have resulted in the generation of a growing number of sequenced eukaryotic genomes. Computational tools greatly assist in defining open reading frames and assigning tentative annotations. However, gene functions cannot be asserted without biological support through, among other things, mutational analysis. In taking a genome-wide approach to functionally annotate an entire organism (in this application, the approximately 11,000 predicted genes in the rice blast fungus Magnaporthe grisea), an effective platform was required for tracking and storing both the biological materials created and the data produced across several participating institutions. The platform designed, named PACLIMS, was built to support our high throughput pipeline for generating 50,000 random insertion mutants of Magnaporthe grisea. To be a useful tool for materials and data tracking and storage, PACLIMS was designed to be simple to use, modifiable to accommodate refinement of research protocols, and cost-efficient. Data entry into PACLIMS was simplified through the use of barcodes and scanners, thus reducing the potential human error, time constraints, and labor. This platform was designed in concert with our experimental protocol so that it leads the researchers through each step of the process from mutant generation through phenotypic assays, thus ensuring that every mutant produced is handled in an identical manner and all necessary data is captured. Many sequenced eukaryotes have reached the point where computational analyses are no longer sufficient and require biological support for their predicted genes. Consequently, there is an increasing need for platforms that support high throughput genome-wide mutational analyses. While PACLIMS was designed specifically for this project, the source and ideas present in its implementation can be used as a model for other high throughput mutational endeavors.
'PACLIMS': A component LIM system for high-throughput functional genomic analysis
Donofrio, Nicole; Rajagopalon, Ravi; Brown, Douglas; Diener, Stephen; Windham, Donald; Nolin, Shelly; Floyd, Anna; Mitchell, Thomas; Galadima, Natalia; Tucker, Sara; Orbach, Marc J; Patel, Gayatri; Farman, Mark; Pampanwar, Vishal; Soderlund, Cari; Lee, Yong-Hwan; Dean, Ralph A
2005-01-01
Background: Recent advances in sequencing techniques and the resulting cost reductions have led to a growing number of sequenced eukaryotic genomes. Computational tools greatly assist in defining open reading frames and assigning tentative annotations. However, gene functions cannot be asserted without biological support through, among other things, mutational analysis. Taking a genome-wide approach to functionally annotating an entire organism (in this application, the ~11,000 predicted genes of the rice blast fungus Magnaporthe grisea) required an effective platform for tracking and storing both the biological materials created and the data produced across several participating institutions. Results: The platform designed, named PACLIMS, was built to support our high throughput pipeline for generating 50,000 random insertion mutants of Magnaporthe grisea. To be a useful tool for materials and data tracking and storage, PACLIMS was designed to be simple to use, modifiable to accommodate refinement of research protocols, and cost-efficient. Data entry into PACLIMS was simplified through the use of barcodes and scanners, reducing potential human error, time constraints, and labor. The platform was designed in concert with our experimental protocol so that it leads researchers through each step of the process, from mutant generation through phenotypic assays, ensuring that every mutant produced is handled in an identical manner and all necessary data are captured. Conclusion: Many sequenced eukaryotes have reached the point where computational analyses are no longer sufficient and their predicted genes require biological support. Consequently, there is an increasing need for platforms that support high throughput genome-wide mutational analyses. While PACLIMS was designed specifically for this project, the source and ideas present in its implementation can be used as a model for other high throughput mutational endeavors.
PMID:15826298
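The barcode-driven tracking described above can be caricatured as a keyed record store that walks each mutant through fixed protocol steps. This is a minimal sketch only: all class, field, and step names are hypothetical, and the real PACLIMS is a multi-user LIM system, not an in-memory class.

```python
class MutantTracker:
    """Toy barcode-keyed tracker for insertion mutants (hypothetical interface)."""

    STEPS = ["generated", "archived", "assayed"]  # invented protocol steps

    def __init__(self):
        self._records = {}  # barcode -> record dict

    def register(self, barcode, genotype):
        # Barcodes must be unique; scanning a duplicate is an entry error.
        if barcode in self._records:
            raise ValueError(f"duplicate barcode: {barcode}")
        self._records[barcode] = {"genotype": genotype,
                                  "step": "generated",
                                  "assays": {}}

    def advance(self, barcode):
        # Move the mutant to the next protocol step, if any remain.
        rec = self._records[barcode]
        i = self.STEPS.index(rec["step"])
        if i + 1 < len(self.STEPS):
            rec["step"] = self.STEPS[i + 1]

    def record_assay(self, barcode, assay, result):
        # Phenotypic results may only be attached once the assay step is reached.
        rec = self._records[barcode]
        if rec["step"] != "assayed":
            raise RuntimeError("mutant has not reached the assay step")
        rec["assays"][assay] = result

    def status(self, barcode):
        rec = self._records[barcode]
        return rec["step"], dict(rec["assays"])
```

The point of the sketch is the invariant the abstract describes: every mutant passes through the same steps in the same order, and data capture is tied to the step it belongs to.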
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sauter, Nicholas K., E-mail: nksauter@lbl.gov; Hattne, Johan; Grosse-Kunstleve, Ralf W.
The Computational Crystallography Toolbox (cctbx) is a flexible software platform that has been used to develop high-throughput crystal-screening tools for both synchrotron sources and X-ray free-electron lasers. Plans for data-processing and visualization applications are discussed, and the benefits and limitations of using graphics-processing units are evaluated. Current pixel-array detectors produce diffraction images at extreme data rates (of up to 2 TB h⁻¹) that make severe demands on computational resources. New multiprocessing frameworks are required to achieve rapid data analysis, as it is important to be able to inspect the data quickly in order to guide the experiment in real time. By utilizing readily available web-serving tools that interact with the Python scripting language, it was possible to implement a high-throughput Bragg-spot analyzer (cctbx.spotfinder) that is presently in use at numerous synchrotron-radiation beamlines. Similarly, Python interoperability enabled the production of a new data-reduction package (cctbx.xfel) for serial femtosecond crystallography experiments at the Linac Coherent Light Source (LCLS). Future data-reduction efforts will need to focus on specialized problems such as the treatment of diffraction spots on interleaved lattices arising from multi-crystal specimens. In these challenging cases, accurate modeling of close-lying Bragg spots could benefit from the high-performance computing capabilities of graphics-processing units.
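As a toy illustration of the minimum a Bragg-spot analyzer must do, the following sketch thresholds an image and counts connected bright regions. This is not cctbx.spotfinder's algorithm, which additionally models background, rejects ice rings, and handles detector geometry; it only shows the core thresholding-plus-labeling idea.

```python
import numpy as np

def count_bragg_spots(image, threshold, min_pixels=2):
    """Count 4-connected components of above-threshold pixels with at
    least `min_pixels` pixels. A crude stand-in for a spotfinder."""
    mask = image > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    spots = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one connected bright region.
                stack, size = [(sy, sx)], 0
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # Single hot pixels are more likely noise than reflections.
                if size >= min_pixels:
                    spots += 1
    return spots
```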
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.
Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John
2017-02-01
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
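The Youden-index cutoff selection mentioned above can be sketched as an exhaustive threshold search over the classifier's output scores. The study does not publish its code; this is a generic reimplementation of the standard definition J = sensitivity + specificity − 1.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the score threshold maximizing the Youden index J,
    along with the achieved J. `labels` are 0/1 ground truth."""
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)   # true positive rate
        spec = tn / (tn + fp)   # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

A perfect separation of the two classes yields J = 1, which is what the reported 100 % accuracy implies on the test sets.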
USDA-ARS?s Scientific Manuscript database
A high-throughput Raman chemical imaging method was developed for direct inspection of benzoyl peroxide (BPO) mixed in wheat flour. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source in a push-broom Raman imaging system. Hyperspectral Raman images were collecte...
Huang, Xiaojing; Lauer, Kenneth; Clark, Jesse N.; ...
2015-03-13
We report an experimental ptychography measurement performed in fly-scan mode. With a visible-light laser source, we demonstrate a 5-fold reduction of data acquisition time. By including multiple mutually incoherent modes into the incident illumination, high quality images were successfully reconstructed from blurry diffraction patterns. Thus, this approach significantly increases the throughput of ptychography, especially for three-dimensional applications and the visualization of dynamic systems.
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of synthetic assembled gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-bas...
Ma, Jian; Casey, Cameron P.; Zheng, Xueyun; Ibrahim, Yehia M.; Wilkins, Christopher S.; Renslow, Ryan S.; Thomas, Dennis G.; Payne, Samuel H.; Monroe, Matthew E.; Smith, Richard D.; Teeguarden, Justin G.; Baker, Erin S.; Metz, Thomas O.
2017-01-01
Abstract Motivation: Drift tube ion mobility spectrometry coupled with mass spectrometry (DTIMS-MS) is increasingly implemented in high throughput omics workflows, and new informatics approaches are necessary for processing the associated data. To automatically extract arrival times for molecules measured by DTIMS at multiple electric fields and compute their associated collisional cross sections (CCS), we created the PNNL Ion Mobility Cross Section Extractor (PIXiE). The primary application presented for this algorithm is the extraction of data that can then be used to create a reference library of experimental CCS values for use in high throughput omics analyses. Results: We demonstrate the utility of this approach by automatically extracting arrival times and calculating the associated CCSs for a set of endogenous metabolites and xenobiotics. The PIXiE-generated CCS values were within error of those calculated using commercially available instrument vendor software. Availability and implementation: PIXiE is an open-source tool, freely available on Github. The documentation, source code of the software, and a GUI can be found at https://github.com/PNNL-Comp-Mass-Spec/PIXiE and the source code of the backend workflow library used by PIXiE can be found at https://github.com/PNNL-Comp-Mass-Spec/IMS-Informed-Library. Contact: erin.baker@pnnl.gov or thomas.metz@pnnl.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28505286
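The stepped-field extraction that PIXiE automates rests on the linear relation between arrival time and inverse drift voltage, t = t0 + L²/(K·V), where K is the ion's mobility and t0 the time spent outside the drift region. The sketch below fits that line to recover K and t0; it omits peak detection, units handling, and the conversion of K to a CCS via the Mason-Schamp equation, and is not PIXiE's source code.

```python
import numpy as np

def fit_mobility(drift_voltages, arrival_times, tube_length):
    """Fit arrival time vs. 1/V to a straight line: the slope gives
    L^2/K and the intercept the non-drift dead time t0."""
    inv_v = 1.0 / np.asarray(drift_voltages, dtype=float)
    slope, t0 = np.polyfit(inv_v, np.asarray(arrival_times, dtype=float), 1)
    K = tube_length ** 2 / slope
    return K, t0
```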
fluff: exploratory analysis and visualization of high-throughput sequencing data
Georgiou, Georgios
2016-01-01
Summary. In this article we describe fluff, a software package that allows for simple exploration, clustering and visualization of high-throughput sequencing data mapped to a reference genome. The package contains three command-line tools to generate publication-quality figures in an uncomplicated manner using sensible defaults. Genome-wide data can be aggregated, clustered and visualized in a heatmap, according to different clustering methods. This includes a predefined setting to identify dynamic clusters between different conditions or developmental stages. Alternatively, clustered data can be visualized in a bandplot. Finally, fluff includes a tool to generate genomic profiles. As command-line tools, the fluff programs can easily be integrated into standard analysis pipelines. The installation is straightforward and documentation is available at http://fluff.readthedocs.org. Availability. fluff is implemented in Python and runs on Linux. The source code is freely available for download at https://github.com/simonvh/fluff. PMID:27547532
High-throughput detection of ethanol-producing cyanobacteria in a microdroplet platform.
Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G; Abell, Chris
2015-05-06
Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations, including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, was used to create this new platform for screening genetically engineered strains of cyanobacteria with different levels of ethanol production.
CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.
Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W
2010-09-01
Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
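The combination of per-frame classification with hidden Markov smoothing can be illustrated with a generic Viterbi pass over classifier confidences: self-transitions are favored, so a single noisy frame is overruled by its temporal context. This is a textbook sketch, not CellCognition's implementation; the probabilities are invented.

```python
import numpy as np

def viterbi_smooth(frame_probs, transition, eps=1e-12):
    """Most probable state sequence given per-frame class probabilities
    (T x S) and a state transition matrix (S x S)."""
    logp = np.log(np.asarray(frame_probs) + eps)
    logt = np.log(np.asarray(transition) + eps)
    T, S = logp.shape
    score = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    score[0] = logp[0]
    for t in range(1, T):
        # cand[i, j]: best score ending in state i, then moving to j.
        cand = score[t - 1][:, None] + logt
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(S)] + logp[t]
    # Backtrack from the best final state.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the test below, one frame weakly votes for a different state, and the smoothed path correctly ignores it, which is exactly the "classification noise at state transitions" the abstract describes.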
Mordwinkin, Nicholas M; Burridge, Paul W; Wu, Joseph C
2013-02-01
Drug attrition rates have increased in past years, resulting in growing costs for the pharmaceutical industry and consumers. The reasons for this include the lack of in vitro models that correlate with clinical results and poor preclinical toxicity screening assays. The in vitro production of human cardiac progenitor cells and cardiomyocytes from human pluripotent stem cells provides an amenable source of cells for applications in drug discovery, disease modeling, regenerative medicine, and cardiotoxicity screening. In addition, the ability to derive human-induced pluripotent stem cells from somatic tissues, combined with current high-throughput screening and pharmacogenomics, may help realize the use of these cells to fulfill the potential of personalized medicine. In this review, we discuss the use of pluripotent stem cell-derived cardiomyocytes for drug discovery and cardiotoxicity screening, as well as current hurdles that must be overcome for wider clinical applications of this promising approach.
BIOREL: the benchmark resource to estimate the relevance of the gene networks.
Antonov, Alexey V; Mewes, Hans W
2006-02-06
The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data we demonstrated that such a score ranks the networks fairly with respect to the relevance level. Using BIOREL as the benchmark resource we compared the quality of experimental and theoretically predicted protein interaction data.
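The overall relevance score described above reduces to the fraction of genes with at least one relevant association. A sketch with a hypothetical relevance predicate standing in for BIOREL's integration of biological evidence:

```python
def network_relevance(associations, is_relevant):
    """Fraction of genes with at least one association classified as
    biologically relevant. `associations` maps gene -> list of partners;
    `is_relevant(gene, partner)` is a stand-in for evidence integration."""
    genes = list(associations)
    relevant = sum(
        1 for g in genes
        if any(is_relevant(g, partner) for partner in associations[g])
    )
    return relevant / len(genes) if genes else 0.0
```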
Fair sharing of resources in a supply network with constraints.
Carvalho, Rui; Buzna, Lubos; Just, Wolfram; Helbing, Dirk; Arrowsmith, David K
2012-04-01
This paper investigates the effect of network topology on the fair allocation of network resources among a set of agents, an all-important issue for the efficiency of transportation networks all around us. We analyze a generic mechanism that distributes network capacity fairly among existing flow demands. The problem can be solved by semianalytical methods on a nearest-neighbor graph with one source and sink pair, when transport occurs over shortest paths. For this setup, we uncover a broad range of patterns of intersecting shortest paths as a function of the distance between the source and the sink. When the number of intersections is the maximum and the distance between the source and the sink is large, we find that a fair allocation implies a decrease of at least 50% from the maximum throughput. We also find that the histogram of the flow allocations assigned to the agents decays as a power law with exponent -1. Our semianalytical framework suggests possible explanations for the well-known reduction of the throughput in fair allocations. It also suggests that the combination of network topology and routing rules can lead to highly uneven (but fair) distributions of resources, a remark of caution to network designers.
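The generic fair-allocation mechanism analyzed here is, in its simplest form, max-min fairness by progressive filling: raise all flow rates at the same pace, freezing every flow that crosses a saturated link. The paper treats this semianalytically on lattice graphs; the reimplementation below is only illustrative, and the link and flow names are invented.

```python
def max_min_fair(capacity, routes):
    """Max-min fair rates via progressive filling.
    `capacity` maps link -> capacity; `routes` maps flow -> set of links."""
    rate = {f: 0.0 for f in routes}
    residual = dict(capacity)
    frozen = set()
    while len(frozen) < len(routes):
        # Fair share still available on each link carrying unfrozen flows.
        shares = []
        for link in capacity:
            users = [f for f in routes if link in routes[f] and f not in frozen]
            if users:
                shares.append((residual[link] / len(users), link))
        if not shares:
            break
        inc, bottleneck = min(shares)
        # Raise every unfrozen flow by the smallest fair share.
        for f in routes:
            if f not in frozen:
                rate[f] += inc
                for link in routes[f]:
                    residual[link] -= inc
        # Flows crossing the saturated link can grow no further.
        for f in routes:
            if bottleneck in routes[f]:
                frozen.add(f)
    return rate
```

In the test, the long flow A shares a tight link with B and is capped at 0.5, while C absorbs the slack on its own link: fair, but uneven, as the abstract cautions.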
An open-source computational and data resource to analyze digital maps of immunopeptidomes
Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J.; ...
2015-07-08
We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies.
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed 1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high throughput and low energy consumption data collection for wireless sensor networks.
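Rate-compatible puncturing, the ingredient behind the progressive transmission above, can be illustrated by its bookkeeping alone: a repeating pattern selects which coded bits are transmitted, and the receiver reinserts erasures at the punctured positions until later transmissions fill them in. No convolutional encoding or Viterbi decoding is performed in this sketch; it only shows the puncture/depuncture mechanics.

```python
def puncture(coded_bits, pattern):
    """Keep only positions marked 1 in the repeating puncturing pattern.
    Dropping more bits raises the effective code rate."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

def depuncture(received, pattern, total_len, erasure=None):
    """Reinsert erasures at punctured positions so a decoder can treat
    them as unknown; rate-compatible schemes later replace the erasures
    with the real bits from incremental transmissions."""
    out, it = [], iter(received)
    for i in range(total_len):
        out.append(next(it) if pattern[i % len(pattern)] else erasure)
    return out
```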
Repurposing High-Throughput Image Assays Enables Biological Activity Prediction for Drug Discovery.
Simm, Jaak; Klambauer, Günter; Arany, Adam; Steijaert, Marvin; Wegner, Jörg Kurt; Gustin, Emmanuel; Chupakhin, Vladimir; Chong, Yolanda T; Vialard, Jorge; Buijnsters, Peter; Velter, Ingrid; Vapirev, Alexander; Singh, Shantanu; Carpenter, Anne E; Wuyts, Roel; Hochreiter, Sepp; Moreau, Yves; Ceulemans, Hugo
2018-05-17
In both academia and the pharmaceutical industry, large-scale assays for drug discovery are expensive and often impractical, particularly for the increasingly important physiologically relevant model systems that require primary cells, organoids, whole organisms, or expensive or rare reagents. We hypothesized that data from a single high-throughput imaging assay can be repurposed to predict the biological activity of compounds in other assays, even those targeting alternate pathways or biological processes. Indeed, quantitative information extracted from a three-channel microscopy-based screen for glucocorticoid receptor translocation was able to predict assay-specific biological activity in two ongoing drug discovery projects. In these projects, repurposing increased hit rates by 50- to 250-fold over that of the initial project assays while increasing the chemical structure diversity of the hits. Our results suggest that data from high-content screens are a rich source of information that can be used to predict and replace customized biological assays. Copyright © 2018 Elsevier Ltd. All rights reserved.
Information management systems for pharmacogenomics.
Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko
2002-09-01
The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make that data accessible to the laboratory scientist or to the clinician. In this review these challenges and current information technology solutions associated with the management, storage and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.
Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection
NASA Astrophysics Data System (ADS)
Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan
2018-03-01
X-ray imaging is an important tool for quality control since it allows the interior of products to be inspected non-destructively. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at a minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed where the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high quality reconstructions of 400 × 400 reconstruction pixels within 200 ms, thereby meeting the high throughput criteria.
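The Hilbert-transform filtering that this family of reconstruction methods builds on can be sketched with a plain FFT implementation: multiply the spectrum by −i·sign(frequency) and invert. This is generic signal-processing code; the paper's contribution, a neural network that learns the reconstruction filter, is not reproduced here.

```python
import numpy as np

def hilbert_transform(signal):
    """Discrete Hilbert transform of a real 1-D signal via the FFT."""
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal))
    # -i * sign(f) is the Hilbert transfer function; DC is zeroed.
    return np.real(np.fft.ifft(-1j * np.sign(freqs) * spectrum))
```

A quick sanity check is the classic identity that the Hilbert transform of a cosine is the corresponding sine.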
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
Summary With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today’s single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
R classes and methods for SNP array data.
Scharpf, Robert B; Ruczinski, Ingo
2010-01-01
The Bioconductor project is an "open source and open development software project for the analysis and comprehension of genomic data" (1), primarily based on the R programming language. Infrastructure packages, such as Biobase, are maintained by Bioconductor core developers and serve several key roles to the broader community of Bioconductor software developers and users. In particular, Biobase introduces an S4 class, the eSet, for high-dimensional assay data. Encapsulating the assay data as well as meta-data on the samples, features, and experiment in the eSet class definition ensures propagation of the relevant sample and feature meta-data throughout an analysis. Extending the eSet class promotes code reuse through inheritance as well as interoperability with other R packages and is less error-prone. Recently proposed class definitions for high-throughput SNP arrays extend the eSet class. This chapter highlights the advantages of adopting and extending Biobase class definitions through a working example of one implementation of classes for the analysis of high-throughput SNP arrays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loisel, G., E-mail: gploise@sandia.gov; Lake, P.; Gard, P.
2016-11-15
At Sandia National Laboratories, the x-ray generator Manson source model 5 was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy-dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and line intensity is increased for all three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.
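Estimating a source size from a pinhole image relies on simple projection geometry: a source of extent s imaged through a pinhole of diameter d spans roughly s·M + d·(1 + M) on the detector, where M is the ratio of pinhole-to-detector and source-to-pinhole distances. The helper below inverts that relation. It is a back-of-the-envelope textbook estimate, not the paper's 2-pinhole analysis.

```python
def source_size_from_pinhole(image_extent, pinhole_diam,
                             src_to_pinhole, pinhole_to_det):
    """Back out the source size s from a pinhole-camera image extent,
    assuming image_extent = s*M + d*(1 + M) with M = b/a."""
    M = pinhole_to_det / src_to_pinhole           # geometric magnification
    return (image_extent - pinhole_diam * (1.0 + M)) / M
```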
Application of ToxCast High-Throughput Screening and ...
Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.
Cui, Yang; Hanley, Luke
2015-06-01
ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.
Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister
2014-05-01
The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland FIMM was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high throughput screening. The initial focus of the unit was multiwell plate-based chemical screening and high-content microarray-based siRNA screening. Over the first four years of operation, however, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell plate-based assays with a wide range of readout possibilities and a focus on ultraminiaturization to keep screening affordable for academic users. In addition to high throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high throughput applications for other types of research such as genomics, sequencing and biobanking operations. Importantly, given the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.
High Throughput Screening For Hazard and Risk of Environmental Contaminants
High throughput toxicity testing provides detailed mechanistic information on the concentration response of environmental contaminants in numerous potential toxicity pathways. High throughput screening (HTS) has several key advantages: (1) expense orders of magnitude less than an...
Hoeflinger, Jennifer L; Hoeflinger, Daniel E; Miller, Michael J
2017-01-01
Herein, an open-source method to generate quantitative bacterial growth data from high-throughput microplate assays is described. The bacterial lag time, maximum specific growth rate, doubling time and delta OD are reported. Our method was validated by carbohydrate utilization of lactobacilli, and visual inspection revealed 94% of regressions were deemed excellent. Copyright © 2016 Elsevier B.V. All rights reserved.
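The growth parameters this abstract reports (lag time, maximum specific growth rate, doubling time, delta OD) can be estimated from a microplate OD time series in a few lines. The sketch below is our illustration on a synthetic logistic curve; the sliding-window size and the tangent-intercept lag estimate are common conventions, not necessarily the published method's exact choices:

```python
import numpy as np

def growth_parameters(t, od, window=4):
    """Estimate growth parameters from a background-corrected OD time series (t in h)."""
    log_od = np.log(np.clip(od, 1e-6, None))
    # Maximum specific growth rate: steepest slope of ln(OD) over a sliding window.
    best_mu, best_i = -np.inf, 0
    for i in range(len(t) - window):
        mu = np.polyfit(t[i:i + window], log_od[i:i + window], 1)[0]
        if mu > best_mu:
            best_mu, best_i = mu, i
    mu, b = np.polyfit(t[best_i:best_i + window], log_od[best_i:best_i + window], 1)
    # Lag time: where the tangent at the steepest point meets the initial ln(OD) baseline.
    lag = (log_od[0] - b) / mu
    return {
        "mu_max": mu,                     # 1/h
        "doubling_time": np.log(2) / mu,  # h
        "lag_time": lag,                  # h
        "delta_od": od.max() - od.min(),
    }

# Synthetic logistic growth curve for illustration.
t = np.linspace(0, 12, 25)
od = 0.05 + 0.9 / (1 + np.exp(-1.2 * (t - 5)))
p = growth_parameters(t, od)
```

The same routine applies per well across a 96- or 384-well plate, which is where the throughput gain over manual curve inspection comes from.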
Botha, Sabine; Nass, Karol; Barends, Thomas R M; Kabsch, Wolfgang; Latz, Beatrice; Dworkowski, Florian; Foucar, Lutz; Panepucci, Ezequiel; Wang, Meitian; Shoeman, Robert L; Schlichting, Ilme; Doak, R Bruce
2015-02-01
Recent advances in synchrotron sources, beamline optics and detectors are driving a renaissance in room-temperature data collection. The underlying impetus is the recognition that conformational differences are observed in functionally important regions of structures determined using crystals kept at ambient as opposed to cryogenic temperature during data collection. In addition, room-temperature measurements enable time-resolved studies and eliminate the need to find suitable cryoprotectants. Since radiation damage limits the high-resolution data that can be obtained from a single crystal, especially at room temperature, data are typically collected in a serial fashion using a number of crystals to spread the total dose over the entire ensemble. Several approaches have been developed over the years to efficiently exchange crystals for room-temperature data collection. These include in situ collection in trays, chips and capillary mounts. Here, the use of a slowly flowing microscopic stream for crystal delivery is demonstrated, resulting in extremely high-throughput delivery of crystals into the X-ray beam. This free-stream technology, which was originally developed for serial femtosecond crystallography at X-ray free-electron lasers, is here adapted to serial crystallography at synchrotrons. By embedding the crystals in a high-viscosity carrier stream, high-resolution room-temperature studies can be conducted at atmospheric pressure using the unattenuated X-ray beam, thus permitting the analysis of small or weakly scattering crystals. The high-viscosity extrusion injector is described, as is its use to collect high-resolution serial data from native and heavy-atom-derivatized lysozyme crystals at the Swiss Light Source using less than half a milligram of protein crystals. The room-temperature serial data allow de novo structure determination. The crystal size used in this proof-of-principle experiment was dictated by the available flux density. However, upcoming developments in beamline optics, detectors and synchrotron sources will enable the use of true microcrystals. This high-throughput, high-dose-rate methodology provides a new route to investigating the structure and dynamics of macromolecules at ambient temperature.
High Throughput Transcriptomics: From screening to pathways
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
NASA Astrophysics Data System (ADS)
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-12-01
Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, in portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize the performance of known ones. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface effects, doping and metal mixtures, and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening workflow is proposed to obtain high-performance battery materials.
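A first-tier screen of the kind the review critiques, thresholds on bulk capacity, voltage and volume change only, can be expressed in a few lines. The candidate entries and cutoff values below are illustrative, not taken from the review:

```python
# Hypothetical candidate cathodes described by the three classic bulk
# descriptors; names, values, and thresholds are invented for illustration.
candidates = [
    {"name": "A", "voltage_V": 3.9, "capacity_mAh_g": 170, "vol_change_pct": 2.1},
    {"name": "B", "voltage_V": 2.4, "capacity_mAh_g": 230, "vol_change_pct": 1.0},
    {"name": "C", "voltage_V": 4.1, "capacity_mAh_g": 150, "vol_change_pct": 9.5},
]

def passes(c, v_min=3.0, cap_min=140, dvol_max=5.0):
    """First-tier screen on average voltage, gravimetric capacity,
    and volume change on (de)lithiation."""
    return (c["voltage_V"] >= v_min
            and c["capacity_mAh_g"] >= cap_min
            and c["vol_change_pct"] <= dvol_max)

shortlist = [c["name"] for c in candidates if passes(c)]
```

The review's point is that such a filter is necessary but not sufficient; later tiers would add defect, interface, doping and nanosize descriptors before a candidate is shortlisted.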
OLEDs for lighting: new approaches
NASA Astrophysics Data System (ADS)
Duggal, Anil R.; Foust, Donald F.; Nealon, William F.; Heller, Christian M.
2004-02-01
OLED technology has improved to the point where it is now possible to envision developing OLEDs as a low cost solid state light source. In order to realize this, significant advances have to be made in device efficiency, lifetime at high brightness, high throughput fabrication, and the generation of illumination quality white light. In this talk, the requirements for general lighting will be reviewed and various approaches to meeting them will be outlined. Emphasis will be placed on a new monolithic series-connected OLED design architecture that promises scalability without high fabrication cost or design complexity.
High Throughput Experimental Materials Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakutayev, Andriy; Perkins, John; Schwarting, Marcus
The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable the discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).
A high throughput screen for biomining cellulase activity from metagenomic libraries.
Mewis, Keith; Taupp, Marcus; Hallam, Steven J
2011-02-01
Cellulose, the most abundant source of organic carbon on the planet, has wide-ranging industrial applications with increasing emphasis on biofuel production (1). Chemical methods to modify or degrade cellulose typically require strong acids and high temperatures. As such, enzymatic methods have become prominent in the bioconversion process. While the identification of active cellulases from bacterial and fungal isolates has been somewhat effective, the vast majority of microbes in nature resist laboratory cultivation. Environmental genomic, also known as metagenomic, screening approaches have great promise in bridging the cultivation gap in the search for novel bioconversion enzymes. Metagenomic screening approaches have successfully recovered novel cellulases from environments as varied as soils (2), buffalo rumen (3) and the termite hind-gut (4) using carboxymethylcellulose (CMC) agar plates stained with congo red dye (based on the method of Teather and Wood (5)). However, the CMC method is limited in throughput, is not quantitative and manifests a low signal-to-noise ratio (6). Other methods have been reported (7,8), but each uses an agar plate-based assay, which is undesirable for high-throughput screening of large-insert genomic libraries. Here we present a solution-based screen for cellulase activity using a chromogenic dinitrophenol (DNP)-cellobioside substrate (9). Our library was cloned into the pCC1 copy control fosmid to increase assay sensitivity through copy number induction (10). The method uses one-pot chemistry in 384-well microplates with the final readout provided as an absorbance measurement. This readout is quantitative, sensitive and automated, with a throughput of up to one hundred 384-well plates per day using a liquid handler and plate reader with attached stacking system.
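Hit selection from an absorbance readout like this one typically scores each well against control wells. A minimal sketch using a robust z-score (our choice of normalization, not necessarily the authors'; the data are synthetic):

```python
import numpy as np

def robust_z(plate, neg_controls):
    """Score well absorbances against negative controls using a robust
    z-score (median/MAD), so a few strong hits cannot skew the normalization."""
    med = np.median(neg_controls)
    mad = np.median(np.abs(neg_controls - med)) * 1.4826  # ~sigma for normal data
    return (plate - med) / mad

rng = np.random.default_rng(0)
neg = rng.normal(0.08, 0.01, size=32)    # background absorbance (no-insert controls)
plate = rng.normal(0.08, 0.01, size=384)
plate[[5, 200]] += 0.30                  # two cellulase-positive clones, spiked in
z = robust_z(plate, neg)
hits = np.flatnonzero(z > 5)             # wells well above background
```

With a liquid handler feeding plates through, the same scoring runs unchanged across an entire fosmid library.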
NASA Astrophysics Data System (ADS)
Guo, Baoshan; Lei, Cheng; Ito, Takuro; Yaxiaer, Yalikun; Kobayashi, Hirofumi; Jiang, Yiyue; Tanaka, Yo; Ozeki, Yasuyuki; Goda, Keisuke
2017-02-01
The development of reliable, sustainable, and economical sources of alternative fuels is an important but challenging goal for the world. As an alternative to liquid fossil fuels, microalgal biofuel is expected to play a key role in reducing the detrimental effects of global warming since microalgae absorb atmospheric CO2 via photosynthesis. Unfortunately, conventional analytical methods only provide population-averaged lipid contents and fail to characterize a diverse population of microalgal cells with single-cell resolution in a noninvasive and interference-free manner. Here we demonstrate high-throughput label-free single-cell screening of lipid-producing microalgal cells with optofluidic time-stretch quantitative phase microscopy. In particular, we use Euglena gracilis - an attractive microalgal species that produces wax esters (suitable for biodiesel and aviation fuel after refinement) within lipid droplets. Our optofluidic time-stretch quantitative phase microscope is based on an integration of a hydrodynamic-focusing microfluidic chip, an optical time-stretch phase-contrast microscope, and a digital image processor equipped with machine learning. As a result, it provides both the opacity and phase contents of every single cell at a high throughput of 10,000 cells/s. We characterize heterogeneous populations of E. gracilis cells under two different culture conditions to evaluate their lipid production efficiency. Our method holds promise as an effective analytical tool for microalgae-based biofuel production.
High-throughput SNP-genotyping analysis of the relationships among Ponto-Caspian sturgeon species
Rastorguev, Sergey M; Nedoluzhko, Artem V; Mazur, Alexander M; Gruzdeva, Natalia M; Volkov, Alexander A; Barmintseva, Anna E; Mugue, Nikolai S; Prokhortchouk, Egor B
2013-01-01
Legally certified sturgeon fisheries require population protection and conservation methods, including DNA tests to identify the source of valuable sturgeon roe. However, the available genetic data are insufficient to distinguish between different sturgeon populations, and are even unable to distinguish between some species. We performed high-throughput single-nucleotide polymorphism (SNP)-genotyping analysis on different populations of Russian (Acipenser gueldenstaedtii), Persian (A. persicus), and Siberian (A. baerii) sturgeon species from the Caspian Sea region (Volga and Ural Rivers), the Azov Sea, and two Siberian rivers. We found that Russian sturgeons from the Volga and Ural Rivers were essentially indistinguishable, but they differed from Russian sturgeons in the Azov Sea, and from Persian and Siberian sturgeons. We identified eight SNPs that were sufficient to distinguish these sturgeon populations with 80% confidence, and allowed the development of markers to distinguish sturgeon species. Finally, on the basis of our SNP data, we propose that the A. baerii-like mitochondrial DNA found in some Russian sturgeons from the Caspian Sea arose via an introgression event during the Pleistocene glaciation. In the present study, high-throughput genotyping analysis of several sturgeon populations was performed and SNP markers for species identification were defined. A possible explanation for the presence of the baerii-like mitotype in some Russian sturgeons in the Caspian Sea was suggested. PMID:24567827
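Population assignment from a small diagnostic SNP panel like the eight-marker set above is often done by comparing genotype likelihoods under each population's allele frequencies. A generic sketch, assuming Hardy-Weinberg proportions; the population names and every frequency below are invented for illustration, not the paper's data:

```python
import numpy as np

def assign_population(genotype, allele_freq):
    """Assign a sample to the population with the highest genotype
    log-likelihood under Hardy-Weinberg proportions at each SNP.
    genotype: count of the reference allele (0, 1 or 2) per SNP."""
    geno = np.asarray(genotype)
    best, best_ll = None, -np.inf
    for pop, p in allele_freq.items():
        p = np.clip(np.asarray(p, dtype=float), 1e-3, 1 - 1e-3)
        ll = np.sum(np.where(geno == 2, 2 * np.log(p),
                    np.where(geno == 1, np.log(2 * p * (1 - p)),
                             2 * np.log(1 - p))))
        if ll > best_ll:
            best, best_ll = pop, ll
    return best

# Hypothetical reference-allele frequencies at 8 diagnostic SNPs.
freqs = {
    "volga_ural": [0.9, 0.8, 0.9, 0.7, 0.8, 0.9, 0.6, 0.8],
    "azov":       [0.2, 0.3, 0.1, 0.4, 0.2, 0.3, 0.5, 0.2],
}
sample = [2, 2, 2, 1, 2, 2, 1, 2]   # mostly reference alleles
pop = assign_population(sample, freqs)
```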
Evaluation of a High Throughput Starch Analysis Optimised for Wood
Bellasio, Chandra; Fini, Alessio; Ferrini, Francesco
2014-01-01
Starch is the most important long-term reserve in trees, and the analysis of starch is therefore a useful source of physiological information. Currently published protocols for wood starch analysis impose several limitations, such as long procedures and a neutralization step. The high-throughput standard protocols for starch analysis in food and feed represent a valuable alternative. However, they have not been optimised or tested with woody samples, which have particular chemical and structural characteristics, including the presence of interfering secondary metabolites, low reactivity of starch, and low starch content. In this study, a standard method for starch analysis used for food and feed (AOAC standard method 996.11) was optimised to improve precision and accuracy for the analysis of starch in wood. Key modifications were introduced in the digestion conditions and in the glucose assay. The optimised protocol was then evaluated through 430 starch analyses of standards at known starch content, matrix polysaccharides, and wood collected from three organs (roots, twigs, mature wood) of four species (coniferous and flowering plants). The optimised protocol proved to be remarkably precise and accurate (3%), suitable for high-throughput routine analysis (35 samples a day) of specimens with a starch content between 40 mg and 21 µg. Samples may include lignified organs of coniferous and flowering plants and non-lignified organs, such as leaves, fruits and rhizomes. PMID:24523863
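Assays in this family quantify starch via the glucose released by enzymatic digestion, read as absorbance against a standard curve. A minimal sketch of that back-calculation; the standards and absorbances are invented, while the 162/180 factor is the standard anhydroglucose correction for converting free glucose to starch:

```python
import numpy as np

# Hypothetical glucose standards (µg/well) and their assay absorbances.
std_ug  = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
std_abs = np.array([0.02, 0.11, 0.20, 0.38, 0.74])

# Linear standard curve: absorbance = slope * µg_glucose + intercept.
slope, intercept = np.polyfit(std_ug, std_abs, 1)

def starch_ug(sample_abs, dilution=1.0):
    """Convert a sample absorbance to starch mass. The 162/180 factor
    converts free glucose to its anhydroglucose equivalent in starch."""
    glucose = (sample_abs - intercept) / slope * dilution
    return glucose * 162.0 / 180.0

est = starch_ug(0.29)
```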
Hughes, Stephen R; Butt, Tauseef R; Bartolett, Scott; Riedmuller, Steven B; Farrelly, Philip
2011-08-01
The molecular biological techniques for plasmid-based assembly and cloning of gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. High-throughput integrated robotic molecular biology platforms that have the capacity to rapidly clone and express heterologous gene open reading frames in bacteria and yeast and to screen large numbers of expressed proteins for optimized function are an important technology for improving microbial strains for biofuel production. The process involves the production of full-length complementary DNA libraries as a source of plasmid-based clones to express the desired proteins in active form for determination of their functions. Proteins that were identified by high-throughput screening as having desired characteristics are overexpressed in microbes to enable them to perform functions that will allow more cost-effective and sustainable production of biofuels. Because the plasmid libraries are composed of several thousand unique genes, automation of the process is essential. This review describes the design and implementation of an automated integrated programmable robotic workcell capable of producing complementary DNA libraries, colony picking, isolating plasmid DNA, transforming yeast and bacteria, expressing protein, and performing appropriate functional assays. These operations will allow tailoring microbial strains to use renewable feedstocks for production of biofuels, bioderived chemicals, fertilizers, and other coproducts for profitable and sustainable biorefineries. Published by Elsevier Inc.
Optimizing the Energy and Throughput of a Water-Quality Monitoring System.
Olatinwo, Segun O; Joubert, Trudi-H
2018-04-13
This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.
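The harvest-then-transmit trade-off at the heart of the sum-throughput technique can be sketched numerically: spending more of each block on downlink energy gives nodes more to transmit with, but leaves less time to transmit. The model below is the generic WIPT block structure with equal uplink slots; the channel gains and the equal-split rule are our simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def sum_throughput(tau0, gamma):
    """Harvest-then-transmit block: a fraction tau0 carries downlink
    energy; the remainder is split equally among users. User i harvests
    in proportion to tau0 and transmits at rate
    tau_i * log2(1 + gamma_i * tau0 / tau_i)."""
    tau_i = (1.0 - tau0) / len(gamma)
    return float(np.sum(tau_i * np.log2(1.0 + gamma * tau0 / tau_i)))

# Effective harvest-times-uplink channel gains for three sensor nodes
# (illustrative numbers; the "doubly near-far" effect shows up as the
# spread between the strongest and weakest gain).
gamma = np.array([8.0, 4.0, 2.0])

# Grid search over the energy-harvesting fraction.
grid = np.linspace(0.01, 0.99, 981)
rates = [sum_throughput(t, gamma) for t in grid]
best_tau0 = grid[int(np.argmax(rates))]
```

The interior optimum illustrates why a fixed time split is wasteful: too little harvesting starves the nodes, too much leaves no airtime.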
20180311 - High Throughput Transcriptomics: From screening to pathways (SOT 2018)
The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...
Evaluation of Sequencing Approaches for High-Throughput Transcriptomics - (BOSC)
Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. The generation of high-throughput global gene expression...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
High definition infrared chemical imaging of colorectal tissue using a Spero QCL microscope.
Bird, B; Rowlette, J
2017-04-10
Mid-infrared microscopy has become a key technique in the field of biomedical science and spectroscopy. This label-free, non-destructive technique permits the visualisation of a wide range of intrinsic biochemical markers in tissues, cells and biofluids by detection of the vibrational modes of the constituent molecules. Together, infrared microscopy and chemometrics constitute a widely accepted method that can distinguish healthy and diseased states with high accuracy. However, despite the exponential growth of the field and its research worldwide, several barriers currently exist to its full translation into the clinical sphere, namely sample throughput and data management. The advent and incorporation of quantum cascade lasers (QCLs) into infrared microscopes could help propel the field over these remaining hurdles. Such systems offer several advantages over their FT-IR counterparts: a simpler instrument architecture, improved photon flux, use of room-temperature camera systems, and the flexibility of a tunable illumination source. In the current study we explore the use of a QCL infrared microscope to produce high-definition, high-throughput chemical images useful for the screening of biopsied colorectal tissue.
NASA Astrophysics Data System (ADS)
Guo, Baoshan; Lei, Cheng; Ito, Takuro; Jiang, Yiyue; Ozeki, Yasuyuki; Goda, Keisuke
2016-11-01
The world is faced with environmental problems and the energy crisis due to the combustion and depletion of fossil fuels. The development of reliable, sustainable, and economical sources of alternative fuels is an important but challenging goal for the world. As an alternative to liquid fossil fuels, algal biofuel is expected to play a key role in alleviating global warming since algae absorb atmospheric CO2 via photosynthesis. Among various algae for fuel production, Euglena gracilis is an attractive microalgal species as it is known to produce wax ester (good for biodiesel and aviation fuel) within lipid droplets. To date, while there exist many techniques for inducing microalgal cells to produce and accumulate lipid with high efficiency, few analytical methods are available for characterizing a population of such lipid-accumulated microalgae including E. gracilis with high throughput, high accuracy, and single-cell resolution simultaneously. Here we demonstrate a high-throughput optofluidic Euglena gracilis profiler which consists of an optical time-stretch microscope and a fluorescence analyzer on top of an inertial-focusing microfluidic device that can detect fluorescence from lipid droplets in the cell body and provide images of E. gracilis cells simultaneously at a high throughput of 10,000 cells/s. With the multi-dimensional information acquired by the system, we classify nitrogen-sufficient (ordinary) and nitrogen-deficient (lipid-accumulated) E. gracilis cells with a low false positive rate of 1.0%. This method holds promise for evaluating the efficiency of lipid-inducing techniques for biofuel production and is also applicable to identifying biomedical samples such as blood cells and cancer cells.
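The false-positive-rate framing above can be made concrete with a toy threshold classifier on a per-cell lipid feature: fix the threshold so a chosen fraction of ordinary cells are misclassified, then see what miss rate follows. The distributions below are synthetic and the single-feature threshold is a stand-in for the paper's machine-learning classifier:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic per-cell lipid-fluorescence features (arbitrary units).
ordinary  = rng.normal(1.0, 0.3, 10000)   # nitrogen-sufficient cells
deficient = rng.normal(3.0, 0.5, 10000)   # lipid-accumulated cells

# Set the threshold so that only ~1% of ordinary cells exceed it,
# then measure the resulting miss rate on the lipid-accumulated class.
threshold = np.quantile(ordinary, 0.99)
fpr = np.mean(ordinary > threshold)       # should be ~0.01 by construction
miss = np.mean(deficient <= threshold)    # lipid-rich cells we fail to flag
```

Well-separated feature distributions are what allow a 1% false positive rate to coexist with a very low miss rate at 10,000 cells/s.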
High Count-Rate Study of Two TES X-Ray Microcalorimeters With Different Transition Temperatures
NASA Technical Reports Server (NTRS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.;
2017-01-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures T(sub c). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate, where the thermal conductance of the microcalorimeter depends upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different T(sub c) values had very different thermal decay times, approximately one order of magnitude apart. In our earlier report, we achieved energy resolutions of 1.6 and 2.eV at 6 keV from the lower and higher T(sub c) devices, respectively, using a standard analysis method based on optimal filtering in the low-flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high-flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the so-called event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96 percent throughput for 6 keV x-rays at 1025 Hz per pixel with the higher T(sub c) (faster) device, and 5.8 eV FWHM with 97 percent throughput with the lower T(sub c) (slower) device at 722 Hz.
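The event-grade idea, sorting pulses by how close their neighbors arrive so the filter length can be matched to the available clean record, can be illustrated with a toy grader. The grade scheme and thresholds below are our illustration, not the instrument team's actual pipeline:

```python
import numpy as np

def grade_events(times, hi_sep, mid_sep):
    """Grade each pulse by its separation from the nearest neighboring
    pulse: 2 = isolated (full-length optimal filter), 1 = medium
    (shortened filter), 0 = piled-up (flag or discard)."""
    t = np.sort(np.asarray(times, dtype=float))
    gap_prev = np.diff(t, prepend=-np.inf)   # time since previous pulse
    gap_next = np.diff(t, append=np.inf)     # time until next pulse
    nearest = np.minimum(gap_prev, gap_next)
    return np.where(nearest >= hi_sep, 2, np.where(nearest >= mid_sep, 1, 0))

# Pulse arrival times in ms: one isolated pulse, a medium-spaced pair,
# and a piled-up pair (thresholds are illustrative).
grades = grade_events([0.0, 20.0, 22.0, 40.0, 40.3], hi_sep=5.0, mid_sep=1.0)
```

Resolution-throughput trade-offs like the quoted 5.0 eV at 96 percent come from choosing how many of the lower grades to keep.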
Persson, Nils E; Rafshoon, Joshua; Naghshpour, Kaylie; Fast, Tony; Chu, Ping-Hsun; McBride, Michael; Risteen, Bailey; Grover, Martha; Reichmanis, Elsa
2017-10-18
High-throughput discovery of process-structure-property relationships in materials through an informatics-enabled empirical approach is an increasingly utilized technique in materials research due to the rapidly expanding availability of data. Here, process-structure-property relationships are extracted for the nucleation, growth, and deposition of semiconducting poly(3-hexylthiophene) (P3HT) nanofibers used in organic field effect transistors, via high-throughput image analysis. This study is performed using an automated image analysis pipeline combining existing open-source software and new algorithms, enabling the rapid evaluation of structural metrics for images of fibrillar materials, including local orientational order, fiber length density, and fiber length distributions. We observe that microfluidic processing leads to fibers that pack with unusually high density, while sonication yields fibers that pack sparsely with low alignment. This is attributed to differences in their crystallization mechanisms. P3HT nanofiber packing during thin film deposition exhibits behavior suggesting that fibers are confined to packing in two-dimensional layers. We find that fiber alignment, a feature correlated with charge carrier mobility, is driven by increasing fiber length, and that shorter fibers tend to segregate to the buried dielectric interface during deposition, creating potentially performance-limiting defects in alignment. Another barrier to perfect alignment is the curvature of P3HT fibers; we propose a mechanistic simulation of fiber growth that reconciles both this curvature and the log-normal distribution of fiber lengths inherent to the fiber populations under consideration.
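Local orientational order of the kind this image-analysis pipeline measures is commonly summarized by a 2D nematic order parameter over fiber-segment angles. A minimal sketch on synthetic angles; this generic metric is a standard choice, not necessarily the paper's exact definition:

```python
import numpy as np

def orientational_order(theta):
    """2D nematic order parameter for fiber-segment angles (radians):
    ~1 for perfectly aligned fibers, ~0 for isotropic ones. Doubling the
    angle makes theta and theta + pi equivalent, as they are for an
    unoriented fiber segment."""
    return float(np.hypot(np.mean(np.cos(2 * theta)),
                          np.mean(np.sin(2 * theta))))

rng = np.random.default_rng(1)
aligned   = rng.normal(0.0, 0.1, size=5000)     # tightly aligned segments
isotropic = rng.uniform(0.0, np.pi, size=5000)  # random orientations
```

Evaluated in local windows across an image, this single scalar is what lets thousands of micrographs be compared automatically.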
Recent advances in quantitative high throughput and high content data analysis.
Moutsatsos, Ioannis K; Parker, Christian N
2016-01-01
High throughput screening has become a basic technique with which to explore biological systems. Advances in technology, including increased screening capacity, as well as methods that generate multiparametric readouts, are driving the need for improvements in the analysis of data sets derived from such screens. This article covers the recent advances in the analysis of high throughput screening data sets from arrayed samples, as well as the recent advances in the analysis of cell-by-cell data sets derived from image or flow cytometry applications. Screening multiple genomic reagents targeting any given gene creates additional challenges, and so methods that prioritize individual gene targets have been developed. The article reviews many of the open source data analysis methods that are now available and which are helping to define a consensus on the best practices to use when analyzing screening data. As data sets become larger and more complex, the need for easily accessible data analysis tools will continue to grow. The presentation of such complex data sets, to facilitate quality control monitoring and interpretation of the results, will require the development of novel visualizations. In addition, advanced statistical and machine learning algorithms that can help identify patterns, correlations and the best features in massive data sets will be required. The ease of use of these tools will be important, as they will need to be used iteratively by laboratory scientists to improve the outcomes of complex analyses.
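Many of the arrayed-sample analyses this article reviews rest on per-plate normalization; a robust (median/MAD) z-score is a typical primitive because, unlike a mean/SD z-score, it resists the outliers that genuine hits create. A hedged sketch, not code from the article:

```python
import statistics

# Robust z-score for HTS plate normalisation: centre on the plate median
# and scale by the median absolute deviation (MAD). The 1.4826 factor
# makes the MAD consistent with the standard deviation for normal data.

def robust_z(values):
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    scale = 1.4826 * mad
    return [(v - med) / scale for v in values]

plate = [100, 102, 98, 101, 99, 250]  # one strongly active well
z = robust_z(plate)
print(round(z[-1], 1))  # prints 67.2 -- the hit stands out clearly
```

With a mean/SD z-score the active well would inflate the scale estimate and partially mask itself; the median/MAD version avoids that.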
High Throughput Determination of Critical Human Dosing Parameters (SOT)
High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data int...
High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)
High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used to convert high throughput in vitro toxicity screening (HTS) da...
Optimization of high-throughput nanomaterial developmental toxicity testing in zebrafish embryos
Nanomaterial (NM) developmental toxicities are largely unknown. With an extensive variety of NMs available, high-throughput screening methods may be of value for initial characterization of potential hazard. We optimized a zebrafish embryo test as an in vivo high-throughput assay...
Analysis of Active Methylotrophic Communities: When DNA-SIP Meets High-Throughput Technologies.
Taubert, Martin; Grob, Carolina; Howat, Alexandra M; Burns, Oliver J; Chen, Yin; Neufeld, Josh D; Murrell, J Colin
2016-01-01
Methylotrophs are microorganisms ubiquitous in the environment that can metabolize one-carbon (C1) compounds as carbon and/or energy sources. The activity of these prokaryotes impacts biogeochemical cycles within their respective habitats and can determine whether these habitats act as sources or sinks of C1 compounds. Due to the high importance of C1 compounds, not only in biogeochemical cycles, but also for climatic processes, it is vital to understand the contributions of these microorganisms to carbon cycling in different environments. One of the most challenging questions when investigating methylotrophs, but also in environmental microbiology in general, is which species contribute to the environmental processes of interest, or "who does what, where and when?" Metabolic labeling with C1 compounds substituted with (13)C, a technique called stable isotope probing, is a key method to trace carbon fluxes within methylotrophic communities. The incorporation of (13)C into the biomass of active methylotrophs leads to an increase in the molecular mass of their biomolecules. For DNA-based stable isotope probing (DNA-SIP), labeled and unlabeled DNA is separated by isopycnic ultracentrifugation. The ability to specifically analyze DNA of active methylotrophs from a complex background community by high-throughput sequencing techniques, i.e. targeted metagenomics, is the hallmark strength of DNA-SIP for elucidating ecosystem functioning, and a protocol is detailed in this chapter.
2013-06-01
Keywords: Gaussia luciferase reconstitution, high throughput screen, small molecule inhibitors, human prostate carcinoma cells, pharmacokinetics, prostate cancer...cells. The increase in ROS levels is probably due to an induction of a polyamine oxidation pathway and specific small molecule inhibitors of this
A Functional High-Throughput Assay of Myelination in Vitro
2014-07-01
iPS cells derived from human astrocytes. These cell lines will serve as an excellent source of human cells from which our model systems may be...image the 3D rat dorsal root ganglion (DRG) cultures with sufficiently low background as to detect electrically-evoked depolarization events, as...stimulation and recording system specifically for this purpose. Further, we found that the limitations inherent in optimizing speed and FOV may
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye, Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Steven J. Hall; Wenjuan Huang; Kenneth Hammel
2017-01-01
RATIONALE: Carbon dioxide isotope (δ13C value) measurements enable quantification of the sources of soil microbial respiration, thus informing ecosystem C dynamics. Tunable diode lasers (TDLs) can precisely measure CO2 isotopes at low cost and high throughput, but are seldom used for small samples (≤5 mL). We developed a...
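The source partitioning behind such measurements typically uses a two-end-member isotope mixing model: the fraction of respired CO2 from source A is f = (δ_sample − δ_B)/(δ_A − δ_B). A sketch with illustrative end-member values, not the study's data:

```python
# Two-end-member delta13C mixing model for partitioning respired CO2.
# delta values are in per-mil; end members here are illustrative
# (C4-derived residue around -12, C3 soil organic matter around -27).

def source_fraction(delta_sample, delta_a, delta_b):
    """Fraction of the CO2 flux attributable to source A."""
    return (delta_sample - delta_b) / (delta_a - delta_b)

f = source_fraction(-18.0, -12.0, -27.0)
print(round(f, 2))  # 0.6 -> 60% of respired CO2 from the C4 source
```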
Evaluating between-pathway models with expression data.
Hescott, B J; Leiserson, M D M; Cowen, L J; Slonim, D K
2010-03-01
Between-pathway models (BPMs) are network motifs consisting of pairs of putative redundant pathways. In this article, we show how adding another source of high-throughput data--microarray gene expression data from knockout experiments--allows us to identify a compensatory functional relationship between genes from the two BPM pathways. We evaluate the quality of the BPMs from four different studies, and we describe how our methods might be extended to refine pathways.
Adverse outcome pathways (AOPs) to enhance EDC ...
Screening and testing for endocrine active chemicals was mandated under 1996 amendments to the Safe Drinking Water Act and Food Quality Protection Act. Efficiencies can be gained in the endocrine disruptor screening program by using available biological and toxicological knowledge to facilitate greater use of high throughput screening data and other data sources to inform endocrine disruptor assessments. Likewise, existing knowledge, when properly organized, can help aid interpretation of test results. The adverse outcome pathway (AOP) framework, which organizes information concerning measurable changes that link initial biological interactions with a chemical to adverse effects that are meaningful to risk assessment and management, can aid this process. This presentation outlines the ways in which the AOP framework has already been employed to support EDSP and how it may further enhance endocrine disruptor assessments in the future.
Coprolites as a source of information on the genome and diet of the cave hyena
Bon, Céline; Berthonaud, Véronique; Maksud, Frédéric; Labadie, Karine; Poulain, Julie; Artiguenave, François; Wincker, Patrick; Aury, Jean-Marc; Elalouf, Jean-Marc
2012-01-01
We performed high-throughput sequencing of DNA from fossilized faeces to evaluate this material as a source of information on the genome and diet of Pleistocene carnivores. We analysed coprolites derived from the extinct cave hyena (Crocuta crocuta spelaea), and sequenced 90 million DNA fragments from two specimens. The DNA reads enabled a reconstruction of the cave hyena mitochondrial genome with up to a 158-fold coverage. This genome, and those sequenced from extant spotted (Crocuta crocuta) and striped (Hyaena hyaena) hyena specimens, allows for the establishment of a robust phylogeny that supports a close relationship between the cave and the spotted hyena. We also demonstrate that high-throughput sequencing yields data for cave hyena multi-copy and single-copy nuclear genes, and that about 50 per cent of the coprolite DNA can be ascribed to this species. Analysing the data for additional species to indicate the cave hyena diet, we retrieved abundant sequences for the red deer (Cervus elaphus), and characterized its mitochondrial genome with up to a 3.8-fold coverage. In conclusion, we have demonstrated the presence of abundant ancient DNA in the coprolites surveyed. Shotgun sequencing of this material yielded a wealth of DNA sequences for a Pleistocene carnivore and allowed unbiased identification of diet. PMID:22456883
BlackOPs: increasing confidence in variant detection through mappability filtering.
Cabanski, Christopher R; Wilkerson, Matthew D; Soloway, Matthew; Parker, Joel S; Liu, Jinze; Prins, Jan F; Marron, J S; Perou, Charles M; Hayes, D Neil
2013-10-01
Identifying variants using high-throughput sequencing data is currently a challenge because true biological variants can be indistinguishable from technical artifacts. One source of technical artifact results from incorrectly aligning experimentally observed sequences to their true genomic origin ('mismapping') and inferring differences in mismapped sequences to be true variants. We developed BlackOPs, an open-source tool that simulates experimental RNA-seq and DNA whole exome sequences derived from the reference genome, aligns these sequences by custom parameters, detects variants and outputs a blacklist of positions and alleles caused by mismapping. Blacklists contain thousands of artifact variants that are indistinguishable from true variants and, for a given sample, are expected to be almost completely false positives. We show that these blacklist positions are specific to the alignment algorithm and read length used, and BlackOPs allows users to generate a blacklist specific to their experimental setup. We queried the dbSNP and COSMIC variant databases and found numerous variants indistinguishable from mapping errors. We demonstrate how filtering against blacklist positions reduces the number of potential false variants using an RNA-seq glioblastoma cell line data set. In summary, accounting for mapping-caused variants tuned to experimental setups reduces false positives and, therefore, improves genome characterization by high-throughput sequencing.
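The filtering step that BlackOPs enables can be sketched as a set-membership test against the blacklist; the tuple layout below is an assumption for illustration, not BlackOPs' actual file format.

```python
# Drop called variants whose (chromosome, position, alt allele) appears
# in a blacklist of known mapping artefacts. Real pipelines would parse
# VCF; plain tuples keep the idea visible.

def filter_against_blacklist(calls, blacklist):
    """calls: iterable of (chrom, pos, ref, alt) tuples.
    blacklist: set of (chrom, pos, alt) positions/alleles caused by
    mismapping, as produced by a tool like BlackOPs."""
    return [c for c in calls if (c[0], c[1], c[3]) not in blacklist]

blacklist = {("chr1", 12345, "T")}
calls = [("chr1", 12345, "C", "T"),   # artefact -> removed
         ("chr2", 500, "G", "A")]     # kept
print(filter_against_blacklist(calls, blacklist))
# [('chr2', 500, 'G', 'A')]
```

Note the ref allele is not part of the key: the blacklist flags a position/alt combination regardless of what the reference shows.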
Accelerating Adverse Outcome Pathway Development Using ...
The adverse outcome pathway (AOP) concept links molecular perturbations with organism and population-level outcomes to support high-throughput toxicity testing. International efforts are underway to define AOPs and store the information supporting these AOPs in a central knowledgebase, however, this process is currently labor-intensive and time-consuming. Publicly available data sources provide a wealth of information that could be used to define computationally-predicted AOPs (cpAOPs), which could serve as a basis for creating expert-derived AOPs in a much more efficient way. Computational tools for mining large datasets provide the means for extracting and organizing the information captured in these public data sources. Using cpAOPs as a starting point for expert-derived AOPs should accelerate AOP development. Coupling this with tools to coordinate and facilitate the expert development efforts will increase the number and quality of AOPs produced, which should play a key role in advancing the adoption of twenty-first century toxicity testing strategies. This review article describes how effective knowledge management and automated approaches to AOP development can enhance and accelerate the development and use of AOPs. As the principles documented in this review are put into practice, we anticipate that the quality and quantity of AOPs available will increase substantially. This, in turn, will aid in the interpretation of ToxCast and other high-throughput tox
Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E
2010-05-18
Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.
Jung, Seung-Yong; Notton, Timothy; Fong, Erika; ...
2015-01-07
Particle sorting using acoustofluidics has enormous potential but widespread adoption has been limited by complex device designs and low throughput. Here, we report high-throughput separation of particles and T lymphocytes (600 μL min⁻¹) by altering the net sonic velocity to reposition acoustic pressure nodes in a simple two-channel device. The approach is generalizable to other microfluidic platforms for rapid, high-throughput analysis.
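The standing-wave relation behind such devices helps explain the repositioning idea: a half-wavelength resonance across a channel of width w, driven at f = c/(2w), puts a single pressure node at the channel centre, so changing the effective ("net") sonic velocity c of the medium changes which geometry resonates at a fixed drive frequency. A hedged sketch with illustrative numbers, not the authors' device parameters:

```python
# Half-wave acoustofluidic resonance: f = c / (2*w). Lowering the net
# sonic velocity of the medium lowers the resonance frequency for the
# same channel, which is the lever used to shift pressure-node position.

def half_wave_frequency(sound_speed, channel_width):
    """Resonance frequency (Hz) for one half-wavelength across the channel."""
    return sound_speed / (2.0 * channel_width)

f_water = half_wave_frequency(1480.0, 375e-6)  # plain water, 375 um channel
f_mixed = half_wave_frequency(1300.0, 375e-6)  # medium with lower net sonic velocity
print(round(f_water / 1e6, 2), "MHz vs", round(f_mixed / 1e6, 2), "MHz")
```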
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-01-01
Li-ion batteries are a key technology for addressing the global challenge of clean renewable energy and environment pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by the previous high-throughput calculations cannot meet the requirement of practical applications because only capacity, voltage and volume change of bulk were considered. It is important to include more structure–property relationships, such as point defects, surface and interface, doping and metal-mixture and nanosize effects, in high-throughput calculations. In this review, we established quantitative description of structure–property relationships in Li-ion battery materials by the intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure–property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials. PMID:28458737
Hospital economics of the hospitalist.
Gregory, Douglas; Baigelman, Walter; Wilson, Ira B
2003-06-01
To determine the economic impact on the hospital of a hospitalist program and to develop insights into the relative economic importance of variables such as reductions in mean length of stay and cost, improvements in throughput (patients discharged per unit time), payer methods of reimbursement, and the cost of the hospitalist program. The primary data source was Tufts-New England Medical Center in Boston. Patient demographics, utilization, cost, and revenue data were obtained from the hospital's cost accounting system and medical records. The hospitalist admitted and managed all patients during a six-week period on the general medical unit of Tufts-New England Medical Center. Reimbursement, cost, length of stay, and throughput outcomes during this period were contrasted with patients admitted to the unit in the same period in the prior year, in the preceding period, and in the following period. The hospitalist group compared with the control group demonstrated: length of stay reduced to 2.19 days from 3.45 days (p<.001); total hospital costs per admission reduced to 1,775 dollars from 2,332 dollars (p<.001); costs per day increased to 811 dollars from 679 dollars (p<.001); no differences for readmission within 30 days of discharge to extended care facilities. The hospital's expected incremental profitability with the hospitalist was -1.44 dollars per admission excluding incremental throughput effects, and it was most sensitive to changes in the ratio of per diem to case rate reimbursement. Incremental throughput with the hospitalist was estimated at 266 patients annually with an associated incremental profitability of 1.3 million dollars. Hospital interventions designed to reduce length of stay, such as the hospitalist, should be evaluated in terms of cost, throughput, and reimbursement effects. Excluding throughput effects, the hospitalist program was not economically viable due to the influence of per diem reimbursement. 
Throughput improvements occasioned by the hospitalist program with high baseline occupancy levels are substantial and tend to favor a hospitalist program.
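The study's bottom line, that per-diem reimbursement can erase the savings from a shorter stay, is simple arithmetic: the cost saving per admission is offset by revenue lost for the days no longer billed. A sketch in which the LOS and cost figures echo the abstract but the per-diem rate and payer mix are made-up assumptions:

```python
# Illustrative margin arithmetic (assumptions, not the paper's cost
# model): under per-diem reimbursement a shorter stay cuts revenue as
# well as cost, so margin per admission can fall even as costs do.

def margin_change(los_old, los_new, cost_old, cost_new,
                  per_diem_rate, per_diem_share):
    """Change in hospital margin per admission when LOS and cost drop.
    per_diem_share: fraction of revenue paid per day; the case-rate
    remainder is unaffected by LOS."""
    cost_saving = cost_old - cost_new
    revenue_loss = per_diem_share * per_diem_rate * (los_old - los_new)
    return cost_saving - revenue_loss

# LOS 3.45 -> 2.19 days, cost $2,332 -> $1,775 (from the abstract);
# $700/day rate and 65% per-diem share are hypothetical.
delta = margin_change(3.45, 2.19, 2332, 1775, 700, 0.65)
print(round(delta, 2))  # negative -> unprofitable before throughput gains
```

This mirrors the abstract's sensitivity finding: the sign of the result flips with the ratio of per-diem to case-rate reimbursement, while the throughput effect (more admissions per bed) is what restores profitability.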
High-throughput screening (HTS) and modeling of the retinoid ...
Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium, on the application of high-throughput screening and modeling to the retinoid system.
Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)
High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...
High-throughput RAD-SNP genotyping for characterization of sugar beet genotypes
USDA-ARS?s Scientific Manuscript database
High-throughput SNP genotyping provides a rapid way of developing resourceful set of markers for delineating the genetic architecture and for effective species discrimination. In the presented research, we demonstrate a set of 192 SNPs for effective genotyping in sugar beet using high-throughput mar...
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)
Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays. DE DeGroot, RS Thomas, and SO Simmons. National Center for Computational Toxicology, US EPA, Research Triangle Park, NC, USA. The EPA's ToxCast program utilizes a wide variety of high-throughput s...
A quantitative literature-curated gold standard for kinase-substrate pairs
2011-01-01
We describe the Yeast Kinase Interaction Database (KID, http://www.moseslab.csb.utoronto.ca/KID/), which contains high- and low-throughput data relevant to phosphorylation events. KID includes 6,225 low-throughput and 21,990 high-throughput interactions, from greater than 35,000 experiments. By quantitatively integrating these data, we identified 517 high-confidence kinase-substrate pairs that we consider a gold standard. We show that this gold standard can be used to assess published high-throughput datasets, suggesting that it will enable similar rigorous assessments in the future. PMID:21492431
High-Throughput Industrial Coatings Research at The Dow Chemical Company.
Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T
2016-09-12
At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., prepare, process, and evaluate samples in parallel), and miniaturization (i.e., reduce sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, and surface defects, among others, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.
Tiersch, Terrence R.; Yang, Huiping; Hu, E.
2011-01-01
With the development of genomic research technologies, comparative genome studies among vertebrate species are becoming commonplace for human biomedical research. Fish offer unlimited versatility for biomedical research. Extensive studies are done using these fish models, yielding tens of thousands of specific strains and lines, and the number is increasing every day. Thus, high-throughput sperm cryopreservation is urgently needed to preserve these genetic resources. Although high-throughput processing has been widely applied for sperm cryopreservation in livestock for decades, application in biomedical model fishes is still in the concept-development stage because of the limited sample volumes and the biological characteristics of fish sperm. High-throughput processing in livestock was developed based on advances made in the laboratory and was scaled up for increased processing speed, capability for mass production, and uniformity and quality assurance. Cryopreserved germplasm combined with high-throughput processing constitutes an independent industry encompassing animal breeding, preservation of genetic diversity, and medical research. Currently, there is no specifically engineered system available for high-throughput processing of cryopreserved germplasm for aquatic species. This review discusses the concepts of and needs for high-throughput technology for model fishes, proposes approaches for technical development, and surveys future directions of this approach. PMID:21440666
Code of Federal Regulations, 2013 CFR
2013-07-01
... monthly throughput of 10,000 gallons of gasoline or more. 63.11117 Section 63.11117 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gasoline or more. (a) You must comply with the requirements in section § 63.11116(a). (b) Except as...
Code of Federal Regulations, 2012 CFR
2012-07-01
... monthly throughput of less than 10,000 gallons of gasoline. 63.11116 Section 63.11116 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gallons of gasoline. (a) You must not allow gasoline to be handled in a manner that would result in vapor...
Code of Federal Regulations, 2010 CFR
2010-07-01
... monthly throughput of 10,000 gallons of gasoline or more. 63.11117 Section 63.11117 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gasoline or more. (a) You must comply with the requirements in section § 63.11116(a). (b) Except as...
Code of Federal Regulations, 2013 CFR
2013-07-01
... monthly throughput of less than 10,000 gallons of gasoline. 63.11116 Section 63.11116 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gallons of gasoline. (a) You must not allow gasoline to be handled in a manner that would result in vapor...
Code of Federal Regulations, 2011 CFR
2011-07-01
... monthly throughput of less than 10,000 gallons of gasoline. 63.11116 Section 63.11116 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gallons of gasoline. (a) You must not allow gasoline to be handled in a manner that would result in vapor...
Code of Federal Regulations, 2010 CFR
2010-07-01
... monthly throughput of less than 10,000 gallons of gasoline. 63.11116 Section 63.11116 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gallons of gasoline. (a) You must not allow gasoline to be handled in a manner that would result in vapor...
Code of Federal Regulations, 2011 CFR
2011-07-01
... monthly throughput of 10,000 gallons of gasoline or more. 63.11117 Section 63.11117 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gasoline or more. (a) You must comply with the requirements in section § 63.11116(a). (b) Except as...
Code of Federal Regulations, 2012 CFR
2012-07-01
... monthly throughput of 10,000 gallons of gasoline or more. 63.11117 Section 63.11117 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gasoline or more. (a) You must comply with the requirements in section § 63.11116(a). (b) Except as...
GiNA, an Efficient and High-Throughput Software for Horticultural Phenotyping
Diaz-Garcia, Luis; Covarrubias-Pazaran, Giovanny; Schlautman, Brandon; Zalapa, Juan
2016-01-01
Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed, but most of them are expensive, species-dependent, complex to use, and available only for major crops. To overcome such limitations, we present the open-source software GiNA, which is a simple and free tool for measuring horticultural traits such as shape- and color-related parameters of fruits, vegetables, and seeds. GiNA is multiplatform software available in both R and MATLAB® programming languages and uses conventional images from digital cameras with minimal requirements. It can process up to 11 different horticultural morphological traits such as length, width, two-dimensional area, volume, projected skin, surface area, and RGB color, among other parameters. Different validation tests produced highly consistent results under different lighting conditions and camera setups, making GiNA a very reliable platform for high-throughput phenotyping. Five-fold cross-validation correlations between manually generated and GiNA measurements for length and width in cranberry fruits were 0.97 and 0.92, respectively. The same strategy yielded prediction accuracies above 0.83 for color estimates produced from images of cranberries analyzed with GiNA compared to total anthocyanin content (TAcy) of the same fruits measured with the standard methodology of the industry. Our platform provides a scalable, easy-to-use and affordable tool for massive acquisition of phenotypic data of fruits, seeds, and vegetables. PMID:27529547
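The kind of measurement GiNA automates can be illustrated with a toy, dependency-free version: derive an object's length and width from a binary image mask. GiNA itself runs in R/MATLAB on camera images with calibrated units; this sketch only shows the idea in pixels.

```python
# Toy phenotyping measurement: length and width of one object from a
# binary mask via its bounding box (length is the larger dimension).
# A real pipeline would segment the image and calibrate pixels to mm.

def bounding_box_dims(mask):
    """mask: 2D list of 0/1 values. Returns (length, width) in pixels."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return (max(h, w), min(h, w))

berry = [[0, 1, 1, 0],
         [1, 1, 1, 1],
         [0, 1, 1, 0]]
print(bounding_box_dims(berry))  # (4, 3)
```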
Compton backscattered collimated x-ray source
Ruth, R.D.; Huang, Z.
1998-10-20
A high-intensity, inexpensive, and collimated x-ray source is disclosed for applications such as x-ray lithography. An intense pulse from a high-power laser, stored in a high-finesse resonator, repetitively collides nearly head-on with, and Compton backscatters off, a bunched electron beam having relatively low energy and circulating in a compact storage ring. Both the laser and the electron beams are tightly focused and matched at the interaction region inside the optical resonator. The laser-electron interaction not only gives rise to x-rays at the desired wavelength, but also cools and stabilizes the electrons against intrabeam scattering and mutual Coulomb repulsion in the storage ring. This cooling provides a compact, intense bunch of electrons suitable for many applications. In particular, a sufficient amount of x-rays can be generated by this device to make it an excellent and flexible Compton backscattered x-ray (CBX) source for high-throughput x-ray lithography and many other applications. 4 figs.
Compton backscattered collimated x-ray source
Ruth, Ronald D.; Huang, Zhirong
1998-01-01
A high-intensity, inexpensive and collimated x-ray source for applications such as x-ray lithography is disclosed. An intense pulse from a high power laser, stored in a high-finesse resonator, repetitively collides nearly head-on with and Compton backscatters off a bunched electron beam, having relatively low energy and circulating in a compact storage ring. Both the laser and the electron beams are tightly focused and matched at the interaction region inside the optical resonator. The laser-electron interaction not only gives rise to x-rays at the desired wavelength, but also cools and stabilizes the electrons against intrabeam scattering and Coulomb repulsion with each other in the storage ring. This cooling provides a compact, intense bunch of electrons suitable for many applications. In particular, a sufficient amount of x-rays can be generated by this device to make it an excellent and flexible Compton backscattered x-ray (CBX) source for high throughput x-ray lithography and many other applications.
Compton backscattered collimated x-ray source
Ruth, Ronald D.; Huang, Zhirong
2000-01-01
A high-intensity, inexpensive and collimated x-ray source for applications such as x-ray lithography is disclosed. An intense pulse from a high power laser, stored in a high-finesse resonator, repetitively collides nearly head-on with and Compton backscatters off a bunched electron beam, having relatively low energy and circulating in a compact storage ring. Both the laser and the electron beams are tightly focused and matched at the interaction region inside the optical resonator. The laser-electron interaction not only gives rise to x-rays at the desired wavelength, but also cools and stabilizes the electrons against intrabeam scattering and Coulomb repulsion with each other in the storage ring. This cooling provides a compact, intense bunch of electrons suitable for many applications. In particular, a sufficient amount of x-rays can be generated by this device to make it an excellent and flexible Compton backscattered x-ray (CBX) source for high throughput x-ray lithography and many other applications.
Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich
2006-01-01
The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be quite comparable with that of other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have greatly sped up our drug discovery process.
Carmona, Santiago J.; Nielsen, Morten; Schafer-Nielsen, Claus; Mucci, Juan; Altcheh, Jaime; Balouz, Virginia; Tekiel, Valeria; Frasch, Alberto C.; Campetella, Oscar; Buscaglia, Carlos A.; Agüero, Fernán
2015-01-01
Complete characterization of antibody specificities associated to natural infections is expected to provide a rich source of serologic biomarkers with potential applications in molecular diagnosis, follow-up of chemotherapeutic treatments, and prioritization of targets for vaccine development. Here, we developed a highly-multiplexed platform based on next-generation high-density peptide microarrays to map these specificities in Chagas Disease, an exemplar of a human infectious disease caused by the protozoan Trypanosoma cruzi. We designed a high-density peptide microarray containing more than 175,000 overlapping 15mer peptides derived from T. cruzi proteins. Peptides were synthesized in situ on microarray slides, spanning the complete length of 457 parasite proteins with fully overlapped 15mers (1 residue shift). Screening of these slides with antibodies purified from infected patients and healthy donors demonstrated both a high technical reproducibility as well as epitope mapping consistency when compared with earlier low-throughput technologies. Using a conservative signal threshold to classify positive (reactive) peptides we identified 2,031 disease-specific peptides and 97 novel parasite antigens, effectively doubling the number of known antigens and providing a 10-fold increase in the number of fine mapped antigenic determinants for this disease. Finally, further analysis of the chip data showed that optimizing the amount of sequence overlap of displayed peptides can increase the protein space covered in a single chip by at least ∼threefold without sacrificing sensitivity. In conclusion, we show the power of high-density peptide chips for the discovery of pathogen-specific linear B-cell epitopes from clinical samples, thus setting the stage for high-throughput biomarker discovery screenings and proteome-wide studies of immune responses against pathogens. PMID:25922409
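The tiling scheme used on the array (fully overlapping 15-mers with a 1-residue shift) is easy to sketch, and the snippet also shows the trade-off the authors quantify: a larger shift covers the same protein with roughly shift-fold fewer peptides, freeing chip space for proportionally more proteins. The sequence below is invented for illustration.

```python
def tile_peptides(protein, k=15, shift=1):
    """Tile a protein sequence into overlapping k-mers with a given
    residue shift, as on a high-density peptide microarray."""
    peptides = [protein[i:i + k] for i in range(0, len(protein) - k + 1, shift)]
    # Keep the C-terminal k-mer when the shift steps past it.
    if peptides and peptides[-1] != protein[-k:]:
        peptides.append(protein[-k:])
    return peptides

seq = "A" * 10 + "CDEFGHIKLMNPQRS"  # 25-residue toy sequence
print(len(tile_peptides(seq, k=15, shift=1)))  # 11 peptides at a 1-residue shift
print(len(tile_peptides(seq, k=15, shift=4)))  # 4 peptides at a 4-residue shift
```

Every linear epitope shorter than k - shift + 1 residues still appears intact in at least one peptide, which is why modest shifts cost little sensitivity.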
Fixed Delay Interferometry for Doppler Extrasolar Planet Detection
NASA Astrophysics Data System (ADS)
Ge, Jian
2002-06-01
We present a new technique based on fixed delay interferometry for high-throughput, high-precision, and multiobject Doppler radial velocity (RV) surveys for extrasolar planets. The Doppler measurements are conducted by monitoring the stellar fringe phase shifts of the interferometer instead of absorption-line centroid shifts as in state-of-the-art echelle spectroscopy. High Doppler sensitivity is achieved through optimizing the optical delay in the interferometer and reducing photon noise by measuring multiple fringes over a broad band. This broadband operation is performed by coupling the interferometer with a low- to medium-resolution postdisperser. The resulting fringing spectra over the bandpass are recorded on a two-dimensional detector, with fringes sampled in the slit spatial direction and the spectrum sampled in the dispersion direction. The resulting total Doppler sensitivity is, in theory, independent of the dispersing power of the postdisperser, which allows for the development of new-generation RV machines with much reduced size, high stability, and low cost compared to echelles. This technique has the potential to improve RV survey efficiency by 2-3 orders of magnitude over the cross-dispersed echelle spectroscopy approach, which would allow a full-sky RV survey of hundreds of thousands of stars for planets, brown dwarfs, and stellar companions once the instrument is operated as a multiobject instrument and is optimized for high throughput. The simple interferometer response potentially allows this technique to be operated at other wavelengths independent of the popular iodine reference sources, which are actively used in most current echelles for Doppler planet searches, to search for planets around early-type stars, white dwarfs, and M, L, and T dwarfs for the first time. The high throughput of this instrument could also allow investigation of extragalactic objects for RV variations at high precision.
Guo, Baoshan; Lei, Cheng; Ito, Takuro; Jiang, Yiyue; Ozeki, Yasuyuki; Goda, Keisuke
2016-01-01
The development of reliable, sustainable, and economical sources of alternative fuels is an important but challenging goal for the world. As an alternative to liquid fossil fuels, algal biofuel is expected to play a key role in alleviating global warming since algae absorb atmospheric CO2 via photosynthesis. Among various algae for fuel production, Euglena gracilis is an attractive microalgal species as it is known to produce wax ester (good for biodiesel and aviation fuel) within lipid droplets. To date, while there exist many techniques for inducing microalgal cells to produce and accumulate lipid with high efficiency, few analytical methods are available for characterizing a population of such lipid-accumulated microalgae, including E. gracilis, with high throughput, high accuracy, and single-cell resolution simultaneously. Here we demonstrate high-throughput, high-accuracy, single-cell screening of E. gracilis with fluorescence-assisted optofluidic time-stretch microscopy, a method that combines the strengths of microfluidic cell focusing, optical time-stretch microscopy, and the fluorescence detection used in conventional flow cytometry. Specifically, our fluorescence-assisted optofluidic time-stretch microscope consists of an optical time-stretch microscope and a fluorescence analyzer on top of a hydrodynamically focusing microfluidic device, and can detect fluorescence from every E. gracilis cell in a population while simultaneously obtaining its image at a high throughput of 10,000 cells/s. With the multi-dimensional information acquired by the system, we classify nitrogen-sufficient (ordinary) and nitrogen-deficient (lipid-accumulated) E. gracilis cells with a low false-positive rate of 1.0%. This method holds promise for evaluating cultivation techniques and selective breeding for microalgae-based biofuel production.
High power parallel ultrashort pulse laser processing
NASA Astrophysics Data System (ADS)
Gillner, Arnold; Gretzki, Patrick; Büsing, Lasse
2016-03-01
The class of ultra-short-pulse (USP) laser sources is used whenever high-precision, high-quality material processing is demanded. These laser sources deliver pulse durations in the range of ps to fs and are characterized by high peak intensities, leading to direct vaporization of the material with minimal thermal damage. With the availability of industrial laser sources with average powers of up to 1000 W, the main challenge consists of effective energy distribution and deposition. Using lasers with high repetition rates in the MHz region can cause thermal issues such as overheating, melt production, and low ablation quality. In this paper, we discuss different approaches to multibeam processing for the utilization of high pulse energies. The combination of diffractive optics and conventional galvanometer scanners can be used for high-throughput laser ablation, but is limited in optical quality. We show which applications can benefit from this hybrid optic and which improvements in productivity are expected. In addition, the optical limitations of the system are compiled in order to evaluate the suitability of this approach for any given application.
The Correction of Fiber Throughput Variation due to Focal Ratio Degradation
NASA Astrophysics Data System (ADS)
Chen, Jianjun; Bai, Zhongrui; Li, Guangwei; Zhang, Haotong
2014-01-01
The focal ratio degradation (FRD) of optical fibers is a major source of light loss in astronomical multi-fibre instruments like LAMOST (Oliveira, A. C., et al. 2005). Stress and twist during mounting and rotation of the fibers can change the FRD of individual fibers (Clayton 1989), which means that the transmission efficiency of each individual fiber will vary. We investigate such throughput variation among LAMOST fibers and its relation to the intensity of sky emission lines (Garstang 1989) over the full wavelength coverage. On the basis of this work, we present an approach that corrects the varying fiber throughput by measuring the strength of the sky emission lines as a secondary throughput correction.
TCP Throughput Profiles Using Measurements over Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincare map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
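The ramp-up/sustainment abstraction can be sketched as a toy model (parameter values below are illustrative, not taken from the paper's measurements): a transfer of D gigabits spends a fixed number of RTTs ramping up, then proceeds at the sustained rate r, so the average throughput D / (t_ramp + D/r) falls only slowly with RTT when the sustainment phase dominates, which is the concave behavior described above.

```python
def effective_throughput(data_gbits, sustained_gbps, rtt_s, ramp_rtts=20):
    """Average throughput of a flow modeled as a slow-start ramp lasting
    ramp_rtts round trips followed by sustained transfer at sustained_gbps.
    Toy abstraction; data sent during the ramp itself is ignored."""
    ramp_time = ramp_rtts * rtt_s
    transfer_time = data_gbits / sustained_gbps
    return data_gbits / (ramp_time + transfer_time)

# A large transfer stays near the 10 Gbps sustained rate even at long RTTs
for rtt_ms in (10, 100, 366):
    print(round(effective_throughput(1000, 10, rtt_ms / 1000), 2))
```

For large transfers the ramp term is negligible relative to the sustainment phase, so modeled throughput declines only gently across the 0-366 ms RTT range; small transfers, dominated by the ramp, fall off much faster.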
High throughput toxicology programs, such as ToxCast and Tox21, have provided biological effects data for thousands of chemicals at multiple concentrations. Compared to traditional, whole-organism approaches, high throughput assays are rapid and cost-effective, yet they generall...
The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer ...
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
NASA Astrophysics Data System (ADS)
Cox, M. A.; Reed, R.; Mellado, B.
2015-01-01
After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption, and high performance. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration, allowing aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end user. This PU could be used for a variety of high-level functions on the high-throughput raw data, such as spectral analysis and histograms, to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM systems-on-chip, but high data throughput is feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given, and results of performance and throughput testing of four different ARM Cortex systems-on-chip are presented.
HTSeq--a Python framework to work with high-throughput sequencing data.
Anders, Simon; Pyl, Paul Theodor; Huber, Wolfgang
2015-01-15
A large choice of tools exists for many standard tasks in the analysis of high-throughput sequencing (HTS) data. However, once a project deviates from standard workflows, custom scripts are needed. We present HTSeq, a Python library to facilitate the rapid development of such scripts. HTSeq offers parsers for many common data formats in HTS projects, as well as classes to represent data, such as genomic coordinates, sequences, sequencing reads, alignments, gene model information and variant calls, and provides data structures that allow for querying via genomic coordinates. We also present htseq-count, a tool developed with HTSeq that preprocesses RNA-Seq data for differential expression analysis by counting the overlap of reads with genes. HTSeq is released as an open-source software under the GNU General Public Licence and available from http://www-huber.embl.de/HTSeq or from the Python Package Index at https://pypi.python.org/pypi/HTSeq. © The Author 2014. Published by Oxford University Press.
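The counting task htseq-count performs can be illustrated with a self-contained toy (this is not HTSeq's implementation, which uses efficient genomic-array data structures and real BAM/GTF parsers): each read is assigned to a gene only if it overlaps exactly one gene, mirroring in spirit htseq-count's default handling of no-feature and ambiguous reads.

```python
def count_reads(genes, reads):
    """genes: {name: (start, end)} half-open intervals on one chromosome.
    reads: list of (start, end) alignment intervals.
    Returns per-gene counts; reads overlapping zero or more than one gene
    are skipped, like htseq-count's no-feature/ambiguous categories."""
    counts = {name: 0 for name in genes}
    for r_start, r_end in reads:
        hits = [name for name, (g_start, g_end) in genes.items()
                if r_start < g_end and g_start < r_end]
        if len(hits) == 1:        # unique overlap: count it
            counts[hits[0]] += 1  # otherwise: no-feature or ambiguous
    return counts

genes = {"geneA": (0, 100), "geneB": (90, 200)}
reads = [(10, 50), (95, 99), (150, 180), (300, 320)]
print(count_reads(genes, reads))  # {'geneA': 1, 'geneB': 1}
```

Here (95, 99) falls in the geneA/geneB overlap and is discarded as ambiguous, while (300, 320) matches no feature; the resulting per-gene counts are what a differential-expression tool would consume.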
Xi-cam: Flexible High Throughput Data Processing for GISAXS
NASA Astrophysics Data System (ADS)
Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander
With increasing capabilities and data demand for GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or de-localized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel CPU and GPU processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.
High-throughput Crystallography for Structural Genomics
Joachimiak, Andrzej
2009-01-01
Protein X-ray crystallography recently celebrated its 50th anniversary. The structures of myoglobin and hemoglobin determined by Kendrew and Perutz provided the first glimpses into complex protein architecture and chemistry. Since then, the field of structural molecular biology has experienced extraordinary progress, and over 53,000 protein structures have now been deposited into the Protein Data Bank. In the past decade many advances in macromolecular crystallography have been driven by world-wide structural genomics efforts. This was made possible because of third-generation synchrotron sources, structure phasing approaches using anomalous signal, and cryo-crystallography. Complementary progress in molecular biology, proteomics, hardware and software for crystallographic data collection, structure determination and refinement, computer science, databases, robotics and automation improved and accelerated many processes. These advancements provide the robust foundation for structural molecular biology and assure strong contribution to science in the future. In this report we focus mainly on reviewing structural genomics high-throughput X-ray crystallography technologies and their impact. PMID:19765976
High-throughput linear optical stretcher for mechanical characterization of blood cells.
Roth, Kevin B; Neeves, Keith B; Squier, Jeff; Marr, David W M
2016-04-01
This study describes a linear optical stretcher as a high-throughput mechanical property cytometer. Custom, inexpensive, and scalable optics image a linear diode bar source into a microfluidic channel, where cells are hydrodynamically focused into the optical stretcher. Upon entering the stretching region, antipodal optical forces generated by the refraction of tightly focused laser light at the cell membrane deform each cell in flow. Each cell relaxes as it flows out of the trap and is compared to the stretched state to determine deformation. The deformation response of untreated red blood cells and neutrophils was compared to that of chemically treated cells. Statistically significant differences were observed between normal, diamide-treated, and glutaraldehyde-treated red blood cells, as well as between normal and cytochalasin D-treated neutrophils. Based on the behavior of the pure, untreated populations of red cells and neutrophils, a mixed population of these cells was tested and the discrete populations were identified by deformability. © 2015 International Society for Advancement of Cytometry.
Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.; Lupoi, Jason S.; Doepkke, Crissa; Tucker, Melvin P.; Schuster, Logan A.; Mazza, Kimberly; Himmel, Michael E.; Davis, Mark F.; Gjersing, Erica
2015-01-01
The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables. PMID:26437006
Mordwinkin, Nicholas M.; Burridge, Paul W.; Wu, Joseph C.
2013-01-01
Drug attrition rates have increased in past years, resulting in growing costs for the pharmaceutical industry and consumers. The reasons for this include the lack of in vitro models that correlate with clinical results, and poor preclinical toxicity screening assays. The in vitro production of human cardiac progenitor cells and cardiomyocytes from human pluripotent stem cells provides an amenable source of cells for applications in drug discovery, disease modeling, regenerative medicine, and cardiotoxicity screening. In addition, the ability to derive human induced pluripotent stem cells from somatic tissues, combined with current high-throughput screening and pharmacogenomics, may help realize the use of these cells to fulfill the potential of personalized medicine. In this review, we discuss the use of pluripotent stem cell-derived cardiomyocytes for drug discovery and cardiotoxicity screening, as well as current hurdles that must be overcome for wider clinical applications of this promising approach. PMID:23229562
Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.
Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
2015-07-15
Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
High-throughput detection of ethanol-producing cyanobacteria in a microdroplet platform
Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G.; Abell, Chris
2015-01-01
Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, were used to create this new platform to screen genetically engineered strains of cyanobacteria with different levels of ethanol production. PMID:25878135
CSMA Versus Prioritized CSMA for Air-Traffic-Control Improvement
NASA Technical Reports Server (NTRS)
Robinson, Daryl C.
2001-01-01
OPNET version 7.0 simulations are presented involving an important application of the Aeronautical Telecommunications Network (ATN), Controller Pilot Data Link Communications (CPDLC), over the Very High Frequency Data Link, Mode 2 (VDL-2). Communication is modeled for essentially all incoming and outgoing nonstop air-traffic for just three United States cities: Cleveland, Cincinnati, and Detroit. There are 32 airports in the simulation, 29 of which are either sources or destinations for the air-traffic of the aforementioned three airports. The simulation involves 111 Air Traffic Control (ATC) ground stations and 1,235 equally equipped aircraft, taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. Collisionless, Prioritized Carrier Sense Multiple Access (CSMA) is successfully tested and compared with the traditional CSMA typically associated with VDL-2. The performance measures include latency, throughput, and packet loss. As expected, Prioritized CSMA is much quicker and more efficient than traditional CSMA. These simulation results show the potency of Prioritized CSMA for implementing low latency, high throughput, and efficient connectivity.
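The latency advantage of prioritized access can be seen in a deliberately tiny scheduling sketch (this is not a CSMA model: carrier sensing, backoff, collisions, and propagation delay are all omitted, and the frame names are invented): with one frame transmitted per slot, priority ordering moves the urgent ATC frame to the front of the queue.

```python
def completion_slot(order, name):
    """1-based slot in which the named frame finishes, assuming the
    shared channel transmits exactly one frame per slot."""
    return order.index(name) + 1

# Four frames contending for the channel, listed in arrival order;
# the ATC alert is urgent (priority 1), routine data frames are priority 2.
frames = ["data1", "data2", "atc_alert", "data3"]
prios = {"data1": 2, "data2": 2, "atc_alert": 1, "data3": 2}

fifo = list(frames)  # plain access: served in arrival order
prioritized = sorted(frames, key=lambda f: (prios[f], frames.index(f)))

print(completion_slot(fifo, "atc_alert"))         # slot 3
print(completion_slot(prioritized, "atc_alert"))  # slot 1
```

The total work on the channel is unchanged; prioritization only reorders service, which is why it cuts latency for urgent CPDLC traffic without costing aggregate throughput in this idealized view.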
DOSE: an R/Bioconductor package for disease ontology semantic and enrichment analysis.
Yu, Guangchuang; Wang, Li-Gen; Yan, Guang-Rong; He, Qing-Yu
2015-02-15
Disease ontology (DO) annotates human genes in the context of disease. DO provides important annotation for translating molecular findings from high-throughput data to clinical relevance. DOSE is an R package providing semantic similarity computations among DO terms and genes, which allows biologists to explore the similarities of diseases and of gene functions from a disease perspective. Enrichment analyses, including the hypergeometric model and gene set enrichment analysis, are also implemented to support discovering disease associations in high-throughput biological data. This allows biologists to verify disease relevance in a biological experiment and identify unexpected disease associations. Comparison among gene clusters is also supported. DOSE is released under the Artistic-2.0 License. The source code and documents are freely available through Bioconductor (http://www.bioconductor.org/packages/release/bioc/html/DOSE.html). Supplementary data are available at Bioinformatics online. gcyu@connect.hku.hk or tqyhe@jnu.edu.cn. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
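The hypergeometric model named above is the standard over-representation test; a minimal self-contained version is sketched below (the gene counts are invented for illustration, and DOSE itself of course operates on real DO annotations rather than raw counts).

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): N annotated genes in
    total, K of them in the disease term, n genes in the study set, and
    k of those hitting the term. The over-representation p-value."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 20,000 genes overall, 100 in the term, 50-gene hit list with 5 in the term:
# far above the ~0.25 hits expected by chance, so the p-value is tiny.
p = hypergeom_enrichment_p(20000, 100, 50, 5)
print(f"{p:.2e}")
```

In practice such p-values are computed per term across the whole ontology and then corrected for multiple testing before any term is reported as enriched.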
Library construction for next-generation sequencing: Overviews and challenges
Head, Steven R.; Komori, H. Kiyomi; LaMere, Sarah A.; Whisenant, Thomas; Van Nieuwerburgh, Filip; Salomon, Daniel R.; Ordoukhanian, Phillip
2014-01-01
High-throughput sequencing, also known as next-generation sequencing (NGS), has revolutionized genomic research. In recent years, NGS technology has steadily improved, with costs dropping and the number and range of sequencing applications increasing exponentially. Here, we examine the critical role of sequencing library quality and consider important challenges when preparing NGS libraries from DNA and RNA sources. Factors such as the quantity and physical characteristics of the RNA or DNA source material as well as the desired application (e.g., genome sequencing, targeted sequencing, RNA-seq, ChIP-seq, RIP-seq, and methylation) are addressed in the context of preparing high quality sequencing libraries. In addition, the current methods for preparing NGS libraries from single cells are also discussed. PMID:24502796
Plasma Doping—Enabling Technology for High Dose Logic and Memory Applications
NASA Astrophysics Data System (ADS)
Miller, T.; Godet, L.; Papasouliotis, G. D.; Singh, V.
2008-11-01
As logic and memory device dimensions shrink with each generation, there are more high dose implants at lower energies. Examples include dual poly gate (also referred to as counter-doped poly), elevated source drain and contact plug implants. Plasma Doping technology throughput and dopant profile benefits at these ultra high dose and lower energy conditions have been well established [1,2,3]. For the first time a production-worthy plasma doping implanter, the VIISta PLAD tool, has been developed with unique architecture suited for precise and repeatable dopant placement. Critical elements of the architecture include pulsed DC wafer bias, closed-loop dosimetry and a uniform low energy, high density plasma source. In this paper key performance metrics such as dose uniformity, dose repeatability and dopant profile control will be presented that demonstrate the production-worthiness of the VIISta PLAD tool for several high dose applications.
[Current applications of high-throughput DNA sequencing technology in antibody drug research].
Yu, Xin; Liu, Qi-Gang; Wang, Ming-Rong
2012-03-01
Since the publication in 2005 of a high-throughput DNA sequencing technology based on PCR carried out in oil emulsions, high-throughput DNA sequencing platforms have evolved into a robust technology for sequencing genomes and diverse DNA libraries. Antibody libraries with vast numbers of members currently serve as a foundation for discovering novel antibody drugs, and high-throughput DNA sequencing technology makes it possible to rapidly identify functional antibody variants with desired properties. Herein we present a review of current applications of high-throughput DNA sequencing technology in the analysis of antibody library diversity, sequencing of CDR3 regions, identification of potent antibodies based on sequence frequency, discovery of functional genes, and combination with various display technologies, so as to provide an alternative approach to the discovery and development of antibody drugs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monti, Henri; Butt, Ali R; Vazhkudai, Sudharshan S
2010-04-01
Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) it onto the supercomputer's specialized high-speed storage, scratch space, for sustained high I/O throughput. The current practice of conservatively staging data as early as possible leaves the data vulnerable to storage failures, which may entail re-staging and consequently reduced job throughput. To address this, we present a timely staging framework that uses a combination of job startup time predictions, user-specified intermediate nodes, and decentralized data delivery to make input data staging coincide with job start-up. By delaying staging until it is necessary, the exposure to failures and their effects can be reduced. Evaluation using both PlanetLab and simulations based on three years of Jaguar (No. 1 in Top500) job logs shows as much as an 85.9% reduction in staging times compared to direct transfers, a 75.2% reduction in wait time on scratch, and a 2.4% reduction in usage/hour.
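The core scheduling idea, delaying staging until just before the predicted job start, reduces to a simple calculation. The sketch below uses naming of my own choosing; the paper's actual predictor and delivery mechanism are considerably more involved.

```python
def staging_start_time(predicted_job_start, transfer_time_estimate, safety_margin):
    """Latest safe moment (in the same time units as the inputs) to begin
    staging input data so that it arrives just before the job starts,
    shrinking the window in which staged data sits on scratch storage
    exposed to failures."""
    return predicted_job_start - transfer_time_estimate - safety_margin
```

For example, a job predicted to start at t=1000 with a 200-unit transfer estimate and a 50-unit margin would begin staging at t=750 rather than immediately at submission, trading early exposure for a bounded risk of late arrival.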
High-throughput continuous hydrothermal synthesis of an entire nanoceramic phase diagram.
Weng, Xiaole; Cockcroft, Jeremy K; Hyett, Geoffrey; Vickers, Martin; Boldrin, Paul; Tang, Chiu C; Thompson, Stephen P; Parker, Julia E; Knowles, Jonathan C; Rehman, Ihtesham; Parkin, Ivan; Evans, Julian R G; Darr, Jawwad A
2009-01-01
A novel High-Throughput Continuous Hydrothermal (HiTCH) flow synthesis reactor was used to make directly and rapidly a 66-sample nanoparticle library (entire phase diagram) of nanocrystalline Ce(x)Zr(y)Y(z)O(2-delta) in less than 12 h. High resolution PXRD data were obtained for the entire heat-treated library (at 1000 degrees C/1 h) in less than a day using the new robotic beamline I11, located at Diamond Light Source (DLS). This allowed Rietveld-quality powder X-ray diffraction (PXRD) data collection of the entire 66-sample library in <1 day. Consequently, the authors rapidly mapped out phase behavior and sintering behaviors for the entire library. Out of the entire 66-sample heat-treated library, the PXRD data suggests that 43 possess the fluorite structure, of which 30 (out of 36) are ternary compositions. The speed, quantity and quality of data obtained by our new approach, offers an exciting new development which will allow structure-property relationships to be accessed for nanoceramics in much shorter time periods.
AIR TOXICS ASSESSMENT REFINEMENT IN RAPCA'S JURISDICTION - DAYTON, OH AREA
RAPCA has received two grants to conduct this project. As part of the original project, RAPCA has improved and expanded their point source inventory by converting the following area sources to point sources: dry cleaners, gasoline throughput processes and halogenated solvent clea...
Pan, Jui-Wen; Tu, Sheng-Han
2012-05-20
A cost-effective, high-throughput, and high-yield method for the efficiency enhancement of an optical mouse lighting module is proposed. We integrated imprinting technology and free-form surface design to obtain a lighting module with high illumination efficiency and uniform intensity distribution. The imprinting technique can increase the light extraction efficiency and modulate the intensity distribution of light-emitting diodes. A modulated light source was utilized to add a compact free-form surface element to create a lighting module with 95% uniformity and 80% optical efficiency.
Lessons from high-throughput protein crystallization screening: 10 years of practical experience
JR, Luft; EH, Snell; GT, DeTitta
2011-01-01
Introduction X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and in terms of pharmaceutical design is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize the success while minimizing the effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-, multiuser-laboratory or industrial scale. Expert Opinion High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073
High-throughput screening based on label-free detection of small molecule microarrays
NASA Astrophysics Data System (ADS)
Zhu, Chenggang; Fei, Yiyan; Zhu, Xiangdong
2017-02-01
Based on small-molecule microarrays (SMMs) and an oblique-incidence reflectivity difference (OI-RD) scanner, we have developed a novel high-throughput preliminary drug screening platform based on label-free monitoring of direct interactions between target proteins and immobilized small molecules. The screening platform is especially attractive for screening compounds against targets of unknown function and/or structure that are not compatible with functional assay development. In this platform, the OI-RD scanner serves as a label-free detection instrument able to monitor about 15,000 biomolecular interactions in a single experiment without the need to label any biomolecule. In addition, SMMs serve as a novel format for high-throughput screening through the immobilization of tens of thousands of different compounds on a single phenyl-isocyanate-functionalized glass slide. Using this platform, we sequentially screened five target proteins (purified target proteins or cell lysate containing the target protein) in high-throughput, label-free mode. We found hits for each target protein, and the inhibition effects of some hits were confirmed by follow-up functional assays. Compared to traditional high-throughput screening assays, this platform has many advantages, including minimal sample consumption, minimal distortion of interactions through label-free detection, and multi-target screening analysis, and it has great potential to serve as a complementary screening platform in the field of drug discovery.
Code of Federal Regulations, 2012 CFR
2012-07-01
... monthly throughput of 100,000 gallons of gasoline or more. 63.11118 Section 63.11118 Protection of... Hazardous Air Pollutants for Source Category: Gasoline Dispensing Facilities Emission Limitations and... gasoline or more. (a) You must comply with the requirements in §§ 63.11116(a) and 63.11117(b). (b) Except...
DOE Office of Scientific and Technical Information (OSTI.GOV)
The plpdfa software is a product of an LDRD project at LLNL entitled "Adaptive Sampling for Very High Throughput Data Streams" (tracking number 11-ERD-035). This software was developed by a graduate student summer intern, Chris Challis, who worked under project PI Dan Merl during the summer of 2011. The source code implements a statistical analysis technique for the clustering and classification of text-valued data. The method had been previously published by the PI in the open literature.
Quaternary pulse position modulation electronics for free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, J. M.; Kerslake, S. D.; Nagy, L. A.; Shalkhauser, M. J.; Soni, N. J.; Cauley, M. A.; Mohamed, J. H.; Stover, J. B.; Romanofsky, R. R.; Lizanich, P. J.
1991-01-01
The development of a high data-rate communications electronic subsystem for future application in free-space, direct-detection laser communications is described. The dual channel subsystem uses quaternary pulse position modulation (QPPM) and operates at a throughput of 650 megabits per second. Transmitting functions described include source data multiplexing, channel data multiplexing, and QPPM symbol encoding. Implementation of a prototype version in discrete gallium arsenide logic, radiofrequency components, and microstrip circuitry is presented.
Novel Inhibitors of Protein-Protein Interaction for Prostate Cancer Therapy
2013-04-01
cells. Prostate. 2008;68(9):924-34. 4. Basu HS, Thompson TA, Church DR, Clower CC, Mehraein-Ghomi F, Amlong CA, Martin CT, Woster PM, Lindstrom MJ... data on drug-like small molecule inhibitors of the AR-JunD interaction that initiates this ROS-generating pathway in PCa. A novel high throughput assay...reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing this collection of
Novel Inhibitors of Protein-Protein Interaction for Prostate Cancer Therapy
2012-04-01
34. 4. Basu HS, Thompson TA, Church DR, Clower CC, Mehraein-Ghomi F, Amlong CA, Martin CT, Woster PM, Lindstrom MJ, Wilding G. A small molecule...present data on drug-like small molecule inhibitors of the AR-JunD interaction that initiates this ROS-generating pathway in PCa. A novel high throughput...average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data
Evaluating Between-Pathway Models with Expression Data
Leiserson, M.D.M.; Cowen, L.J.; Slonim, D.K.
2010-01-01
Abstract Between-pathway models (BPMs) are network motifs consisting of pairs of putative redundant pathways. In this article, we show how adding another source of high-throughput data—microarray gene expression data from knockout experiments—allows us to identify a compensatory functional relationship between genes from the two BPM pathways. We evaluate the quality of the BPMs from four different studies, and we describe how our methods might be extended to refine pathways. PMID:20377458
High-throughput analysis of yeast replicative aging using a microfluidic system
Jo, Myeong Chan; Liu, Wei; Gu, Liang; Dang, Weiwei; Qin, Lidong
2015-01-01
Saccharomyces cerevisiae has been an important model for studying the molecular mechanisms of aging in eukaryotic cells. However, the laborious and low-throughput methods of current yeast replicative lifespan assays limit their usefulness as a broad genetic screening platform for research on aging. We address this limitation by developing an efficient, high-throughput microfluidic single-cell analysis chip in combination with high-resolution time-lapse microscopy. This innovative design enables, to our knowledge for the first time, the determination of the yeast replicative lifespan in a high-throughput manner. Morphological and phenotypical changes during aging can also be monitored automatically with a much higher throughput than previous microfluidic designs. We demonstrate highly efficient trapping and retention of mother cells, determination of the replicative lifespan, and tracking of yeast cells throughout their entire lifespan. Using the high-resolution and large-scale data generated from the high-throughput yeast aging analysis (HYAA) chips, we investigated particular longevity-related changes in cell morphology and characteristics, including critical cell size, terminal morphology, and protein subcellular localization. In addition, because of the significantly improved retention rate of yeast mother cell, the HYAA-Chip was capable of demonstrating replicative lifespan extension by calorie restriction. PMID:26170317
De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A
2002-06-01
Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources, and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequencies of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.
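Obtaining population frequencies from a biallelic genotyping assay such as TaqMan reduces to simple allele counting. The following is a minimal illustrative sketch under my own naming assumptions, not code from the article.

```python
def allele_freq(n_AA, n_Aa, n_aa):
    """Allele frequencies from biallelic genotype counts, as produced by
    a TaqMan-style SNP assay. Each individual contributes two alleles,
    so allele A appears twice in each AA homozygote and once in each
    Aa heterozygote."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # frequency of allele A
    return p, 1 - p
```

For instance, genotype counts of 36 AA, 48 Aa and 16 aa in 100 individuals give an A-allele frequency of 0.6, and the observed heterozygote fraction (0.48) matches the Hardy-Weinberg expectation 2pq.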
Velez‐Suberbie, M. Lourdes; Betts, John P. J.; Walker, Kelly L.; Robinson, Colin; Zoro, Barney
2017-01-01
High throughput automated fermentation systems have become a useful tool in early bioprocess development. In this study, we investigated a 24 x 15 mL single use microbioreactor system, ambr 15f, designed for microbial culture. We compared the fed‐batch growth and production capabilities of this system for two Escherichia coli strains, BL21 (DE3) and MC4100, and two industrially relevant molecules, hGH and scFv. In addition, different carbon sources were tested using bolus, linear or exponential feeding strategies, showing the capacity of the ambr 15f system to handle automated feeding. We used power per unit volume (P/V) as a scale criterion to compare the ambr 15f with 1 L stirred bioreactors which were previously scaled‐up to 20 L with a different biological system, thus showing a potential 1,300 fold scale comparability in terms of both growth and product yield. By exposing the cells grown in the ambr 15f system to a level of shear expected in an industrial centrifuge, we determined that the cells are as robust as those from a bench scale bioreactor. These results provide evidence that the ambr 15f system is an efficient high throughput microbial system that can be used for strain and molecule selection as well as rapid scale‐up. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 34:58–68, 2018 PMID:28748655
Chipster: user-friendly analysis software for microarray and other high-throughput data.
Kallio, M Aleksi; Tuimala, Jarno T; Hupponen, Taavi; Klemelä, Petri; Gentile, Massimiliano; Scheinin, Ilari; Koski, Mikko; Käki, Janne; Korpelainen, Eija I
2011-10-14
The growth of high-throughput technologies such as microarrays and next generation sequencing has been accompanied by active research in data analysis methodology, producing new analysis methods at a rapid pace. While most of the newly developed methods are freely available, their use requires substantial computational skills. In order to enable non-programming biologists to benefit from the method development in a timely manner, we have created the Chipster software. Chipster (http://chipster.csc.fi/) brings a powerful collection of data analysis methods within the reach of bioscientists via its intuitive graphical user interface. Users can analyze and integrate different data types such as gene expression, miRNA and aCGH. The analysis functionality is complemented with rich interactive visualizations, allowing users to select datapoints and create new gene lists based on these selections. Importantly, users can save the performed analysis steps as reusable, automatic workflows, which can also be shared with other users. Being a versatile and easily extendable platform, Chipster can be used for microarray, proteomics and sequencing data. In this article we describe its comprehensive collection of analysis and visualization tools for microarray data using three case studies. Chipster is a user-friendly analysis software for high-throughput data. Its intuitive graphical user interface enables biologists to access a powerful collection of data analysis and integration tools, and to visualize data interactively. Users can collaborate by sharing analysis sessions and workflows. Chipster is open source, and the server installation package is freely available.
Shen, Shaofei; Ma, Chao; Zhao, Lei; Wang, Yaolei; Wang, Jian-Chun; Xu, Juan; Li, Tianbao; Pang, Long; Wang, Jinyi
2014-07-21
The presence and quantity of rare cells in the bloodstream of cancer patients provide a potentially accessible source for the early detection of invasive cancer and for monitoring the treatment of advanced diseases. The separation of rare cells from peripheral blood, as a "virtual and real-time liquid biopsy", is expected to replace conventional tissue biopsies of metastatic tumors for therapy guidance. However, technical obstacles, similar to looking for a needle in a haystack, have hindered the broad clinical utility of this method. In this study, we developed a multistage microfluidic device for continuous label-free separation and enrichment of rare cells from blood samples based on cell size and deformability. We successfully separated tumor cells (MCF-7 and HeLa cells) and leukemic (K562) cells spiked in diluted whole blood using a unique complementary combination of inertial microfluidics and steric hindrance in a microfluidic system. The processing parameters of the inertial focusing and steric hindrance regions were optimized to achieve high-throughput and high-efficiency separation, significant advantages compared with existing rare cell isolation technologies. The results from experiments with rare cells spiked in 1% hematocrit blood indicated >90% cell recovery at a throughput of 2.24 × 10^7 cells min^-1. The enrichment of rare cells was >2.02 × 10^5-fold. Thus, this microfluidic system driven by purely hydrodynamic forces has practical potential to be applied either alone or as a sample preparation platform for fundamental studies and clinical applications.
Representing high throughput expression profiles via perturbation barcodes reveals compound targets.
Filzen, Tracey M; Kutchukian, Peter S; Hermes, Jeffrey D; Li, Jing; Tudor, Matthew
2017-02-01
High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 representative genes across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, and predicts a compound's high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data.
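The paper derives its barcode with deep metric learning; as a much cruder stand-in that still makes the idea concrete, one can threshold landmark-gene z-scores into a ternary code and compare perturbations by positional agreement. Everything below is a hypothetical sketch of my own, not the authors' method.

```python
def perturbation_barcode(z_scores, threshold=2.0):
    """Reduce a vector of landmark-gene z-scores to a ternary barcode:
    '+' for strong up-regulation, '-' for strong down-regulation,
    '0' otherwise. A crude stand-in for a learned embedding."""
    return "".join(
        "+" if z >= threshold else "-" if z <= -threshold else "0"
        for z in z_scores
    )

def barcode_similarity(a, b):
    """Fraction of positions at which two equal-length barcodes agree;
    a simple guilt-by-association similarity."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)
```

Compounds whose barcodes agree at many positions would be hypothesized to share targets or pathway activity, which is the guilt-by-association reasoning the abstract applies to the MAPK pathway.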
Erickson, Heidi S
2012-09-28
The future of personalized medicine depends on the ability to efficiently and rapidly elucidate a reliable set of disease-specific molecular biomarkers. High-throughput molecular biomarker analysis methods have been developed to identify disease risk, diagnostic, prognostic, and therapeutic targets in human clinical samples. Currently, high throughput screening allows us to analyze thousands of markers from one sample or one marker from thousands of samples and will eventually allow us to analyze thousands of markers from thousands of samples. Unfortunately, the inherent nature of current high throughput methodologies, clinical specimens, and cost of analysis is often prohibitive for extensive high throughput biomarker analysis. This review summarizes the current state of high throughput biomarker screening of clinical specimens applicable to genetic epidemiology and longitudinal population-based studies with a focus on considerations related to biospecimens, laboratory techniques, and sample pooling. Copyright © 2012 John Wiley & Sons, Ltd.
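On the sample-pooling consideration mentioned above: classic two-stage (Dorfman) pooling illustrates the cost arithmetic. The sketch below, with names of my own choosing, computes the expected number of assays and the cost-minimizing pool size for a given marker prevalence.

```python
def dorfman_expected_tests(n, pool_size, prevalence):
    """Expected number of assays under two-stage Dorfman pooling:
    one test per pool, plus an individual retest of every member of
    each pool that contains at least one positive sample."""
    n_pools = n / pool_size
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return n_pools * (1 + pool_size * p_pool_positive)

def best_pool_size(n, prevalence, max_k=50):
    """Pool size in [2, max_k] minimizing the expected assay count."""
    return min(range(2, max_k + 1),
               key=lambda k: dorfman_expected_tests(n, k, prevalence))
```

At a 1% prevalence, pools of about 11 samples minimize expected assays, cutting the assay count to roughly a fifth of individual testing, which is the kind of saving that makes pooling attractive for large population studies.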
An open-source computational and data resource to analyze digital maps of immunopeptidomes
Caron, Etienne; Espona, Lucia; Kowalewski, Daniel J; Schuster, Heiko; Ternette, Nicola; Alpízar, Adán; Schittenhelm, Ralf B; Ramarathinam, Sri H; Lindestam Arlehamn, Cecilia S; Chiek Koh, Ching; Gillet, Ludovic C; Rabsteyn, Armin; Navarro, Pedro; Kim, Sangtae; Lam, Henry; Sturm, Theo; Marcilla, Miguel; Sette, Alessandro; Campbell, David S; Deutsch, Eric W; Moritz, Robert L; Purcell, Anthony W; Rammensee, Hans-Georg; Stevanovic, Stefan; Aebersold, Ruedi
2015-01-01
We present a novel mass spectrometry-based high-throughput workflow and an open-source computational and data resource to reproducibly identify and quantify HLA-associated peptides. Collectively, the resources support the generation of HLA allele-specific peptide assay libraries consisting of consensus fragment ion spectra, and the analysis of quantitative digital maps of HLA peptidomes generated from a range of biological sources by SWATH mass spectrometry (MS). This study represents the first community-based effort to develop a robust platform for the reproducible and quantitative measurement of the entire repertoire of peptides presented by HLA molecules, an essential step towards the design of efficient immunotherapies. DOI: http://dx.doi.org/10.7554/eLife.07661.001 PMID:26154972
Mix-and-diffuse serial synchrotron crystallography
Beyerlein, Kenneth R.; Dierksmeyer, Dennis; Mariani, Valerio; ...
2017-10-09
Unravelling the interaction of biological macromolecules with ligands and substrates at high spatial and temporal resolution remains a major challenge in structural biology. The development of serial crystallography methods at X-ray free-electron lasers and subsequently at synchrotron light sources allows new approaches to tackle this challenge. Here, a new polyimide tape drive designed for mix-and-diffuse serial crystallography experiments is reported. The structure of lysozyme bound by the competitive inhibitor chitotriose was determined using this device in combination with microfluidic mixers. The electron densities obtained from mixing times of 2 and 50 s show clear binding of chitotriose to the enzyme at a high level of detail. The success of this approach shows the potential for high-throughput drug screening and even structural enzymology on short timescales at bright synchrotron light sources.
Hill, K W; Bitter, M; Delgado-Aparacio, L; Pablant, N A; Beiersdorfer, P; Schneider, M; Widmann, K; Sanchez del Rio, M; Zhang, L
2012-10-01
High resolution (λ∕Δλ ∼ 10 000) 1D imaging x-ray spectroscopy using a spherically bent crystal and a 2D hybrid pixel array detector is used worldwide for Doppler measurements of ion-temperature and plasma flow-velocity profiles in magnetic confinement fusion plasmas. Meter-sized plasmas are diagnosed with cm spatial resolution and 10 ms time resolution. This concept can also be used as a diagnostic of small sources, such as inertial confinement fusion plasmas and targets on x-ray light source beam lines, with spatial resolution of micrometers, as demonstrated by laboratory experiments using a 250-μm 55Fe source and by ray-tracing calculations. Throughput calculations agree with measurements and predict detector counts in the range 10^-8 to 10^-6 times source x-rays, depending on crystal reflectivity and spectrometer geometry. Results of the lab demonstrations, application of the technique to the National Ignition Facility (NIF), and predictions of performance on NIF will be presented.
CellAnimation: an open source MATLAB framework for microscopy assays.
Georgescu, Walter; Wikswo, John P; Quaranta, Vito
2012-01-01
Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that are best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip.
Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. walter.georgescu@vanderbilt.edu Supplementary data available at Bioinformatics online.
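The module-pipeline idea described above can be sketched as chained processing functions. This is a hypothetical illustration, not CellAnimation's actual (MATLAB-based) API; the module names and the pixel-count metric are invented for the example.

```python
from typing import Callable, Dict, List

# Each module takes a frame dictionary, adds its result, and passes it on.
Module = Callable[[Dict], Dict]

def threshold_module(frame: Dict) -> Dict:
    """Mark pixels above a fixed intensity as foreground."""
    frame["mask"] = [px > 128 for px in frame["pixels"]]
    return frame

def count_module(frame: Dict) -> Dict:
    """Record a simple metric: the number of foreground pixels."""
    frame["cell_pixels"] = sum(frame["mask"])
    return frame

def run_pipeline(frame: Dict, modules: List[Module]) -> Dict:
    """Connect simple modules into an assay pipeline."""
    for module in modules:
        frame = module(frame)
    return frame

result = run_pipeline({"pixels": [10, 200, 150, 90]},
                      [threshold_module, count_module])
print(result["cell_pixels"])  # 2
```

Swapping or reordering modules changes the assay without rewriting the application, which is the point of the modular design the abstract describes.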
Fixed target matrix for femtosecond time-resolved and in situ serial micro-crystallography
Mueller, C.; Marx, A.; Epp, S. W.; Zhong, Y.; Kuo, A.; Balo, A. R.; Soman, J.; Schotte, F.; Lemke, H. T.; Owen, R. L.; Pai, E. F.; Pearson, A. R.; Olson, J. S.; Anfinrud, P. A.; Ernst, O. P.; Dwayne Miller, R. J.
2015-01-01
We present a crystallography chip enabling in situ room temperature crystallography at microfocus synchrotron beamlines and X-ray free-electron laser (X-FEL) sources. Compared to other in situ approaches, we observe extremely low background and high diffraction data quality. The chip design is robust and allows fast and efficient loading of thousands of small crystals. The ability to load a large number of protein crystals, at room temperature and with high efficiency, into prescribed positions enables high throughput automated serial crystallography with microfocus synchrotron beamlines. In addition, we demonstrate the application of this chip for femtosecond time-resolved serial crystallography at the Linac Coherent Light Source (LCLS, Menlo Park, California, USA). The chip concept enables multiple images to be acquired from each crystal, allowing differential detection of changes in diffraction intensities in order to obtain high signal-to-noise and fully exploit the time resolution capabilities of XFELs. PMID:26798825
Prussin, Aaron J; Zigler, David F; Jain, Avijita; Brown, Jared R; Winkel, Brenda S J; Brewer, Karen J
2008-04-01
Methods for the study of DNA photocleavage are illustrated using a mixed-metal supramolecular complex, [{(bpy)2Ru(dpp)}2RhCl2]Cl5. The methods use supercoiled pUC18 plasmid as a DNA probe and either filtered light from a xenon arc lamp source or monochromatic light from a newly designed, high-intensity light-emitting diode (LED) array. Detailed methods for performing the photochemical experiments and analysis of the DNA photoproduct are delineated. Detailed methods are also given for building an LED array to be used for DNA photolysis experiments. The Xe arc source has a broad spectral range and high light flux. The LEDs have a high-intensity, nearly monochromatic output. Arrays of LEDs have the advantage of allowing tunable, accurate output to multiple samples for high-throughput photochemistry experiments at relatively low cost.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR, Protection of Environment: Table 9 to Subpart EEEE of Part 63, Continuous Compliance With Operating Limits for High Throughput Transfer Racks, as stated in §§ 63.2378(a) and (b).
Tracing sources of nitrate in snowmelt runoff using a high-resolution isotopic technique
NASA Astrophysics Data System (ADS)
Ohte, N.; Sebestyen, S. D.; Shanley, J. B.; Doctor, D. H.; Kendall, C.; Wankel, S. D.; Boyer, E. W.
2004-11-01
The denitrifier method to determine the dual isotopic composition (δ15N and δ18O) of nitrate is well suited for studies of nitrogen contributions to streams during runoff events. This method requires only 70 nmol of NO3- and enables high throughput of samples. We studied nitrate sources to a headwater stream during snowmelt by generating a high-temporal resolution dataset at the Sleepers River Research Watershed in Vermont, USA. In the earliest phase of runoff, stream NO3- concentrations were highest and stream discharge, NO3- concentrations, and δ18O of NO3- generally tracked one another during diurnal melting. The isotopic composition of stream NO3- varied in-between atmospheric and groundwater NO3- end members indicating a direct contribution of atmospherically-derived NO3- from the snow pack to the stream. During the middle to late phases of snowmelt, the source shifted toward soil NO3- entering the stream via shallow subsurface flow paths.
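The end-member reasoning above is a standard two-component isotope mixing calculation: the fraction of stream nitrate from the atmospheric source follows from where the stream δ18O falls between the two end members. A minimal sketch; the δ18O values below are purely illustrative, not the study's measurements.

```python
def atmospheric_fraction(d18o_stream: float,
                         d18o_atm: float,
                         d18o_ground: float) -> float:
    """Two end-member mixing: fraction of stream NO3- from the
    atmospheric end member, based on delta-18O of nitrate."""
    return (d18o_stream - d18o_ground) / (d18o_atm - d18o_ground)

# Illustrative per-mil values: atmospheric nitrate is strongly
# 18O-enriched relative to groundwater/soil nitrate.
f = atmospheric_fraction(d18o_stream=20.0, d18o_atm=70.0, d18o_ground=0.0)
print(round(f, 3))  # 0.286
```

The same arithmetic applies at each time step of the high-resolution record, which is how the diurnal shifts between snowpack and soil sources can be quantified.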
Xu, Yi-Fan; Lu, Wenyun; Rabinowitz, Joshua D.
2015-01-15
Liquid chromatography–mass spectrometry (LC-MS) technology allows for rapid quantitation of cellular metabolites, with metabolites identified by mass spectrometry and chromatographic retention time. Recently, with the development of rapid-scanning, high-resolution, high-accuracy mass spectrometers and the desire for high-throughput screening, minimal or no chromatographic separation has become increasingly popular. When analyzing complex cellular extracts, however, the lack of chromatographic separation can result in misannotation of structurally related metabolites. Here, we show that, even using electrospray ionization, a soft ionization method, in-source fragmentation generates unwanted byproducts of identical mass to common metabolites. For example, nucleotide triphosphates generate nucleotide diphosphates, and hexose phosphates generate triose phosphates. We also evaluated yeast intracellular metabolite extracts and found more than 20 cases of in-source fragments that mimic common metabolites. Accordingly, chromatographic separation is required for accurate quantitation of many common cellular metabolites.
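The nucleotide example can be checked arithmetically: in-source loss of HPO3 from ATP leaves a fragment with exactly ADP's elemental formula, so the two are indistinguishable by mass alone. The monoisotopic masses below are standard elemental values, not data from the paper.

```python
# Standard monoisotopic masses (Da), rounded to 5 decimals.
MASS = {"C": 12.0, "H": 1.00783, "N": 14.00307, "O": 15.99491, "P": 30.97376}

def mono_mass(formula: dict) -> float:
    """Monoisotopic mass of a molecule from its elemental composition."""
    return sum(MASS[el] * n for el, n in formula.items())

atp  = {"C": 10, "H": 16, "N": 5, "O": 13, "P": 3}
adp  = {"C": 10, "H": 15, "N": 5, "O": 10, "P": 2}
hpo3 = {"H": 1, "P": 1, "O": 3}

# ATP minus HPO3 is element-for-element identical to ADP, so the
# in-source fragment and genuine ADP have the same exact mass.
fragment_mass = mono_mass(atp) - mono_mass(hpo3)
print(abs(fragment_mass - mono_mass(adp)) < 1e-9)  # True
```

Since no mass resolution can separate two ions of identical formula, only chromatographic retention time distinguishes the fragment from the real metabolite, which is the paper's conclusion.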
Accelerating the design of solar thermal fuel materials through high throughput simulations.
Liu, Yun; Grossman, Jeffrey C
2014-12-10
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
Tolu, Julie; Gerber, Lorenz; Boily, Jean-François; Bindler, Richard
2015-06-23
Molecular-level chemical information about organic matter (OM) in sediments helps to establish the sources of OM and the prevalent degradation/diagenetic processes, both essential for understanding the cycling of carbon (C) and of the elements associated with OM (toxic trace metals and nutrients) in lake ecosystems. Ideally, analytical methods for characterizing OM should allow high sample throughput, consume small amounts of sample and yield relevant chemical information, which are essential for multidisciplinary, high-temporal resolution and/or large spatial scale investigations. We have developed a high-throughput analytical method based on pyrolysis-gas chromatography/mass spectrometry and automated data processing to characterize sedimentary OM in sediments. Our method consumes 200 μg of freeze-dried and ground sediment sample. Pyrolysis was performed at 450°C, which was found to avoid degradation of specific biomarkers (e.g., lignin compounds, fresh carbohydrates/cellulose) compared to 650°C, which is in the range of temperatures commonly applied for environmental samples. The optimization was conducted using the top ten sediment samples of an annually resolved sediment record (containing 16-18% and 1.3-1.9% of total carbon and nitrogen, respectively). Several hundred pyrolytic compound peaks were detected of which over 200 were identified, which represent different classes of organic compounds (i.e., n-alkanes, n-alkenes, 2-ketones, carboxylic acids, carbohydrates, proteins, other N compounds, (methoxy)phenols, (poly)aromatics, chlorophyll and steroids/hopanoids). Technical reproducibility measured as relative standard deviation of the identified peaks in triplicate analyses was 5.5±4.3%, with 90% of the RSD values within 10% and 98% within 15%. 
Finally, a multivariate calibration model was calculated between the pyrolytic degradation compounds and the sediment depth (i.e., sediment age), which is a function of degradation processes and changes in OM source type. This allowed validation of the Py-GC/MS dataset against fundamental processes involved in OM cycling in aquatic ecosystems. Copyright © 2015 Elsevier B.V. All rights reserved.
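The reproducibility figure quoted above is a plain relative standard deviation over triplicate peak areas, which can be computed as follows (the peak areas here are illustrative, not the study's data):

```python
import statistics

def rsd_percent(values) -> float:
    """Relative standard deviation (%) of replicate peak areas:
    sample standard deviation divided by the mean, times 100."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Illustrative triplicate peak areas for one pyrolytic compound.
print(round(rsd_percent([980.0, 1000.0, 1020.0]), 2))  # 2.0
```

Applying this per identified peak and summarizing across the ~200 compounds yields the 5.5±4.3% figure reported in the abstract.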
High-content screening of small compounds on human embryonic stem cells.
Barbaric, Ivana; Gokhale, Paul J; Andrews, Peter W
2010-08-01
Human ES (embryonic stem) cells and iPS (induced pluripotent stem) cells have been heralded as a source of differentiated cells that could be used in the treatment of degenerative diseases, such as Parkinson's disease or diabetes. Despite the great potential for their use in regenerative therapy, the challenge remains to understand the basic biology of these remarkable cells, in order to differentiate them into any functional cell type. Given the scale of the task, high-throughput screening of agents and culture conditions offers one way to accelerate these studies. The screening of small-compound libraries is particularly amenable to such high-throughput methods. Coupled with high-content screening technology that enables simultaneous assessment of multiple cellular features in an automated and quantitative way, this approach is proving powerful in identifying both small molecules as tools for manipulating stem cell fates and novel mechanisms of differentiation not previously associated with stem cell biology. Such screens performed on human ES cells also demonstrate the usefulness of human ES/iPS cells as cellular models for pharmacological testing of drug efficacy and toxicity, possibly a more imminent use of these cells than in regenerative medicine.
Judson, Richard S.; Martin, Matthew T.; Egeghy, Peter; Gangwal, Sumit; Reif, David M.; Kothiya, Parth; Wolf, Maritja; Cathey, Tommy; Transue, Thomas; Smith, Doris; Vail, James; Frame, Alicia; Mosher, Shad; Cohen Hubal, Elaine A.; Richard, Ann M.
2012-01-01
Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built using open source tools and is freely available to download. This review describes the organization of the data repository and provides selected examples of use cases. PMID:22408426
Scafaro, Andrew P; Negrini, A Clarissa A; O'Leary, Brendan; Rashid, F Azzahra Ahmad; Hayes, Lucy; Fan, Yuzhen; Zhang, You; Chochois, Vincent; Badger, Murray R; Millar, A Harvey; Atkin, Owen K
2017-01-01
Mitochondrial respiration in the dark (R_dark) is a critical plant physiological process, and hence a reliable, efficient and high-throughput method of measuring variation in rates of R_dark is essential for agronomic and ecological studies. However, the methods currently used to measure R_dark in plant tissues are typically low throughput. We assessed a high-throughput automated fluorophore system for detecting multiple O2 consumption rates. The fluorophore technique was compared with O2 electrodes, infrared gas analysers (IRGA), and membrane inlet mass spectrometry to determine the accuracy and speed of detecting respiratory fluxes. The high-throughput fluorophore system provided stable measurements of R_dark in detached leaf and root tissues over many hours. High-throughput potential was evident in that the fluorophore system was 10- to 26-fold faster per sample measurement than other conventional methods. The versatility of the technique was evident in its enabling: (1) rapid screening of R_dark in 138 genotypes of wheat; and (2) quantification of rarely assessed whole-plant R_dark through dissection and simultaneous measurements of above- and below-ground organs. Variation in absolute R_dark was observed between techniques, likely due to variation in sample conditions (i.e. liquid vs. gas phase, open vs. closed systems), indicating that comparisons between studies using different measuring apparatus may not be feasible. However, the high-throughput protocol we present provided similar values of R_dark to the most commonly used IRGA instrument currently employed by plant scientists. Together with the greater than tenfold increase in sample processing speed, we conclude that the high-throughput protocol enables reliable, stable and reproducible measurements of R_dark on multiple samples simultaneously, irrespective of plant or tissue type.
Vidjil: A Web Platform for Analysis of High-Throughput Repertoire Sequencing.
Duez, Marc; Giraud, Mathieu; Herbert, Ryan; Rocher, Tatiana; Salson, Mikaël; Thonier, Florian
2016-01-01
The B and T lymphocytes are white blood cells playing a key role in adaptive immunity. A part of their DNA, called the V(D)J recombinations, is specific to each lymphocyte and enables recognition of specific antigens. Today, with new sequencing techniques, one can obtain billions of DNA sequences from these regions. With dedicated Repertoire Sequencing (RepSeq) methods, it is now possible to picture populations of lymphocytes and to monitor more accurately the immune response, as well as pathologies such as leukemia. Vidjil is an open-source platform for the interactive analysis of high-throughput sequencing data from lymphocyte recombinations. It contains an algorithm gathering reads into clonotypes according to their V(D)J junctions, a web application made of a sample, experiment and patient database, and a visualization for the analysis of clonotypes over time. Vidjil is implemented in C++, Python and Javascript and licensed under the GPLv3 open-source license. Source code, binaries and a public web server are available at http://www.vidjil.org and at http://bioinfo.lille.inria.fr/vidjil. Using the Vidjil web application consists of four steps: 1. uploading a raw sequence file (typically a FASTQ); 2. running RepSeq analysis software; 3. visualizing the results; 4. annotating the results and saving them for future use. For the end-user, the Vidjil web application needs no specific installation and requires just a connection and a modern web browser. Vidjil is used by labs in hematology and immunology for research and clinical applications.
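The clonotype-gathering step can be illustrated by grouping reads on their junction sequence. This toy uses exact matching of a short junction string; Vidjil's actual algorithm is more robust (it works on a window around the V(D)J junction of real reads), so this is a sketch of the idea, not its implementation.

```python
from collections import Counter
from typing import List

def gather_clonotypes(junctions: List[str]) -> Counter:
    """Group reads sharing the same junction sequence into clonotypes
    and count the reads supporting each one."""
    return Counter(junctions)

# Toy junction sequences extracted from four reads.
reads = ["TGTGCCAGC", "TGTGCCAGC", "TGTGCTTCC", "TGTGCCAGC"]
clones = gather_clonotypes(reads)
print(clones.most_common(1))  # [('TGTGCCAGC', 3)]
```

Tracking these per-clonotype counts across sequential samples is what allows the platform to follow a leukemic clone over time.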
Asati, Atul; Kachurina, Olga; Kachurin, Anatoly
2012-01-01
Considering the importance of ganglioside antibodies as biomarkers in various immune-mediated neuropathies and neurological disorders, we developed a high-throughput multiplexing tool for the assessment of ganglioside-specific antibodies based on the Bio-Plex/Luminex platform. In this report, we demonstrate that the ganglioside high-throughput multiplexing tool is robust, highly specific and demonstrates ∼100-fold higher concentration sensitivity for IgG detection than ELISA. In addition to the ganglioside-coated array, the high-throughput multiplexing tool contains beads coated with influenza hemagglutinins derived from the H1N1 A/Brisbane/59/07 and H1N1 A/California/07/09 strains. Influenza beads provided the added advantage of simultaneous detection of ganglioside- and influenza-specific antibodies, a capacity important for assaying both infectious antigen-specific and autoimmune antibodies following vaccination or disease. Taken together, these results support the potential adoption of the ganglioside high-throughput multiplexing tool for measuring ganglioside antibodies in various neuropathic and neurological disorders. PMID:22952605
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for 8K × 4K video at 132 fps.
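For context, the SAO filter that the architecture accelerates classifies each sample against its neighbors and adds a small per-category offset. A minimal software sketch of the standard HEVC edge-offset classification follows; the hardware design and bitrate estimation of the paper are not reproduced here, and the offset values are illustrative.

```python
from typing import List

def sao_edge_category(a: int, c: int, b: int) -> int:
    """HEVC SAO edge-offset category for sample c with neighbors a, b."""
    if c < a and c < b:
        return 1  # local minimum
    if (c < a and c == b) or (c == a and c < b):
        return 2  # concave corner
    if (c > a and c == b) or (c == a and c > b):
        return 3  # convex corner
    if c > a and c > b:
        return 4  # local maximum
    return 0      # monotonic region: no offset applied

def apply_sao(row: List[int], offsets: List[int]) -> List[int]:
    """Filter the interior samples of a 1-D row of reconstructed
    samples with per-category offsets (indexed 0..4)."""
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = row[i] + offsets[sao_edge_category(row[i - 1], row[i], row[i + 1])]
    return out

print(apply_sao([10, 8, 12, 12, 15], [0, 2, 1, -1, -2]))  # [10, 10, 11, 13, 15]
```

The classification is a few comparisons per sample, which is why it parallelizes well in VLSI; the expensive part addressed by the paper is the rate-distortion mode decision, not the filtering itself.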
NASA Astrophysics Data System (ADS)
Zhu, Feng; Akagi, Jin; Hall, Chris J.; Crosier, Kathryn E.; Crosier, Philip S.; Delaage, Pierre; Wlodkowic, Donald
2013-12-01
Drug discovery screenings performed on zebrafish embryos mirror, with a high level of accuracy, the tests usually performed on mammalian animal models, and the fish embryo toxicity assay (FET) is one of the most promising alternative approaches to acute ecotoxicity testing with adult fish. Notwithstanding this, conventional methods utilising 96-well microtiter plates and manual dispensing of fish embryos are very time-consuming. They rely on laborious and iterative manual pipetting, which is a main source of analytical errors and low throughput. In this work, we present the development of a miniaturised, high-throughput Lab-on-a-Chip (LOC) platform for automation of FET assays. The 3D high-density LOC array was fabricated in poly(methyl methacrylate) (PMMA), a transparent thermoplastic, using infrared laser micromachining, while the off-chip interfaces were fabricated using additive manufacturing processes (FDM and SLA). The system's design facilitates rapid loading and immobilisation of a large number of embryos in predefined clusters of traps during continuous microperfusion of drugs/toxins. It has been conceptually designed to interface seamlessly with both upright and inverted fluorescent imaging systems, and also to interface directly with conventional microtiter plate readers that accept 96-well plates. We also present proof-of-concept interfacing with a high-speed imaging cytometer, Plate RUNNER HD®, capable of multispectral image acquisition with resolution of up to 8192 x 8192 pixels and a depth of field of about 40 μm. Furthermore, we developed a miniaturised, self-contained analytical device interfaced with a miniaturised USB microscope. This system modification is capable of performing rapid imaging of multiple embryos at low resolution for drug toxicity analysis.
Gimonet, Johan; Portmann, Anne-Catherine; Fournier, Coralie; Baert, Leen
2018-06-16
This work shows that an incubation time reduced to 4-5 h to prepare a culture for DNA extraction, followed by automated DNA extraction, can shorten the hands-on time, reduce the turnaround time by 30%, and increase throughput while maintaining whole-genome sequencing (WGS) quality, as assessed by high-quality Single Nucleotide Polymorphism analysis. Copyright © 2018. Published by Elsevier B.V.
High throughput light absorber discovery, Part 1: An algorithm for automated Tauc analysis
Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.
2016-09-23
High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high-throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high-throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. The applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
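The core of a Tauc analysis can be sketched as a linear fit to the steep rising region of (αhν)^2 (for a direct-allowed transition), extrapolated to the energy axis to estimate the band gap. This omits the automated region selection and quality scoring that the published algorithm adds, and the data below are synthetic.

```python
from typing import List, Tuple

def linear_fit(xs: List[float], ys: List[float]) -> Tuple[float, float]:
    """Least-squares slope and intercept of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def tauc_band_gap(energies: List[float], tauc_values: List[float]) -> float:
    """Extrapolate the fitted Tauc line to (alpha*h*nu)^2 = 0;
    the x-axis intercept is the band gap estimate."""
    slope, intercept = linear_fit(energies, tauc_values)
    return -intercept / slope

# Synthetic linear-edge data with a band gap of 2.1 eV: y = 5 * (E - 2.1)
energies = [2.2, 2.3, 2.4, 2.5]
tauc = [5 * (e - 2.1) for e in energies]
print(round(tauc_band_gap(energies, tauc), 3))  # 2.1
```

Automating the choice of which energy window to fit, and deciding between direct and indirect exponents, is where the expert judgment the abstract mentions actually lies.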
2015-01-01
High-throughput production of nanoparticles (NPs) with controlled quality is critical for their clinical translation into effective nanomedicines for diagnostics and therapeutics. Here we report a simple and versatile coaxial turbulent jet mixer that can synthesize a variety of NPs at high throughput up to 3 kg/d, while maintaining the advantages of homogeneity, reproducibility, and tunability that are normally accessible only in specialized microscale mixing devices. The device fabrication does not require specialized machining and is easy to operate. As one example, we show reproducible, high-throughput formulation of siRNA-polyelectrolyte polyplex NPs that exhibit effective gene knockdown but show significant dependence on batch size when formulated using conventional methods. The coaxial turbulent jet mixer can accelerate the development of nanomedicines by providing a robust and versatile platform for preparation of NPs at throughputs suitable for in vivo studies, clinical trials, and industrial-scale production. PMID:24824296
NASA Astrophysics Data System (ADS)
Vikhlinin, Alexey
2018-01-01
Lynx is an observatory-class mission, featuring high throughput, exquisite angular resolution over a substantial field of view, and high spectral resolution for point and extended X-ray sources. The design requirements provide a tremendous leap in capabilities relative to missions such as Chandra and Athena. Lynx will observe the dawn of supermassive black holes through detection of very faint X-ray sources in the early universe and will reveal the "invisible drivers" of galaxy and structure formation through observations of hot, diffuse baryons in and around the galaxies. Lynx will enable breakthroughs across all of astrophysics, ranging from detailed understanding of stellar activity, including its effects on the habitability of associated planets, to population statistics of neutron stars and black holes in the Local Group galaxies, to the earliest groups and clusters of galaxies, and to cosmology.
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithms based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, but the result still falls far short of the network's theoretical maximum throughput. Moreover, existing multicast stream algorithms do not determine the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize optical multicast throughput via NC and to determine the multicast stream distribution through hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes: the binary chromosomes represent the optical multicast routing, while the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination receives the maximum number of decodable multicast streams. The simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
Independent component analysis decomposition of hospital emergency department throughput measures
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Henry
2016-05-01
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as median times patients spent before they were admitted as an inpatient, before they were sent home, before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
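The model underlying this analysis is that the observed throughput measures are linear mixtures of latent sources. ICA itself estimates the unmixing matrix from the data (e.g., via FastICA); the sketch below only illustrates the generative model by unmixing with a known 2x2 mixing matrix, and all names and numbers are invented for illustration.

```python
def unmix(observations, mixing):
    """Recover two latent sources from two observed measures, given a
    known 2x2 mixing matrix (rows mix sources into measures).

    Uses the closed-form 2x2 inverse: inv([[a, b], [c, d]]) =
    [[d, -b], [-c, a]] / det. Real ICA must estimate this matrix from
    the statistics of the data; here it is supplied directly.
    """
    (a, b), (c, d) = mixing
    det = a * d - b * c
    return [((d * x - b * y) / det, (a * y - c * x) / det)
            for x, y in observations]
```

Mixing two known source vectors and then unmixing them recovers the originals exactly, which is the sanity check one would run before attempting blind (ICA-style) recovery.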
Bayat, Pouriya; Rezai, Pouya
2018-05-21
One of the common operations in sample preparation is to separate specific particles (e.g. target cells, embryos or microparticles) from non-target substances (e.g. bacteria) in a fluid and to wash them into clean buffers for further processing like detection (called solution exchange in this paper). For instance, solution exchange is widely needed in preparing fluidic samples for biosensing at the point-of-care and point-of-use, but is still conducted via cumbersome and time-consuming off-chip analyte washing and purification techniques. Existing small-scale and handheld active and passive devices for washing particles are often limited to very low throughputs or require external sources of energy. Here, we integrated Dean flow recirculation of two fluids in curved microchannels with selective inertial focusing of target particles to develop a microfluidic centrifuge device that can isolate specific particles (as surrogates for target analytes) from bacteria and wash them into a clean buffer at high throughput and efficiency. We could process micron-size particles at a flow rate of 1 mL min^-1 and achieve throughputs higher than 10^4 particles per second. Our results reveal that the device is capable of singleplex solution exchange of 11 μm and 19 μm particles with efficiencies of 86 ± 2% and 93 ± 0.7%, respectively. A purity of 96 ± 2% was achieved in the duplex experiments where 11 μm particles were isolated from 4 μm particles. Application of our device in biological assays was shown by performing duplex experiments where 11 μm or 19 μm particles were isolated from an Escherichia coli bacterial suspension with purities of 91-98%. We envision that our technique will have applications in point-of-care devices for simultaneous purification and solution exchange of cells and embryos from smaller substances in high-volume suspensions at high throughput and efficiency.
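The figures of merit quoted above (throughput, recovery efficiency, purity) reduce to a few simple ratios. A small sketch under assumed function names (none are from the paper):

```python
def throughput_per_second(conc_per_mL, flow_mL_per_min):
    """Particles processed per second at a given input concentration;
    e.g., 6e5 particles/mL at 1 mL/min is 1e4 particles/s."""
    return conc_per_mL * flow_mL_per_min / 60.0

def recovery_efficiency(target_out, target_in):
    """Fraction of input target particles recovered in the clean buffer."""
    return target_out / target_in

def purity(target_out, nontarget_out):
    """Fraction of the collected output that is target particles."""
    return target_out / (target_out + nontarget_out)
```

These definitions make the reported numbers directly comparable across singleplex and duplex experiments.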
Li, Fumin; Wang, Jun; Jenkins, Rand
2016-05-01
There is an ever-increasing demand for high-throughput LC-MS/MS bioanalytical assays to support drug discovery and development. Matrix effects of sofosbuvir (protonated) and paclitaxel (sodiated) were thoroughly evaluated using high-throughput chromatography (defined as having a run time ≤1 min) under 14 elution conditions with extracts from protein precipitation, liquid-liquid extraction and solid-phase extraction. A slight separation, in terms of retention time, between underlying matrix components and sofosbuvir/paclitaxel can greatly alleviate matrix effects. High-throughput chromatography, with proper optimization, can provide rapid and effective chromatographic separation under 1 min to alleviate matrix effects and enhance assay ruggedness for regulated bioanalysis.
Olvera-Trejo, D; Velásquez-García, L F
2016-10-18
This study reports the first MEMS multiplexed coaxial electrospray sources in the literature. Coaxial electrospraying is a microencapsulation technology based on electrohydrodynamic jetting of two immiscible liquids, which allows precise control with low size variation of the geometry of the core-shell particles it generates, which is of great importance in numerous biomedical and engineering applications, e.g., drug delivery and self-healing composites. By implementing monolithic planar arrays of miniaturized coaxial electrospray emitters that work uniformly in parallel, the throughput of the compound microdroplet source is greatly increased, making the microencapsulation technology compatible with low-cost commercial applications. Miniaturized core-shell particle generators with up to 25 coaxial electrospray emitters (25 emitters cm^-2) were fabricated via stereolithography, which is an additive manufacturing process that can create complex microfluidic devices at a small fraction of the cost per device and fabrication time associated with silicon-based counterparts. The characterization of devices with the same emitter structure but different array sizes demonstrates uniform array operation. Moreover, the data demonstrate that the per-emitter current is approximately proportional to the square root of the flow rate of the driving liquid, and it is independent of the flow rate of the driven liquid, as predicted by the theory. The core/shell diameters and the size distribution of the generated compound microparticles can be modulated by controlling the flow rates fed to the emitters.
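The square-root scaling reported above is easy to state as a model. The sketch below assumes an empirical prefactor k and an even split of one feed flow across a uniform array; both are illustrative assumptions, not values from the paper.

```python
import math

def emitter_current(q_driving, k=1.0):
    """Per-emitter electrospray current under the reported scaling:
    I = k * sqrt(Q_driving), independent of the driven (core) liquid's
    flow rate. k is an empirical, liquid-dependent constant."""
    return k * math.sqrt(q_driving)

def array_current(n_emitters, q_total, k=1.0):
    """Total current of a uniform array fed by one flow split evenly
    across the emitters."""
    return n_emitters * emitter_current(q_total / n_emitters, k)
```

One consequence of the scaling: at fixed total flow, splitting the feed across more emitters increases total current, since n * sqrt(Q/n) grows as sqrt(n).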
High throughput system for magnetic manipulation of cells, polymers, and biomaterials
Spero, Richard Chasen; Vicci, Leandra; Cribb, Jeremy; Bober, David; Swaminathan, Vinay; O’Brien, E. Timothy; Rogers, Stephen L.; Superfine, R.
2008-01-01
In the past decade, high throughput screening (HTS) has changed the way biochemical assays are performed, but manipulation and mechanical measurement of micro- and nanoscale systems have not benefited from this trend. Techniques using microbeads (particles ∼0.1–10 μm) show promise for enabling high throughput mechanical measurements of microscopic systems. We demonstrate instrumentation to magnetically drive microbeads in a biocompatible, multiwell magnetic force system. It is based on commercial HTS standards and is scalable to 96 wells. Cells can be cultured in this magnetic high throughput system (MHTS). The MHTS can apply independently controlled forces to 16 specimen wells. Force calibrations demonstrate forces in excess of 1 nN, predicted force saturation as a function of pole material, and power-law dependence of F ∼ r^(-2.7±0.1). We employ this system to measure the stiffness of SR2+ Drosophila cells. MHTS technology is a key step toward a high throughput screening system for micro- and nanoscale biophysical experiments. PMID:19044357
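An exponent like the −2.7 ± 0.1 in the force calibration is typically obtained by a linear least-squares fit on log-log axes. A generic sketch (function name and data invented, not taken from the paper):

```python
import math

def fit_power_law(rs, fs):
    """Least-squares fit of F = A * r**b on log-log axes.

    Taking logs turns the power law into the line
    log F = log A + b * log r, so an ordinary linear fit on
    (log r, log F) yields the exponent b and prefactor A.
    """
    xs = [math.log(r) for r in rs]
    ys = [math.log(f) for f in fs]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b
```

On noiseless synthetic data the fit recovers the generating parameters exactly; on real calibration data the residuals of the same fit give the quoted uncertainty on the exponent.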
Kračun, Stjepan Krešimir; Fangel, Jonatan Ulrik; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Vidal-Melgosa, Silvia; Willats, William George Tycho
2017-01-01
Cell walls are an important feature of plant cells and a major component of the plant glycome. They have both structural and physiological functions and are critical for plant growth and development. The diversity and complexity of these structures demand advanced high-throughput techniques to answer questions about their structure, functions and roles in both fundamental and applied scientific fields. Microarray technology provides both the high-throughput and the feasibility aspects required to meet that demand. In this chapter, some of the most recent microarray-based techniques relating to plant cell walls are described together with an overview of related contemporary techniques applied to carbohydrate microarrays and their general potential in glycoscience. A detailed experimental procedure for high-throughput mapping of plant cell wall glycans using the comprehensive microarray polymer profiling (CoMPP) technique is included in the chapter and provides a good example of both the robust and high-throughput nature of microarrays as well as their applicability to plant glycomics.
Identification of functional modules using network topology and high-throughput data.
Ulitsky, Igor; Shamir, Ron
2007-01-26
With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in analysis of high throughput data.
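The problem of finding connected, high-similarity sub-networks can be illustrated with a greedy seed-expansion heuristic. This is not the authors' algorithm, only a minimal sketch of the problem setting: grow a connected module while additions keep increasing total intra-module similarity.

```python
def greedy_module(graph, similarity, seed):
    """Grow a connected module from `seed`: repeatedly add the neighbor
    whose total similarity to the current module is largest, stopping
    when no candidate contributes positive similarity.

    graph:      {node: set_of_neighbors}
    similarity: {(u, v): score}, one key per unordered pair; missing
                pairs default to 0.
    """
    def sim(u, v):
        return similarity.get((u, v), similarity.get((v, u), 0.0))

    module = {seed}
    while True:
        frontier = set().union(*(graph[n] for n in module)) - module
        best, best_gain = None, 0.0
        for cand in sorted(frontier):  # sorted only for determinism
            gain = sum(sim(cand, m) for m in module)
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is None:
            return module
        module.add(best)
```

On a four-node chain with a negative-similarity edge at the end, the expansion stops before absorbing the dissimilar node, which captures the intended behavior of module boundaries.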
Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering.
Groen, Nathalie; Guvendiren, Murat; Rabitz, Herschel; Welsh, William J; Kohn, Joachim; de Boer, Jan
2016-04-01
The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. Copyright © 2016. Published by Elsevier Ltd.
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring multiple samples and conditions, so current automated cell counters that use single disposable counting slides are not practical for high-throughput screening assays. In recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing it directly to the Cellometer automated cell counter. Next, the cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required by the single-sample automated cell counter. In addition, this method can improve the efficiency of high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
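Once an image cytometer has classified cells as live (AO+/PI−) or dead (AO+/PI+), concentration and viability are simple arithmetic. A sketch with illustrative parameter names (the imaged volume and dilution factor are assumptions, not instrument specifics):

```python
def concentration_and_viability(live, dead, imaged_volume_uL, dilution=1.0):
    """Cell concentration (cells/mL) and viability (%) from AO/PI counts.

    AO stains all nucleated cells while PI enters only dead cells, so
    the classifier output is live = AO+/PI- and dead = AO+/PI+ counts
    within a known imaged volume.
    """
    total = live + dead
    concentration = total / (imaged_volume_uL * 1e-3) * dilution  # per mL
    viability = 100.0 * live / total if total else 0.0
    return concentration, viability
```

This is the normalization step the abstract refers to: downstream assay readouts are divided by concentration, and viability gates whether a well is usable at all.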
LittleQuickWarp: an ultrafast image warping tool.
Qu, Lei; Peng, Hanchuan
2015-02-01
Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
Fast infrared chemical imaging with a quantum cascade laser.
Yeh, Kevin; Kenkel, Seth; Liu, Jui-Nung; Bhargava, Rohit
2015-01-06
Infrared (IR) spectroscopic imaging systems are a powerful tool for visualizing molecular microstructure of a sample without the need for dyes or stains. Table-top Fourier transform infrared (FT-IR) imaging spectrometers, the current established technology, can record broadband spectral data efficiently but requires scanning the entire spectrum with a low throughput source. The advent of high-intensity, broadly tunable quantum cascade lasers (QCL) has now accelerated IR imaging but results in a fundamentally different type of instrument and approach, namely, discrete frequency IR (DF-IR) spectral imaging. While the higher intensity of the source provides a higher signal per channel, the absence of spectral multiplexing also provides new opportunities and challenges. Here, we couple a rapidly tunable QCL with a high performance microscope equipped with a cooled focal plane array (FPA) detector. Our optical system is conceptualized to provide optimal performance based on recent theory and design rules for high-definition (HD) IR imaging. Multiple QCL units are multiplexed together to provide spectral coverage across the fingerprint region (776.9 to 1904.4 cm^-1) in our DF-IR microscope capable of broad spectral coverage, wide-field detection, and diffraction-limited spectral imaging. We demonstrate that the spectral and spatial fidelity of this system is at least as good as the best FT-IR imaging systems. Our configuration provides a speedup for equivalent spectral signal-to-noise ratio (SNR) compared to the best spectral quality from a high-performance linear array system that has 10-fold larger pixels. Compared to the fastest available HD FT-IR imaging system, we demonstrate scanning of large tissue microarrays (TMA) in 3 orders of magnitude less time per essential spectral frequency. These advances offer new opportunities for high throughput IR chemical imaging, especially for the measurement of cells and tissues.
Accelerating the Design of Solar Thermal Fuel Materials through High Throughput Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Grossman, JC
2014-12-01
Solar thermal fuels (STF) store the energy of sunlight, which can then be released later in the form of heat, offering an emission-free and renewable solution for both solar energy conversion and storage. However, this approach is currently limited by the lack of low-cost materials with high energy density and high stability. In this Letter, we present an ab initio high-throughput computational approach to accelerate the design process and allow for searches over a broad class of materials. The high-throughput screening platform we have developed can run through large numbers of molecules composed of earth-abundant elements and identifies possible metastable structures of a given material. Corresponding isomerization enthalpies associated with the metastable structures are then computed. Using this high-throughput simulation approach, we have discovered molecular structures with high isomerization enthalpies that have the potential to be new candidates for high-energy density STF. We have also discovered physical principles to guide further STF materials design through structural analysis. More broadly, our results illustrate the potential of using high-throughput ab initio simulations to design materials that undergo targeted structural transitions.
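Downstream of the ab initio calculations, a screening funnel like this reduces to filtering and ranking candidates by computed properties. A minimal sketch: the field names ('dH' for isomerization enthalpy, 'barrier' as a stability proxy) and thresholds are illustrative assumptions, not the paper's criteria.

```python
def screen_stf_candidates(candidates, min_enthalpy, min_barrier=0.0):
    """Rank solar-thermal-fuel candidates by isomerization enthalpy
    (a proxy for stored energy density), keeping those that also meet
    a stability proxy: a reversion barrier of at least min_barrier.

    candidates: iterable of dicts with 'name', 'dH' (kJ/mol), and
    optionally 'barrier' (kJ/mol); all field names are illustrative.
    """
    hits = [c for c in candidates
            if c['dH'] >= min_enthalpy
            and c.get('barrier', 0.0) >= min_barrier]
    # Highest stored energy first.
    return sorted(hits, key=lambda c: c['dH'], reverse=True)
```

The tension the abstract describes is visible in the two thresholds: a candidate can fail on energy density (low dH) or on stability (low barrier), and a useful STF must clear both.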
40 CFR Table 3 to Subpart Eeee of... - Operating Limits-High Throughput Transfer Racks
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 CFR, Protection of Environment, Part 63, Subpart EEEE, Table 3: Operating Limits for High Throughput Transfer Racks. As stated in § 63.2346(e), you must comply with the operating limits for existing...
High-resolution μCT of a mouse embryo using a compact laser-driven X-ray betatron source.
Cole, Jason M; Symes, Daniel R; Lopes, Nelson C; Wood, Jonathan C; Poder, Kristjan; Alatabi, Saleh; Botchway, Stanley W; Foster, Peta S; Gratton, Sarah; Johnson, Sara; Kamperidis, Christos; Kononenko, Olena; De Lazzari, Michael; Palmer, Charlotte A J; Rusby, Dean; Sanderson, Jeremy; Sandholzer, Michael; Sarri, Gianluca; Szoke-Kovacs, Zsombor; Teboul, Lydia; Thompson, James M; Warwick, Jonathan R; Westerberg, Henrik; Hill, Mark A; Norris, Dominic P; Mangles, Stuart P D; Najmudin, Zulfikar
2018-06-19
In the field of X-ray microcomputed tomography (μCT) there is a growing need to reduce acquisition times at high spatial resolution (on the order of micrometers) to facilitate in vivo and high-throughput operations. The state of the art represented by synchrotron light sources is not practical for certain applications, and therefore the development of high-brightness laboratory-scale sources is crucial. We present here imaging of a fixed embryonic mouse sample using a compact laser-plasma-based X-ray light source and compare the results to images obtained using a commercial X-ray μCT scanner. The radiation is generated by the betatron motion of electrons inside a dilute and transient plasma, which circumvents the flux limitations imposed by the solid or liquid anodes used in conventional electron-impact X-ray tubes. This X-ray source is pulsed (duration <30 fs), bright (>10^10 photons per pulse), small (diameter <1 μm), and has a critical energy >15 keV. Stable X-ray performance enabled tomographic imaging of equivalent quality to that of the μCT scanner, an important confirmation of the suitability of the laser-driven source for applications. The X-ray flux achievable with this approach scales with the laser repetition rate without compromising the source size, which will allow the recording of high-resolution μCT scans in minutes. Copyright © 2018 the Author(s). Published by PNAS.
Analytical methods development for supramolecular design in solar hydrogen production
NASA Astrophysics Data System (ADS)
Brown, J. R.; Elvington, M.; Mongelli, M. T.; Zigler, D. F.; Brewer, K. J.
2006-08-01
In the investigation of alternative energy sources, specifically, solar hydrogen production from water, the ability to perform experiments with a consistent and reproducible light source is key to meaningful photochemistry. The design, construction, and evaluation of a series of LED array photolysis systems for high throughput photochemistry have been performed. Three array systems of increasing sophistication are evaluated using calorimetric measurements and potassium tris(oxalato)ferrate(II) chemical actinometry and compared with a traditional 1000 W Xe arc lamp source. The results are analyzed using descriptive statistics and analysis of variance (ANOVA). The third generation array is modular, and controllable in design. Furthermore, the third generation array system is shown to be comparable in both precision and photonic output to a 1000 W Xe arc lamp.
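The ANOVA comparison mentioned above reduces to the ratio of between-group to within-group variance. A minimal one-way F statistic in pure Python (the example data are invented, not the study's actinometry readings):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups
    (e.g., repeated actinometry readings per light source).

    F = (between-group mean square) / (within-group mean square);
    large F suggests the group means differ more than chance would
    predict from the within-group scatter.
    """
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Comparing the F value against the F distribution's critical value for (k − 1, n − k) degrees of freedom then gives the significance test used to decide whether the LED arrays and the Xe arc lamp are photonically comparable.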
Healy, B J; van der Merwe, D; Christaki, K E; Meghzifene, A
2017-02-01
Medical linear accelerators (linacs) and cobalt-60 machines are both mature technologies for external beam radiotherapy. A comparison is made between these two technologies in terms of infrastructure and maintenance, dosimetry, shielding requirements, staffing, costs, security, patient throughput and clinical use. Infrastructure and maintenance are more demanding for linacs due to the complex electric componentry. In dosimetry, a higher beam energy, modulated dose rate and smaller focal spot size mean that it is easier to create an optimised treatment with a linac for conformal dose coverage of the tumour while sparing healthy organs at risk. In shielding, the requirements for a concrete bunker are similar for cobalt-60 machines and linacs but extra shielding and protection from neutrons are required for linacs. Staffing levels can be higher for linacs, and more staff training is required. Life cycle costs are higher for linacs, especially multi-energy linacs. Security is more complex for cobalt-60 machines because of the high activity radioactive source. Patient throughput can be affected by source decay for cobalt-60 machines, but poor maintenance and breakdowns can severely affect patient throughput for linacs. In clinical use, more complex treatment techniques are easier to achieve with linacs, and the availability of electron beams on high-energy linacs can be useful for certain treatments. In summary, there is no simple answer to the question of the choice of either cobalt-60 machines or linacs for radiotherapy in low- and middle-income countries. In fact a radiotherapy department with a combination of technologies, including orthovoltage X-ray units, may be an option. Local needs, conditions and resources will have to be factored into any decision on technology, taking into account the characteristics of both forms of teletherapy, with the primary goal being the sustainability of the radiotherapy service over the useful lifetime of the equipment.
Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin
2008-11-01
Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally MRMer incorporates features that permit the quantitative analysis experiments including heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
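The core bookkeeping in an MRM analysis, grouping scans by precursor-product transition, integrating intensities, and forming heavy/light ratios, can be sketched as below. This is an illustrative simplification, not MRMer's actual data model or its mzXML parsing.

```python
def integrate_transitions(scans, ndigits=2):
    """Sum intensities per precursor->product transition pair, as one
    might after parsing MRM scans out of an mzXML file.

    scans: iterable of (precursor_mz, product_mz, intensity) tuples.
    Rounding m/z values to `ndigits` stands in for real tolerance-based
    matching of transitions.
    """
    totals = {}
    for prec, prod, inten in scans:
        key = (round(prec, ndigits), round(prod, ndigits))
        totals[key] = totals.get(key, 0.0) + inten
    return totals

def heavy_light_ratio(totals, heavy_key, light_key):
    """Quantitative ratio for a heavy/light isotopic peptide pair."""
    return totals[heavy_key] / totals[light_key]
```

Real implementations match transitions within an m/z tolerance window and integrate over chromatographic time rather than summing raw intensities, but the pairing-then-ratio structure is the same.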
NASA Astrophysics Data System (ADS)
Li, Xue; Hou, Guangyue; Xing, Junpeng; Song, Fengrui; Liu, Zhiqiang; Liu, Shuying
2014-12-01
In the present work, direct analysis of real time ionization combined with multi-stage tandem mass spectrometry (DART-MSn) was used to investigate the metabolic profile of aconite alkaloids in rat intestinal bacteria. A total of 36 metabolites from three aconite alkaloids were identified by using DART-MSn, and the feasibility of quantitative analysis of these analytes was examined. Key parameters of the DART ion source, such as helium gas temperature and pressure, the source-to-MS distance, and the speed of the autosampler, were optimized to achieve high sensitivity, enhance reproducibility, and reduce the occurrence of fragmentation. The instrument analysis time for one sample can be less than 10 s for this method. Compared with ESI-MS and UPLC-MS, the DART-MS is more efficient for directly detecting metabolic samples, and has the advantage of being a simple, high-speed, high-throughput method.
Enabling Large-Scale Biomedical Analysis in the Cloud
Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen
2013-01-01
Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. Such planet-scale data pose serious challenges for storage and computing technologies. Cloud computing is a promising alternative because it addresses both storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical research make this vast amount of diverse data meaningful and usable. PMID:24288665
Wetmore, Barbara A.; Wambaugh, John F.; Allen, Brittany; Ferguson, Stephen S.; Sochaski, Mark A.; Setzer, R. Woodrow; Houck, Keith A.; Strope, Cory L.; Cantwell, Katherine; Judson, Richard S.; LeCluyse, Edward; Clewell, Harvey J.; Thomas, Russell S.; Andersen, Melvin E.
2015-01-01
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast efforts expand (i.e., Phase II) beyond food-use pesticides toward a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated by the U.S. Environmental Protection Agency (EPA) ExpoCast program using probabilistic modeling and Bayesian approaches. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. This HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, 3 or 13 chemicals possessed AERs < 1 or < 100, respectively. Diverse bioactivities across a range of assays and concentrations were also noted across the wider chemical space surveyed. The availability of HT exposure estimation and bioactivity screening tools provides an opportunity to incorporate a risk-based strategy for use in testing prioritization. PMID:26251325
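The AER-based ranking described above reduces to a simple division and threshold check. The sketch below is illustrative only: the chemical names and dose values are invented, not taken from the study.

```python
# Hedged sketch (not EPA code): the activity-to-exposure ratio (AER) divides
# the lowest oral equivalent dose (OED, mg/kg/day) that produces in vitro
# bioactivity by the predicted exposure (mg/kg/day). AER < 1 flags chemicals
# whose predicted exposure reaches bioactive internal doses.

def activity_to_exposure_ratio(oed, exposure):
    return oed / exposure

chemicals = {
    # hypothetical illustrative values: (OED, predicted exposure)
    "chem_A": (0.05, 0.10),   # OED below exposure -> AER < 1
    "chem_B": (5.0, 0.10),    # AER = 50
    "chem_C": (500.0, 0.01),  # AER = 50000
}

aers = {name: activity_to_exposure_ratio(o, e) for name, (o, e) in chemicals.items()}
high_priority = [n for n, r in aers.items() if r < 1]
moderate = [n for n, r in aers.items() if r < 100]
print(high_priority, moderate)
```

The study's counts (3 chemicals with AER < 1, 13 with AER < 100 out of 163) correspond to exactly this kind of thresholding applied over the full chemical set.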
Velez-Suberbie, M Lourdes; Betts, John P J; Walker, Kelly L; Robinson, Colin; Zoro, Barney; Keshavarz-Moore, Eli
2018-01-01
High throughput automated fermentation systems have become a useful tool in early bioprocess development. In this study, we investigated a 24 × 15 mL single-use microbioreactor system, ambr 15f, designed for microbial culture. We compared the fed-batch growth and production capabilities of this system for two Escherichia coli strains, BL21 (DE3) and MC4100, and two industrially relevant molecules, hGH and scFv. In addition, different carbon sources were tested using bolus, linear or exponential feeding strategies, showing the capacity of the ambr 15f system to handle automated feeding. We used power per unit volume (P/V) as a scale criterion to compare the ambr 15f with 1 L stirred bioreactors which were previously scaled up to 20 L with a different biological system, thus showing a potential 1,300-fold scale comparability in terms of both growth and product yield. By exposing the cells grown in the ambr 15f system to a level of shear expected in an industrial centrifuge, we determined that the cells are as robust as those from a bench scale bioreactor. These results provide evidence that the ambr 15f system is an efficient high throughput microbial system that can be used for strain and molecule selection as well as rapid scale-up. © 2017 The Authors. Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers. Biotechnol. Prog., 34:58-68, 2018.
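The constant power-per-volume criterion used above can be made concrete with the standard turbulent stirred-tank relation P = Np·ρ·N³·D⁵. The sketch below is a generic illustration under assumed parameter values (power number, density, speeds, and diameters are invented), not the study's actual vessel data:

```python
# Hedged sketch of the constant P/V scale criterion. For a stirred tank in
# turbulent flow, P = Np * rho * N**3 * D**5 (Np: impeller power number,
# rho: broth density kg/m^3, N: impeller speed 1/s, D: impeller diameter m).

def power_per_volume(Np, rho, N, D, V):
    """Power input per unit volume in W/m^3 for working volume V (m^3)."""
    return Np * rho * N**3 * D**5 / V

def matched_speed(Np, rho, D, V, target_pv):
    """Impeller speed that reproduces a target P/V at a new scale."""
    return (target_pv * V / (Np * rho * D**5)) ** (1.0 / 3.0)

# Small scale: 15 mL working volume (illustrative geometry and speed)
pv_small = power_per_volume(Np=1.5, rho=1000.0, N=20.0, D=0.01, V=15e-6)
# Large scale: 1 L vessel; find the speed that matches the small-scale P/V
N_large = matched_speed(Np=1.5, rho=1000.0, D=0.05, V=1e-3, target_pv=pv_small)
print(round(pv_small, 1), round(N_large, 2))
```

Holding P/V constant while V grows 1,300-fold is what lets growth and yield be compared across scales, since specific power input is a dominant driver of mixing and oxygen transfer.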
Dawes, Timothy D; Turincio, Rebecca; Jones, Steven W; Rodriguez, Richard A; Gadiagellan, Dhireshan; Thana, Peter; Clark, Kevin R; Gustafson, Amy E; Orren, Linda; Liimatta, Marya; Gross, Daniel P; Maurer, Till; Beresini, Maureen H
2016-02-01
Acoustic droplet ejection (ADE) as a means of transferring library compounds has had a dramatic impact on the way in which high-throughput screening campaigns are conducted in many laboratories. Two Labcyte Echo ADE liquid handlers form the core of the compound transfer operation in our 1536-well based ultra-high-throughput screening (uHTS) system. Use of these instruments has promoted flexibility in compound formatting in addition to minimizing waste and eliminating compound carryover. We describe the use of ADE for the generation of assay-ready plates for primary screening as well as for follow-up dose-response evaluations. Custom software has enabled us to harness the information generated by the ADE instrumentation. Compound transfer via ADE also contributes to the screening process outside of the uHTS system. A second fully automated ADE-based system has been used to augment the capacity of the uHTS system as well as to permit efficient use of previously picked compound aliquots for secondary assay evaluations. Essential to the utility of ADE in the high-throughput screening process is the high quality of the resulting data. Examples of data generated at various stages of high-throughput screening campaigns are provided. Advantages and disadvantages of the use of ADE in high-throughput screening are discussed. © 2015 Society for Laboratory Automation and Screening.
An Automated High-Throughput System to Fractionate Plant Natural Products for Drug Discovery
Tu, Ying; Jeffries, Cynthia; Ruan, Hong; Nelson, Cynthia; Smithson, David; Shelat, Anang A.; Brown, Kristin M.; Li, Xing-Cong; Hester, John P.; Smillie, Troy; Khan, Ikhlas A.; Walker, Larry; Guy, Kip; Yan, Bing
2010-01-01
The development of an automated, high-throughput fractionation procedure to prepare and analyze natural product libraries for drug discovery screening is described. Natural products obtained from plant materials worldwide were extracted and first prefractionated on polyamide solid-phase extraction cartridges to remove polyphenols, followed by high-throughput automated fractionation, drying, weighing, and reformatting for screening and storage. The analysis of fractions with UPLC coupled with MS, PDA and ELSD detectors provides information that facilitates characterization of compounds in active fractions. Screening of a portion of fractions yielded multiple assay-specific hits in several high-throughput cellular screening assays. This procedure modernizes the traditional natural product fractionation paradigm by seamlessly integrating automation, informatics, and multimodal analytical interrogation capabilities. PMID:20232897
Watt, Eric D.; Hornung, Michael W.; Hedge, Joan M.; Judson, Richard S.; Crofton, Kevin M.; Houck, Keith A.; Simmons, Steven O.
2016-01-01
High-throughput screening for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limited in the U.S. Environmental Protection Agency ToxCast screening assay portfolio. To fill 1 critical screening gap, the Amplex UltraRed-thyroperoxidase (AUR-TPO) assay was developed to identify chemicals that inhibit TPO, as decreased TPO activity reduces TH synthesis. The ToxCast phase I and II chemical libraries, comprised of 1074 unique chemicals, were initially screened using a single, high concentration to identify potential TPO inhibitors. Chemicals positive in the single-concentration screen were retested in concentration-response. Due to high false-positive rates typically observed with loss-of-signal assays such as AUR-TPO, we also employed 2 additional assays in parallel to identify possible sources of nonspecific assay signal loss, enabling stratification of roughly 300 putative TPO inhibitors based upon selective AUR-TPO activity. A cell-free luciferase inhibition assay was used to identify nonspecific enzyme inhibition among the putative TPO inhibitors, and a cytotoxicity assay using a human cell line was used to estimate the cellular tolerance limit. Additionally, the TPO inhibition activities of 150 chemicals were compared between the AUR-TPO and an orthogonal peroxidase oxidation assay using guaiacol as a substrate to confirm the activity profiles of putative TPO inhibitors. This effort represents the most extensive TPO inhibition screening campaign to date and illustrates a tiered screening approach that focuses resources, maximizes assay throughput, and reduces animal use. PMID:26884060
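The tiered stratification logic described above (AUR-TPO activity filtered against nonspecific enzyme inhibition and cytotoxicity) can be sketched as a simple comparison of potencies. This is an illustrative reconstruction, not the actual ToxCast pipeline; the function name and AC50 values are invented:

```python
# Hedged sketch: a chemical counts as a selective TPO inhibitor only if it is
# active in the AUR-TPO assay at a concentration below both its nonspecific
# luciferase-inhibition potency and its cytotoxicity limit.

def classify(tpo_ac50, luc_ac50, cytotox_limit):
    """AC50 values in uM; None means inactive/not observed in that assay."""
    if tpo_ac50 is None:
        return "inactive"
    bounds = [x for x in (luc_ac50, cytotox_limit) if x is not None]
    nonspecific = min(bounds) if bounds else float("inf")
    return "selective" if tpo_ac50 < nonspecific else "nonselective"

print(classify(3.0, 50.0, 100.0))   # selective: TPO potency well below confounders
print(classify(30.0, 5.0, 100.0))   # nonselective: nonspecific signal loss dominates
print(classify(None, 5.0, 100.0))   # inactive in the primary assay
```

Running the parallel assays on every putative hit is what allows a loss-of-signal assay like AUR-TPO, which would otherwise have a high false-positive rate, to yield a stratified list of ~300 candidates.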
Landry, James P; Fei, Yiyan; Zhu, X D
2011-12-01
Small-molecule compounds remain the major source of therapeutic and preventative drugs. Developing new drugs against a protein target often requires screening large collections of compounds with diverse structures for ligands or ligand fragments that exhibit sufficient affinity and a desirable inhibitory effect on the target before further optimization and development. Since the number of small-molecule compounds is large, high-throughput screening (HTS) methods are needed. Small-molecule microarrays (SMM) on a solid support, in combination with a suitable binding assay, form a viable HTS platform. We demonstrate that by combining an oblique-incidence reflectivity difference optical scanner with SMM, we can screen 10,000 small-molecule compounds on a single glass slide for protein ligands without fluorescence labeling. Furthermore, using such a label-free assay platform we can simultaneously acquire binding curves of a solution-phase protein to over 10,000 immobilized compounds, thus enabling full characterization of protein-ligand interactions over a wide range of affinity constants.
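The binding curves acquired in parallel across the array are typically described by a 1:1 equilibrium isotherm. The sketch below shows the generic Langmuir form with invented values; it is an illustration of the curve shape being fit, not the authors' analysis code:

```python
# Hedged illustration: for each immobilized compound, the equilibrium signal
# versus solution-phase protein concentration C follows a 1:1 Langmuir
# isotherm, signal = Bmax * C / (Kd + C). Fitting Kd per spot characterizes
# affinity across the whole array.

def langmuir(c, bmax, kd):
    return bmax * c / (kd + c)

# By definition the signal reaches half of Bmax when C equals Kd
print(langmuir(1e-6, bmax=1.0, kd=1e-6))  # 0.5
```

Because every spot on the slide sees the same protein concentration series, one titration yields ~10,000 such curves simultaneously, which is what makes the wide affinity-constant coverage possible.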
Wasserberg, G; Kirsch, P; Rowton, E D
2014-06-01
A 3-chamber in-line olfactometer designed for use with sand flies is described and tested as a high-throughput method to screen honeys for attractiveness to Phlebotomus papatasi (four geographic isolates), P. duboscqi (two geographic isolates), and Lutzomyia longipalpis maintained in colonies at the Walter Reed Army Institute of Research. A diversity of unifloral honey odors were evaluated as a proxy for the natural floral odors that sand flies may use in orientation to floral sugar sources in the field. In the 3-chamber in-line olfactometer, the choice modules come directly off both sides of the release area instead of angling away as in the Y-tube olfactometer. Of the 25 honeys tested, five had a significant attraction for one or more of the sand fly isolates tested. This olfactometer and high-throughput method has utility for evaluating a diversity of natural materials with unknown complex odor blends that can then be down-selected for further evaluation in wind tunnels and/or field scenarios. © 2014 The Society for Vector Ecology.
21st century tools to prioritize contaminants for monitoring and ...
The webinar focused on ways that ToxCast high throughput screening data and the adverse outcome pathway framework, under development in the CSS program, can be used to prioritize environmental contaminants for monitoring and management. The work presented focused on case studies conducted in Region 8, in collaboration with EPA Region 8 and NEIC, as well as other federal (USGS, US FWS) and regional partners (Northern Colorado Plateau Network). The Consortium for Research and Education on Emerging Contaminants (CREEC) is a grass-roots 501(c)(3) non-profit organization comprised of world-class scientists and stakeholders with a shared interest in the source, fate, and physiological effects of contaminants of emerging concern (www.creec.net). As such, they represent an important group of stakeholders with an interest in applying the data, approaches, and tools that are being developed by the CSS program.
Applying 21st century tools to watersheds of the western US ...
The webinar focused on ways that ToxCast high throughput screening data and the adverse outcome pathway framework, under development in the CSS program, can be used to prioritize environmental contaminants for monitoring and management. The work presented focused on case studies conducted in Region 8, in collaboration with EPA Region 8 and NEIC, as well as other federal (USGS, US FWS) and regional partners (Northern Colorado Plateau Network). The Consortium for Research and Education on Emerging Contaminants (CREEC) is a grass-roots 501(c)(3) non-profit organization comprised of world-class scientists and stakeholders with a shared interest in the source, fate, and physiological effects of contaminants of emerging concern (www.creec.net). As such, they represent an important group of stakeholders with an interest in applying the data, approaches, and tools that are being developed by the CSS program.
Pyicos: a versatile toolkit for the analysis of high-throughput sequencing data.
Althammer, Sonja; González-Vallinas, Juan; Ballaré, Cecilia; Beato, Miguel; Eyras, Eduardo
2011-12-15
High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein-DNA and protein-RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or broad signals, CLIP-Seq and RNA-Seq. We prove the effectiveness of Pyicos to select for significant signals and show that its accuracy is comparable and sometimes superior to that of methods specifically designed for each particular type of experiment. Pyicos facilitates the analysis of a variety of HTS datatypes through its flexibility and memory efficiency, providing a useful framework for data integration into models of regulatory genomics. Open-source software, with tutorials and protocol files, is available at http://regulatorygenomics.upf.edu/pyicos or as a Galaxy server at http://regulatorygenomics.upf.edu/galaxy eduardo.eyras@upf.edu Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Hassib, Lamyaa
2005-06-01
Multicomponent polymer-based formulations of optical sensor materials are difficult and time-consuming to optimize using conventional approaches. To address these challenges, our long-term goal is to determine relationships between sensor formulation and sensor response parameters using new scientific methodologies. As the first step, we have designed and implemented an automated analytical instrumentation infrastructure for combinatorial and high-throughput development of polymeric sensor materials for optical sensors. Our approach is based on the fabrication and performance screening of discrete and gradient sensor arrays. Simultaneous formation of multiple sensor coatings into discrete 4×6, 6×8, and 8×12 element arrays (3-15 μL volume per element) and their screening provides not only a well-recognized acceleration in screening rate, but also considerably reduces or even eliminates sources of variability that randomly affect sensor response during conventional one-at-a-time sensor coating evaluation. The application of gradient sensor arrays provides additional capabilities for rapidly finding optimal formulation parameters.
Zhang, Wenli; Fu, Jun; Liu, Jing; Wang, Hailong; Schiwon, Maren; Janz, Sebastian; Schaffarczyk, Lukas; von der Goltz, Lukas; Ehrke-Schulz, Eric; Dörner, Johannes; Solanki, Manish; Boehme, Philip; Bergmann, Thorsten; Lieber, Andre; Lauber, Chris; Dahl, Andreas; Petzold, Andreas; Zhang, Youming; Stewart, A Francis; Ehrhardt, Anja
2017-05-23
Adenoviruses (Ads) are large human-pathogenic double-stranded DNA (dsDNA) viruses presenting an enormous natural diversity associated with a broad variety of diseases. However, only a small fraction of adenoviruses has been explored in basic virology and biomedical research, highlighting the need to develop robust and adaptable methodologies and resources. We developed a method for high-throughput direct cloning and engineering of adenoviral genomes from different sources utilizing advanced linear-linear homologous recombination (LLHR) and linear-circular homologous recombination (LCHR). We describe 34 cloned adenoviral genomes originating from clinical samples, which were characterized by next-generation sequencing (NGS). We anticipate that this recombineering strategy and the engineered adenovirus library will provide an approach to study basic and clinical virology. High-throughput screening (HTS) of the reporter-tagged Ad library in a panel of cell lines including osteosarcoma disease-specific cell lines revealed alternative virus types with enhanced transduction and oncolysis efficiencies. This highlights the usefulness of this resource. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Rocca-Serra, Philippe; Brandizi, Marco; Maguire, Eamonn; Sklyar, Nataliya; Taylor, Chris; Begley, Kimberly; Field, Dawn; Harris, Stephen; Hide, Winston; Hofmann, Oliver; Neumann, Steffen; Sterk, Peter; Tong, Weida; Sansone, Susanna-Assunta
2010-01-01
Summary: The first open source software suite for experimentalists and curators that (i) assists in the annotation and local management of experimental metadata from high-throughput studies employing one or a combination of omics and other technologies; (ii) empowers users to uptake community-defined checklists and ontologies; and (iii) facilitates submission to international public repositories. Availability and Implementation: Software, documentation, case studies and implementations at http://www.isa-tools.org Contact: isatools@googlegroups.com PMID:20679334
Integrating ecology into biotechnology.
McMahon, Katherine D; Martin, Hector Garcia; Hugenholtz, Philip
2007-06-01
New high-throughput culture-independent molecular tools are allowing the scientific community to characterize and understand the microbial communities underpinning environmental biotechnology processes in unprecedented ways. By creatively leveraging these new data sources, microbial ecology has the potential to transition from a purely descriptive to a predictive framework, in which ecological principles are integrated and exploited to engineer systems that are biologically optimized for the desired goal. But to achieve this goal, ecology, engineering and microbiology curricula need to be changed from the very root to better promote interdisciplinarity.
2011-01-01
Shotgun lipidome profiling relies on direct mass spectrometric analysis of total lipid extracts from cells, tissues or organisms and is a powerful tool to elucidate the molecular composition of lipidomes. We present a novel informatics concept of the molecular fragmentation query language implemented within the LipidXplorer open source software kit that supports accurate quantification of individual species of any ionizable lipid class in shotgun spectra acquired on any mass spectrometry platform. PMID:21247462
Homma, Keiichiro; Kambara, Makoto; Yoshida, Toyonobu
2014-04-01
Nanocomposite Si/SiOx powders were produced by plasma spray physical vapor deposition (PS-PVD) at a material throughput of 480 g h⁻¹. The powders are fundamentally an aggregate of primary ∼20 nm particles, which are composed of a crystalline Si core and SiOx shell structure. This is made possible by complete evaporation of raw SiO powders and subsequent rapid condensation of high-temperature SiOx vapors, followed by a disproportionation reaction of the nucleated SiOx nanoparticles. When CH4 was additionally introduced to the PS-PVD, the volume of the core Si increases while potentially reducing the SiOx shell thickness as a result of the enhanced SiO reduction, although an unfavorable SiC phase emerges when the C/Si molar ratio is greater than 1. As a result of the increased amount of Si active material and the reduced source of irreversible capacity, half-cell batteries made of PS-PVD powders with C/Si = 0.25 exhibited both improved initial efficiency and maintenance of capacity as high as 1000 mAh g⁻¹ after 100 cycles.
NASA Astrophysics Data System (ADS)
Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.
2017-05-01
A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications where a stable bit is needed to store information, in this work, the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily change the MTJ state into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64 × 64 MTJ array into CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very low energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of the voltage-controlled MTJ switching.
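The bit-generation principle described above can be mimicked with a trivial Monte Carlo model: each VCMA pulse lowers the barrier, and the junction relaxes thermally into one of its two stable states. The sketch below is a statistical caricature, not a physical device model; `p_parallel` stands in for the thermally set settling probability, ideally 0.5:

```python
import random

# Illustrative Monte Carlo sketch of the MRNG principle: after each VCMA
# pulse the free layer settles into the parallel (1) or antiparallel (0)
# state by thermal fluctuation, producing one random bit per pulse.

def mtj_random_bits(n, p_parallel=0.5, rng=None):
    rng = rng or random.Random()
    return [1 if rng.random() < p_parallel else 0 for _ in range(n)]

bits = mtj_random_bits(10000, rng=random.Random(42))
print(sum(bits) / len(bits))  # close to 0.5 for an unbiased junction
```

In the reported design, 64 × 64 junctions run such trials in parallel, which is how the array reaches 32 Gbps aggregate throughput.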
Ligand screening systems for human glucose transporters as tools in drug discovery
NASA Astrophysics Data System (ADS)
Schmidl, Sina; Iancu, Cristina V.; Choe, Jun-yong; Oreb, Mislav
2018-05-01
Hexoses are the major source of energy and carbon skeletons for biosynthetic processes in all kingdoms of life. Their cellular uptake is mediated by specialized transporters, including glucose transporters (GLUT, SLC2 gene family). Malfunction or altered expression patterns of GLUTs in humans are associated with several widespread diseases, including cancer, diabetes and severe metabolic disorders. Their high relevance in the medical area makes these transporters valuable drug targets and potential biomarkers. Nevertheless, the lack of a suitable high-throughput screening system has so far impeded the identification of compounds that would enable specific manipulation of GLUTs. The availability of structural data on several GLUTs has enabled in silico ligand screening, though this is limited by the fact that only two major conformations of the transporters can be tested. Recently, convenient high-throughput microbial and cell-free screening systems have been developed. These remarkable achievements lay the foundation for further, detailed elucidation of the molecular mechanisms of glucose transport and will also lead to great progress in the discovery of GLUT effectors as therapeutic agents. In this mini-review, we focus on recent efforts to identify potential GLUT-targeting drugs, based on a combination of structural biology and different assay systems.
BiNA: A Visual Analytics Tool for Biological Network Data
Gerasch, Andreas; Faber, Daniel; Küntzer, Jan; Niermann, Peter; Kohlbacher, Oliver; Lenhof, Hans-Peter; Kaufmann, Michael
2014-01-01
Interactive visual analysis of biological high-throughput data in the context of the underlying networks is an essential task in modern biomedicine with applications ranging from metabolic engineering to personalized medicine. The complexity and heterogeneity of data sets require flexible software architectures for data analysis. Concise and easily readable graphical representation of data and interactive navigation of large data sets are essential in this context. We present BiNA - the Biological Network Analyzer - a flexible open-source software for analyzing and visualizing biological networks. Highly configurable visualization styles for regulatory and metabolic network data offer sophisticated drawings and intuitive navigation and exploration techniques using hierarchical graph concepts. The generic projection and analysis framework provides powerful functionalities for visual analyses of high-throughput omics data in the context of networks, in particular for the differential analysis and the analysis of time series data. A direct interface to an underlying data warehouse provides fast access to a wide range of semantically integrated biological network databases. A plugin system allows simple customization and integration of new analysis algorithms or visual representations. BiNA is available under the 3-clause BSD license at http://bina.unipax.info/. PMID:24551056
High impact technologies for natural products screening.
Koehn, Frank E
2008-01-01
Natural products have historically been a rich source of lead molecules in drug discovery. However, natural products have been de-emphasized as high throughput screening resources in the recent past, in part because of difficulties in obtaining high quality natural products screening libraries, or in applying modern screening assays to these libraries. In addition, natural products programs based on screening of extract libraries, bioassay-guided isolation, structure elucidation and subsequent production scale-up are challenged to meet the rapid cycle times that are characteristic of the modern HTS approach. Fortunately, new technologies in mass spectrometry, NMR and other spectroscopic techniques can greatly facilitate the first components of the process - namely the efficient creation of high-quality natural products libraries, biomolecular target or cell-based screening, and early hit characterization. The success of any high throughput screening campaign is dependent on the quality of the chemical library. The construction and maintenance of a high quality natural products library, whether based on microbial, plant, marine or other sources, is a costly endeavor. The library itself may be composed of samples that are themselves mixtures - such as crude extracts, semi-pure mixtures or single purified natural products. Each of these library designs carries with it distinctive advantages and disadvantages. Crude extract libraries have lower resource requirements for sample preparation, but high requirements for identification of the bioactive constituents. Pre-fractionated libraries can be an effective strategy to alleviate interferences encountered with crude libraries, and may shorten the time needed to identify the active principle. Purified natural product libraries require substantial resources for preparation, but offer the advantage that the hit detection process is reduced to that of synthetic single component libraries.
Whether the natural products library consists of crude or partially fractionated mixtures, the library contents should be profiled to identify the known components present - a process known as dereplication. The use of mass spectrometry and HPLC-mass spectrometry together with spectral databases is a powerful tool in the chemometric profiling of bio-sources for natural product production. High throughput, high sensitivity flow NMR is an emerging tool in this area as well. Whether by cell based or biomolecular target based assays, screening of natural product extract libraries continues to furnish novel lead molecules for further drug development, despite challenges in the analysis and prioritization of natural products hits. Spectroscopic techniques are now being used to directly screen natural product and synthetic libraries. Mass spectrometry in the form of methods such as ESI-ICRFTMS, and FACS-MS as well as NMR methods such as SAR by NMR and STD-NMR have been utilized to effectively screen molecular libraries. Overall, emerging advances in mass spectrometry, NMR and other technologies are making it possible to overcome the challenges encountered in screening natural products libraries in today's drug discovery environment. As we apply these technologies and develop them even further, we can look forward to increased impact of natural products in the HTS based drug discovery.
Application of laser-wakefield-based x-ray source to global food security issues
NASA Astrophysics Data System (ADS)
Kieffer, J. C.; Fourmaux, S.; Hallin, E.; Arnison, P.; Brereton, N.; Pitre, F.; Dixon, M.; Tran, N.
2017-05-01
We present the development of a high-throughput phase-contrast screening system based on LWFA X-ray sources for plant imaging. We upgraded the INRS laser-betatron beamline and illustrate its imaging potential through the development of new tools for addressing issues relevant to global food security. This initiative, led by the Global Institute for Food Security (GIFS) at the University of Saskatchewan, aims to elucidate the part of the function that maps environmental inputs onto specific plant phenotypes. The prospect of correlating phenotypic expression with adaptation to environmental stresses will provide researchers with a new tool to assess breeding programs for crops meant to thrive under climate extremes.
Sources for Leads: Natural Products and Libraries.
van Herwerden, Eric F; Süssmuth, Roderich D
2016-01-01
Natural products have traditionally been a major source of leads in the drug discovery process. However, the development of high-throughput screening led to an increased interest in synthetic methods that enabled the rapid construction of large libraries of molecules. This resulted in the termination or downscaling of many natural product research programs, but the chemical libraries did not necessarily produce a larger amount of drug leads. On one hand, this chapter explores the current state of natural product research within the drug discovery process. On the other hand it evaluates the efforts made to increase the amount of leads generated from chemical libraries and considers what role natural products could play here.
High-throughput measurements of the optical redox ratio using a commercial microplate reader.
Cannon, Taylor M; Shah, Amy T; Walsh, Alex J; Skala, Melissa C
2015-01-01
There is a need for accurate, high-throughput, functional measures to gauge the efficacy of potential drugs in living cells. As an early marker of drug response in cells, cellular metabolism provides an attractive platform for high-throughput drug testing. Optical techniques can noninvasively monitor NADH and FAD, two autofluorescent metabolic coenzymes. The autofluorescent redox ratio, defined as the autofluorescence intensity of NADH divided by that of FAD, quantifies relative rates of cellular glycolysis and oxidative phosphorylation. However, current microscopy methods for redox ratio quantification are time-intensive and low-throughput, limiting their practicality in drug screening. Alternatively, high-throughput commercial microplate readers quickly measure fluorescence intensities for hundreds of wells. This study found that a commercial microplate reader can differentiate the receptor status of breast cancer cell lines (p < 0.05) based on redox ratio measurements without extrinsic contrast agents. Furthermore, microplate reader redox ratio measurements resolve response (p < 0.05) and lack of response (p > 0.05) in cell lines that are responsive and nonresponsive, respectively, to the breast cancer drug trastuzumab. These studies indicate that the microplate readers can be used to measure the redox ratio in a high-throughput manner and are sensitive enough to detect differences in cellular metabolism that are consistent with microscopy results.
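The redox ratio defined in this abstract is simple per-well arithmetic, which is what makes it amenable to plate-reader throughput. A minimal sketch, using made-up intensity values rather than data from the study:

```python
import numpy as np

# Optical redox ratio as defined in the abstract: NADH autofluorescence
# intensity divided by FAD autofluorescence intensity, computed per well.
# The intensity values below are illustrative placeholders, not measured data.
nadh = np.array([1200.0, 1100.0, 950.0])   # NADH channel counts per well
fad  = np.array([600.0,  550.0,  500.0])   # FAD channel counts per well

redox_ratio = nadh / fad                   # element-wise per-well ratio
print(redox_ratio.round(2))                # -> [2.  2.  1.9]
```

In practice, per-well ratios for treated and control plates would then be compared with a statistical test (the study reports p-values at the 0.05 level) to resolve responders from non-responders.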
Through-silicon via plating void metrology using focused ion beam mill
NASA Astrophysics Data System (ADS)
Rudack, A. C.; Nadeau, J.; Routh, R.; Young, R. J.
2012-03-01
3D IC integration continues to increase in complexity, employing advanced interconnect technologies such as through-silicon vias (TSVs), wafer-to-wafer (W2W) bonding, and multi-chip stacking. As always, the challenge with developing new processes is to get fast, effective feedback to the integration engineer. Ideally this data is provided by nondestructive in-line metrology, but this is not always possible. For example, some form of physical cross-sectioning is still the most practical way to detect and characterize TSV copper plating voids. This can be achieved by cleaving, followed by scanning electron microscope (SEM) inspection. A more effective physical cross-sectioning method has been developed using an automated dual-beam focused ion beam (FIB)-SEM system, in which multiple locations can be sectioned and imaged while leaving the wafer intact. This method has been used routinely to assess copper plating voids over the last 24 months at SEMATECH. FIB-SEM feedback has been used to evaluate new plating chemistries, plating recipes, and process tool requalification after downtime. The dual-beam FIB-SEM used for these studies employs a gallium-based liquid metal ion source (LMIS). The overall throughput of relatively large volumes being milled is limited to 3-4 hours per section due to the maximum available beam current of 20 nA. Despite the larger volumetric removal rates of other techniques (e.g., mechanical polishing, broad-ion milling, and laser ablation), the value of localized, site-specific, and artifact-free FIB milling is well appreciated. The challenge, therefore, has been to reap the desired FIB benefits, but at faster volume removal rates. This has led to several system and technology developments for improving the throughput of the FIB technique, the most recent being the introduction of FIBs based on an inductively coupled plasma (ICP) ion source.
The ICP source offers much better performance than the LMIS at very high beam currents, enabling more than 1 μA of ion beam current for fast material removal. At a lower current, the LMIS outperforms the ICP source, but imaging resolution below 30 nm has been demonstrated with ICP-based systems. In addition, the ICP source allows a wide range of possible ion species, with Xe currently the milling species of choice, due to its high mass and favorable ion source performance parameters. Using a 1 μA Xe beam will have an overall milling rate for silicon some 20X higher than a Ga beam operating at 65 nA. This paper will compare the benefits already seen using the Ga-based FIB-SEM approach to TSV metrology, with the improvements in throughput and time-to-data obtained by using the faster material removal capabilities of a FIB based on an ICP ion source. Plasma FIB (PFIB) is demonstrated to be a feasible tool for TSV plating void metrology.
Cluster secondary ion mass spectrometry microscope mode mass spectrometry imaging.
Kiss, András; Smith, Donald F; Jungmann, Julia H; Heeren, Ron M A
2013-12-30
Microscope mode imaging for secondary ion mass spectrometry is a technique with the promise of simultaneous high spatial resolution and high-speed imaging of biomolecules from complex surfaces. Technological developments such as new position-sensitive detectors, in combination with polyatomic primary ion sources, are required to exploit the full potential of microscope mode mass spectrometry imaging, i.e. to efficiently push the limits of ultra-high spatial resolution, sample throughput and sensitivity. In this work, a C60 primary source was combined with a commercial mass microscope for microscope mode secondary ion mass spectrometry imaging. The detector setup is a pixelated detector from the Medipix/Timepix family with high-voltage post-acceleration capabilities. The system's mass spectral and imaging performance is tested with various benchmark samples and thin tissue sections. The high secondary ion yield (with respect to 'traditional' monatomic primary ion sources) of the C60 primary ion source and the increased sensitivity of the high voltage detector setup improve microscope mode secondary ion mass spectrometry imaging. The analysis time and the signal-to-noise ratio are improved compared with other microscope mode imaging systems, all at high spatial resolution. We have demonstrated the unique capabilities of a C60 ion microscope with a Timepix detector for high spatial resolution microscope mode secondary ion mass spectrometry imaging. Copyright © 2013 John Wiley & Sons, Ltd.
High-throughput methods for characterizing the mechanical properties of coatings
NASA Astrophysics Data System (ADS)
Siripirom, Chavanin
The characterization of mechanical properties in a combinatorial and high-throughput workflow has been a bottleneck that reduced the speed of the materials development process. High-throughput characterization of mechanical properties was applied in this research in order to reduce the amount of sample handling and to accelerate the output. A puncture tester was designed and built to evaluate the toughness of materials using an innovative template design coupled with automation. The test is in the form of a circular free-film indentation. A single template contains 12 samples which are tested in a rapid serial approach. Next, the operational principles of a novel parallel dynamic mechanical-thermal analysis instrument were analyzed in detail for potential sources of errors. The test uses a model of a circular bilayer fixed-edge plate deformation. A total of 96 samples can be analyzed simultaneously, which provides a tremendous increase in efficiency compared with a conventional dynamic test. The modulus values determined by the system had considerable variation; the errors were characterized and improvements to the system were made. A finite element analysis was used to assess the accuracy of the closed-form solution with respect to testing geometries, such as sample thickness. Good control of sample thickness proved crucial to the accuracy and precision of the output. Next, an attempt was made to correlate the high-throughput experiments with conventional coating testing methods. Automated nanoindentation in dynamic mode was found to provide information on the near-surface modulus and could potentially correlate with the pendulum hardness test using the loss tangent component. Lastly, surface characterization of stratified siloxane-polyurethane coatings was carried out with X-ray photoelectron spectroscopy, Rutherford backscattering spectroscopy, transmission electron microscopy, and nanoindentation.
The siloxane component segregates to the surface during curing. The distribution of siloxane as a function of thickness into the sample showed differences depending on the formulation parameters. The coatings which had higher siloxane content near the surface were those coatings found to perform well in field tests.
He, Ji; Dai, Xinbin; Zhao, Xuechun
2007-02-09
BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. 
The PLAN web interface is platform-independent, easily configurable, capable of comprehensive expansion, and user-intuitive. PLAN is freely available to academic users at http://bioinfo.noble.org/plan/. The source code for local deployment is provided under a free license. Full support on system utilization, installation, configuration and customization is provided to academic users. PMID:17291345
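One of the post-BLAST tasks the abstract describes is filtering results online. As a minimal sketch (not PLAN's actual code), filtering tabular BLAST output (`-outfmt 6`, where column 11 holds the E-value) by an E-value cutoff looks like this:

```python
from io import StringIO

# Two fake BLAST tabular (-outfmt 6) hit lines for illustration;
# columns: query, subject, %identity, length, mismatches, gap opens,
# qstart, qend, sstart, send, evalue, bitscore.
blast_tab = StringIO(
    "q1\ts1\t98.5\t200\t3\t0\t1\t200\t5\t204\t1e-50\t180\n"
    "q1\ts2\t60.0\t150\t60\t2\t1\t150\t1\t148\t0.5\t40\n"
)

def filter_hits(handle, max_evalue=1e-5):
    """Keep hits whose E-value (column 11, index 10) is at or below the cutoff."""
    hits = []
    for line in handle:
        fields = line.rstrip("\n").split("\t")
        if float(fields[10]) <= max_evalue:
            hits.append(fields)
    return hits

kept = filter_hits(blast_tab)
print(len(kept))  # -> 1 (only the 1e-50 hit survives)
```

A real pipeline would read the handle from a BLAST output file and typically filter on bitscore or percent identity as well; the column layout above is the standard BLAST+ tabular format.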
A high-throughput in vitro ring assay for vasoactivity using magnetic 3D bioprinting
Tseng, Hubert; Gage, Jacob A.; Haisler, William L.; Neeley, Shane K.; Shen, Tsaiwei; Hebel, Chris; Barthlow, Herbert G.; Wagoner, Matthew; Souza, Glauco R.
2016-01-01
Vasoactive liabilities are typically assayed using wire myography, which is limited by its high cost and low throughput. To meet the demand for higher throughput in vitro alternatives, this study introduces a magnetic 3D bioprinting-based vasoactivity assay. The principle behind this assay is the magnetic printing of vascular smooth muscle cells into 3D rings that functionally represent blood vessel segments, whose contraction can be altered by vasodilators and vasoconstrictors. A cost-effective imaging modality employing a mobile device is used to capture contraction with high throughput. The goal of this study was to validate ring contraction as a measure of vasoactivity, using a small panel of known vasoactive drugs. In vitro responses of the rings matched outcomes predicted by in vivo pharmacology, and were supported by immunohistochemistry. Altogether, this ring assay robustly models vasoactivity, which could meet the need for higher throughput in vitro alternatives. PMID:27477945
High-throughput, image-based screening of pooled genetic variant libraries
Emanuel, George; Moffitt, Jeffrey R.; Zhuang, Xiaowei
2018-01-01
Image-based, high-throughput screening of genetic perturbations will advance both biology and biotechnology. We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in numerous individual cells. We achieve genotyping by introducing barcoded genetic variants into cells and using massively multiplexed FISH to measure the barcodes. We demonstrated this method by screening mutants of the fluorescent protein YFAST, yielding brighter and more photostable YFAST variants. PMID:29083401
Experimental Design for Combinatorial and High Throughput Materials Development
NASA Astrophysics Data System (ADS)
Cawse, James N.
2002-12-01
In the past decade, combinatorial and high throughput experimental methods have revolutionized the pharmaceutical industry, allowing researchers to conduct more experiments in a week than was previously possible in a year. Now high throughput experimentation is rapidly spreading from its origins in the pharmaceutical world to larger industrial research establishments such as GE and DuPont, and even to smaller companies and universities. Consequently, researchers need to know the kinds of problems, desired outcomes, and appropriate patterns for these new strategies. Editor James Cawse's far-reaching study identifies and applies, with specific examples, these important new principles and techniques. Experimental Design for Combinatorial and High Throughput Materials Development progresses from methods that are now standard, such as gradient arrays, to mathematical developments that are breaking new ground. The former will be particularly useful to researchers entering the field, while the latter should inspire and challenge advanced practitioners. The book's contents are contributed by leading researchers in their respective fields. Chapters include: -High Throughput Synthetic Approaches for the Investigation of Inorganic Phase Space -Combinatorial Mapping of Polymer Blends Phase Behavior -Split-Plot Designs -Artificial Neural Networks in Catalyst Development -The Monte Carlo Approach to Library Design and Redesign This book also contains over 200 useful charts and drawings. Industrial chemists, chemical engineers, materials scientists, and physicists working in combinatorial and high throughput chemistry will find James Cawse's study to be an invaluable resource.
TERRA REF: Advancing phenomics with high resolution, open access sensor and genomics data
NASA Astrophysics Data System (ADS)
LeBauer, D.; Kooper, R.; Burnette, M.; Willis, C.
2017-12-01
Automated plant measurement has the potential to improve understanding of genetic and environmental controls on plant traits (phenotypes). The application of sensors and software in the automation of high throughput phenotyping reflects a fundamental shift from labor intensive hand measurements to drone, tractor, and robot mounted sensing platforms. These tools are expected to speed the rate of crop improvement by enabling plant breeders to more accurately select plants with improved yields, resource use efficiency, and stress tolerance. However, there are many challenges facing high throughput phenomics: sensors and platforms are expensive, currently there are few standard methods of data collection and storage, and the analysis of large data sets requires high performance computers and automated, reproducible computing pipelines. To overcome these obstacles and advance the science of high throughput phenomics, the TERRA Phenotyping Reference Platform (TERRA-REF) team is developing an open-access database of high resolution sensor data. TERRA REF is an integrated field and greenhouse phenotyping system that includes: a reference field scanner with fifteen sensors that can generate terabytes of data each day at mm resolution; UAV, tractor, and fixed field sensing platforms; and an automated controlled-environment scanner. These platforms will enable investigation of diverse sensing modalities, and the investigation of traits under controlled and field environments. It is the goal of TERRA REF to lower the barrier to entry for academic and industry researchers by providing high-resolution data, open source software, and online computing resources. Our project is unique in that all data will be made fully public in November 2018, and is already available to early adopters through the beta-user program.
We will describe the datasets and how to use them as well as the databases and computing pipeline and how these can be reused and remixed in other phenomics pipelines. Finally, we will describe the National Data Service workbench, a cloud computing platform that can access the petabyte scale data while supporting reproducible research.
Deciphering the genomic targets of alkylating polyamide conjugates using high-throughput sequencing
Chandran, Anandhakumar; Syed, Junetha; Taylor, Rhys D.; Kashiwazaki, Gengo; Sato, Shinsuke; Hashiya, Kaori; Bando, Toshikazu; Sugiyama, Hiroshi
2016-01-01
Chemically engineered small molecules targeting specific genomic sequences play an important role in drug development research. Pyrrole-imidazole polyamides (PIPs) are a group of molecules that can bind to the DNA minor-groove and can be engineered to target specific sequences. Their biological effects rely primarily on their selective DNA binding. However, the binding mechanism of PIPs at the chromatinized genome level is poorly understood. Herein, we report a method using high-throughput sequencing to identify the DNA-alkylating sites of PIP-indole-seco-CBI conjugates. High-throughput sequencing analysis of conjugate 2 showed highly similar DNA-alkylating sites on synthetic oligos (histone-free DNA) and on human genomes (chromatinized DNA context). To our knowledge, this is the first report identifying alkylation sites across genomic DNA by alkylating PIP conjugates using high-throughput sequencing. PMID:27098039
Rodrigues, Jorge L. M.; Serres, Margrethe H.; Tiedje, James M.
2011-01-01
The use of comparative genomics for the study of different microbiological species has increased substantially as sequence technologies become more affordable. However, efforts to fully link a genotype to its phenotype remain limited to the development of one mutant at a time. In this study, we provided a high-throughput alternative to this limiting step by coupling comparative genomics to the use of phenotype arrays for five sequenced Shewanella strains. Positive phenotypes were obtained for 441 nutrients (C, N, P, and S sources), with N-based compounds being the most utilized for all strains. Many genes and pathways predicted by genome analyses were confirmed with the comparative phenotype assay, and three degradation pathways believed to be missing in Shewanella were confirmed as missing. A number of previously unknown gene products were predicted to be parts of pathways or to have a function, expanding the number of gene targets for future genetic analyses. Ecologically, the comparative high-throughput phenotype analysis provided insights into niche specialization among the five different strains. For example, Shewanella amazonensis strain SB2B, isolated from the Amazon River delta, was capable of utilizing 60 C compounds, whereas Shewanella sp. strain W3-18-1, isolated from deep marine sediment, utilized only 25 of them. In spite of the large number of nutrient sources yielding positive results, our study indicated that, except for the N sources, they were not sufficiently informative to predict growth phenotypes across increasing evolutionary distances. Our results indicate the importance of phenotypic evaluation for confirming genome predictions. This strategy will accelerate the functional discovery of genes and provide an ecological framework for microbial genome sequencing projects. PMID:21642407
Development of rapid and sensitive high throughput pharmacologic assays for marine phycotoxins.
Van Dolah, F M; Finley, E L; Haynes, B L; Doucette, G J; Moeller, P D; Ramsdell, J S
1994-01-01
The lack of rapid, high throughput assays is a major obstacle to many aspects of research on marine phycotoxins. Here we describe the application of microplate scintillation technology to develop high throughput assays for several classes of marine phycotoxin based on their differential pharmacologic actions. High throughput "drug discovery" format microplate receptor binding assays developed for brevetoxins/ciguatoxins and for domoic acid are described. Analysis for brevetoxins/ciguatoxins is carried out by binding competition with [3H] PbTx-3 for site 5 on the voltage dependent sodium channel in rat brain synaptosomes. Analysis of domoic acid is based on binding competition with [3H] kainic acid for the kainate/quisqualate glutamate receptor using frog brain synaptosomes. In addition, a high throughput microplate 45Ca flux assay for determination of maitotoxins is described. These microplate assays can be completed within 3 hours, have sensitivities of less than 1 ng, and can analyze dozens of samples simultaneously. The assays have been demonstrated to be useful for assessing algal toxicity and for assay-guided purification of toxins, and are applicable to the detection of biotoxins in seafood.
A Fabry-Perot interferometric imaging spectrometer in LWIR
NASA Astrophysics Data System (ADS)
Zhang, Fang; Gao, Jiaobo; Wang, Nan; Wu, Jianghui; Meng, Hemin; Zhang, Lei; Gao, Shan
2017-02-01
With applications ranging from the desktop to remote sensing, long wave infrared (LWIR) interferometric spectral imaging systems have typically been bulky and heavy. To miniaturize and lighten the instrument, a new LWIR spectral imaging method based on a variable-gap Fabry-Perot (FP) interferometer is investigated. After analyzing the system's working principle, we show how to determine the primary parameters, such as the wedge angle of the interferometric cavity and the f-number of the imaging lens, and derive the relationship between the wedge angle and the modulation of the interferogram. A prototype was developed, and good experimental results were obtained with both a uniform radiation source and a monochromatic source. The research shows that besides high throughput and high spectral resolution, this method also achieves miniaturization.
High-Throughput/High-Content Screening Assays with Engineered Nanomaterials in ToxCast
High-throughput and high-content screens are attractive approaches for prioritizing nanomaterial hazards and informing targeted testing due to the impracticality of using traditional toxicological testing on the large numbers and varieties of nanomaterials. The ToxCast program a...
Moore, Priscilla A; Kery, Vladimir
2009-01-01
High-throughput protein purification is a complex, multi-step process. There are several technical challenges in the course of this process that are not experienced when purifying a single protein. Among the most challenging are the high-throughput protein concentration and buffer exchange, which are not only labor-intensive but can also result in significant losses of purified proteins. We describe two methods of high-throughput protein concentration and buffer exchange: one using ammonium sulfate precipitation and one using micro-concentrating devices based on membrane ultrafiltration. We evaluated the efficiency of both methods on a set of 18 randomly selected purified proteins from Shewanella oneidensis. While both methods provide similar yield and efficiency, the ammonium sulfate precipitation is much less labor intensive and time consuming than the ultrafiltration.
Tiered High-Throughput Screening Approach to Identify ...
High-throughput screening (HTS) for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limited in the US EPA ToxCast screening assay portfolio. To fill one critical screening gap, the Amplex UltraRed-thyroperoxidase (AUR-TPO) assay was developed to identify chemicals that inhibit TPO, as decreased TPO activity reduces TH synthesis. The ToxCast Phase I and II chemical libraries, comprised of 1,074 unique chemicals, were initially screened using a single, high concentration to identify potential TPO inhibitors. Chemicals positive in the single concentration screen were retested in concentration-response. Due to high false positive rates typically observed with loss-of-signal assays such as AUR-TPO, we also employed two additional assays in parallel to identify possible sources of nonspecific assay signal loss, enabling stratification of roughly 300 putative TPO inhibitors based upon selective AUR-TPO activity. A cell-free luciferase inhibition assay was used to identify nonspecific enzyme inhibition among the putative TPO inhibitors, and a cytotoxicity assay using a human cell line was used to estimate the cellular tolerance limit. Additionally, the TPO inhibition activities of 150 chemicals were compared between the AUR-TPO and an orthogonal peroxidase oxidation assay using
NASA Astrophysics Data System (ADS)
Godino, Neus; Jorde, Felix; Lawlor, Daryl; Jaeger, Magnus; Duschl, Claus
2015-08-01
Microalgae are a promising source of bioactive ingredients for the food, pharmaceutical and cosmetic industries. Every microalgae research group or production facility faces one major problem: potential contamination of the algal culture with bacteria. Prior to the storage of the microalgae in strain collections or to cultivation in bioreactors, it is necessary to carry out laborious purification procedures to separate the microalgae from the undesired bacterial cells. In this work, we present a disposable microfluidic cartridge for the high-throughput purification of microalgae samples based on inertial microfluidics. Some of the most relevant microalgae strains are larger than the relatively small, few-micron bacterial cells, making them distinguishable by size. The inertial microfluidic cartridge was fabricated with inexpensive materials, such as pressure-sensitive adhesive (PSA) and thin plastic layers, which were patterned using a simple cutting plotter. In spite of fabrication restrictions and the intrinsic difficulties of biological samples, the separation of microalgae from bacteria reached values in excess of 99%, previously only achieved using conventional high-end, high-cost lithography methods. Moreover, because the separation is simple and high-throughput, purification stages can be concatenated in series to exponentially decrease the absolute amount of bacteria in the final purified sample.
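The exponential decrease from concatenated purification stages follows directly from multiplying per-stage carryover fractions. A minimal sketch, assuming (as the abstract's >99% figure suggests) each pass removes 99% of the remaining bacteria:

```python
# If each pass removes a fraction `removal_per_stage` of the bacteria,
# the fraction remaining after `stages` serial passes is
# (1 - removal_per_stage) ** stages, i.e. exponential decay per stage.
# The 0.99 removal fraction is taken from the abstract's >99% separation figure.
def residual_fraction(removal_per_stage=0.99, stages=1):
    return (1.0 - removal_per_stage) ** stages

print(round(residual_fraction(stages=1), 10))  # -> 0.01
print(round(residual_fraction(stages=2), 10))  # -> 0.0001
```

So two concatenated 99% stages already reduce bacterial carryover to one part in ten thousand, which is the abstract's rationale for serial purification.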
GermOnline 4.0 is a genomics gateway for germline development, meiosis and the mitotic cell cycle.
Lardenois, Aurélie; Gattiker, Alexandre; Collin, Olivier; Chalmel, Frédéric; Primig, Michael
2010-01-01
GermOnline 4.0 is a cross-species database portal focusing on high-throughput expression data relevant for germline development, the meiotic cell cycle and mitosis in healthy versus malignant cells. It is thus a source of information for life scientists as well as clinicians who are interested in gene expression and regulatory networks. The GermOnline gateway provides unlimited access to information produced with high-density oligonucleotide microarrays (3'-UTR GeneChips), genome-wide protein-DNA binding assays and protein-protein interaction studies in the context of Ensembl genome annotation. Samples used to produce high-throughput expression data and to carry out genome-wide in vivo DNA binding assays are annotated via the MIAME-compliant Multiomics Information Management and Annotation System (MIMAS 3.0). Furthermore, the Saccharomyces Genomics Viewer (SGV) was developed and integrated into the gateway. SGV is a visualization tool that outputs genome annotation and DNA-strand specific expression data produced with high-density oligonucleotide tiling microarrays (Sc_tlg GeneChips) which cover the complete budding yeast genome on both DNA strands. It facilitates the interpretation of expression levels and transcript structures determined for various cell types cultured under different growth and differentiation conditions. Database URL: www.germonline.org/ PMID:21149299
NASA Astrophysics Data System (ADS)
Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.
2018-02-01
Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of >10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes from both amplitude and phase images. Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy: >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell-cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
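The phase-to-dry-mass conversion underlying QPM phenotypes like those above can be sketched as follows. This is a minimal illustration of the standard QPM relation (dry mass proportional to the integrated optical path difference divided by the specific refraction increment, alpha ≈ 0.18-0.2 µm³/pg); the wavelength, pixel size, and phase values below are invented, not taken from this study.

```python
# Hedged sketch of the standard QPM dry-mass relation (not this paper's code):
# m = (lambda / (2*pi*alpha)) * sum(phase) * pixel_area,
# with alpha the specific refraction increment (~0.18 um^3/pg, an assumption).
import numpy as np

def dry_mass_pg(phase_map, wavelength_um=0.532, pixel_area_um2=0.25,
                alpha_um3_per_pg=0.18):
    """Estimate cell dry mass (pg) from a quantitative phase image (radians)."""
    opd = wavelength_um * phase_map / (2 * np.pi)  # optical path difference, um
    return opd.sum() * pixel_area_um2 / alpha_um3_per_pg

# Illustrative: a 64x64 pixel cell with a uniform 0.5 rad phase shift
phase = np.full((64, 64), 0.5)
print(round(dry_mass_pg(phase), 1))
```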
Testing of focal plane arrays at the AEDC
NASA Astrophysics Data System (ADS)
Nicholson, Randy A.; Mead, Kimberly D.; Smith, Robert W.
1992-07-01
A facility was developed at the Arnold Engineering Development Center (AEDC) to provide complete radiometric characterization of focal plane arrays (FPAs). The highly versatile facility provides the capability to test single detectors, detector arrays, and hybrid FPAs. The primary component of the AEDC test facility is the Focal Plane Characterization Chamber (FPCC). The FPCC provides a cryogenic, low-background environment for the test focal plane. Focal plane testing in the FPCC includes flood source testing, during which the array is uniformly irradiated with IR radiation, and spot source testing, during which the target radiation is focused onto a single pixel or group of pixels. During flood source testing, performance parameters such as power consumption, responsivity, noise equivalent input, dynamic range, radiometric stability, recovery time, and array uniformity can be assessed. Crosstalk is evaluated during spot source testing. Spectral response testing is performed in a spectral response test station using a three-grating monochromator. Because the chamber can accommodate several types of testing in a single test installation, a high throughput rate and good economy of operation are possible.
Repurposing a Benchtop Centrifuge for High-Throughput Single-Molecule Force Spectroscopy.
Yang, Darren; Wong, Wesley P
2018-01-01
We present high-throughput single-molecule manipulation using a benchtop centrifuge, overcoming limitations common in other single-molecule approaches such as high cost, low throughput, technical difficulty, and strict infrastructure requirements. An inexpensive and compact Centrifuge Force Microscope (CFM) adapted to a commercial centrifuge enables use by nonspecialists, and integration with DNA nanoswitches facilitates both reliable measurements and repeated molecular interrogation. Here, we provide detailed protocols for constructing the CFM, creating DNA nanoswitch samples, and carrying out single-molecule force measurements.
High throughput single cell counting in droplet-based microfluidics.
Lu, Heng; Caen, Ouriel; Vrignon, Jeremy; Zonta, Eleonora; El Harrak, Zakaria; Nizard, Philippe; Baret, Jean-Christophe; Taly, Valérie
2017-05-02
Droplet-based microfluidics is extensively and increasingly used for high-throughput single-cell studies. However, the accuracy of the cell counting method directly impacts the robustness of such studies. We describe here a simple and precise method to accurately count a large number of adherent and non-adherent human cells as well as bacteria. Our microfluidic hemocytometer provides statistically relevant data on large populations of cells at a high-throughput, used to characterize cell encapsulation and cell viability during incubation in droplets.
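For context, cell encapsulation in droplets of the kind characterized above is commonly modeled with Poisson statistics; a minimal sketch, with an arbitrary mean occupancy that is not from this study:

```python
# Illustrative Poisson loading model for droplet encapsulation (a standard
# assumption, not a result from this paper): P(k cells) = lam^k e^-lam / k!
from math import exp, factorial

def p_cells(k, lam):
    """Probability a droplet contains exactly k cells at mean occupancy lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 0.1  # dilute loading: ~1 cell per 10 droplets (made-up value)
print(round(p_cells(0, lam), 4))  # 0.9048: empty droplets
print(round(p_cells(1, lam), 4))  # 0.0905: single-cell droplets
```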
2016-12-01
AWARD NUMBER: W81XWH-13-1-0371. TITLE: High-Throughput Sequencing of Germline and Tumor From Men with Early-Onset Metastatic Prostate Cancer. DATES COVERED: 30 Sep 2013 - 29 Sep 2016. ...presenting with metastatic prostate cancer at a young age (before age 60 years). Whole exome sequencing identified a panel of germline variants that have
Comparative toxicity assessment of particulate matter (PM) from different sources will potentially inform the understanding of regional differences in PM-induced cardiac health effects by identifying PM sources linked to highest potency components. Conventional low-throughput in...
Test and Evaluation of WiMAX Performance Using Open-Source Modeling and Simulation Software Tools
2010-12-01
specific needs. For instance, one may seek to maximize the system throughput while maximizing the number of transmitted data packets with hard...seeking to maximize the throughput of the system (Yu 2008; Pishdad and Rabiee 2008; Piro et al. 2010; Wongthavarawat and Ganz 2003; Mohammadi, Akl, and...testing environment provides tools to allow for setting up and running test environments over multiple systems (buildbot) and provides classes to
Wang, Shanyun; Wang, Weidong; Liu, Lu; Zhuang, Linjie; Zhao, Siyan; Su, Yu; Li, Yixiao; Wang, Mengzi; Wang, Cheng; Xu, Liya; Zhu, Guibing
2018-05-24
Artificial microbial nitrogen (N) cycle hotspots in the plant-bed/ditch system were developed and investigated based on intact-core and slurry assay measurements using isotopic tracing, quantitative PCR and high-throughput sequencing. By increasing hydraulic retention time and periodically fluctuating the water level in heterogeneous riparian zones, hotspots of anammox, nitrification, denitrification, ammonium (NH4+) oxidation, nitrite (NO2-) oxidation, nitrate (NO3-) reduction and DNRA were all stimulated at the interface sediments, with abundance and activity about 1-3 orders of magnitude higher than in nonhotspots. Isotopic pairing experiments revealed that in microbial hotspots, nitrite sources exceeded sinks, and both NH4+ oxidation (55.8%) and NO3- reduction (44.2%) provided nitrite for anammox, which accounted for 43.0% of N-loss and 44.4% of NH4+ removal in riparian zones without incurring nitrous oxide (N2O) emission risks. High-throughput analysis identified that bacterial quorum sensing mediated this anammox hotspot, with B. fulgida dominating the anammox community, although B. anammoxidans and Jettenia sp. contributed more to anammox activity. In the nonhotspot zones, the NO2- source (dominated by NO3- reduction) was lower than the sink, limiting the effects on anammox. In situ N2O flux measurement showed that the microbial hotspot had a 27.1% lower N2O emission flux than the nonhotspot zones.
A computational genomics pipeline for prokaryotic sequencing projects.
Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King
2010-08-01
New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.
Amsden, Jason J; Herr, Philip J; Landry, David M W; Kim, William; Vyas, Raul; Parker, Charles B; Kirley, Matthew P; Keil, Adam D; Gilchrist, Kristin H; Radauscher, Erich J; Hall, Stephen D; Carlson, James B; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T; Russell, Zachary E; Grego, Sonia; Edwards, Steven J; Sperline, Roger P; Denton, M Bonner; Stoner, Brian R; Gehm, Michael E; Glass, Jeffrey T
2018-02-01
Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.
NASA Astrophysics Data System (ADS)
Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.
2018-02-01
Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.
Fixed target matrix for femtosecond time-resolved and in situ serial micro-crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, C.; Marx, A.; Epp, S. W.
We present a crystallography chip enabling in situ room temperature crystallography at microfocus synchrotron beamlines and X-ray free-electron laser (XFEL) sources. Compared to other in situ approaches, we observe extremely low background and high diffraction data quality. The chip design is robust and allows fast and efficient loading of thousands of small crystals. The ability to load a large number of protein crystals, at room temperature and with high efficiency, into prescribed positions enables high-throughput automated serial crystallography with microfocus synchrotron beamlines. In addition, we demonstrate the application of this chip for femtosecond time-resolved serial crystallography at the Linac Coherent Light Source (LCLS, Menlo Park, California, USA). As a result, the chip concept enables multiple images to be acquired from each crystal, allowing differential detection of changes in diffraction intensities in order to obtain high signal-to-noise and fully exploit the time resolution capabilities of XFELs.
Fixed target matrix for femtosecond time-resolved and in situ serial micro-crystallography
Mueller, C.; Marx, A.; Epp, S. W.; ...
2015-08-18
We present a crystallography chip enabling in situ room temperature crystallography at microfocus synchrotron beamlines and X-ray free-electron laser (XFEL) sources. Compared to other in situ approaches, we observe extremely low background and high diffraction data quality. The chip design is robust and allows fast and efficient loading of thousands of small crystals. The ability to load a large number of protein crystals, at room temperature and with high efficiency, into prescribed positions enables high-throughput automated serial crystallography with microfocus synchrotron beamlines. In addition, we demonstrate the application of this chip for femtosecond time-resolved serial crystallography at the Linac Coherent Light Source (LCLS, Menlo Park, California, USA). As a result, the chip concept enables multiple images to be acquired from each crystal, allowing differential detection of changes in diffraction intensities in order to obtain high signal-to-noise and fully exploit the time resolution capabilities of XFELs.
High-throughput sequencing methods to study neuronal RNA-protein interactions.
Ule, Jernej
2009-12-01
UV-cross-linking and RNase protection, combined with high-throughput sequencing, have provided global maps of RNA sites bound by individual proteins or ribosomes. Using a stringent purification protocol, UV-CLIP (UV-cross-linking and immunoprecipitation) was able to identify intronic and exonic sites bound by splicing regulators in mouse brain tissue. Ribosome profiling has been used to quantify ribosome density on budding yeast mRNAs under different environmental conditions. Post-transcriptional regulation in neurons requires high spatial and temporal precision, as is evident from the role of localized translational control in synaptic plasticity. It remains to be seen if the high-throughput methods can be applied quantitatively to study the dynamics of RNP (ribonucleoprotein) remodelling in specific neuronal populations during the neurodegenerative process. It is certain, however, that applications of new biochemical techniques followed by high-throughput sequencing will continue to provide important insights into the mechanisms of neuronal post-transcriptional regulation.
High-throughput and high-content screens are attractive approaches for prioritizing nanomaterial hazards and informing targeted testing due to the impracticality of using traditional toxicological testing on the large numbers and varieties of nanomaterials. The ToxCast program a...
OH absorption spectroscopy in a flame using spatial heterodyne spectroscopy
NASA Astrophysics Data System (ADS)
Bartula, Renata J.; Ghandhi, Jaal B.; Sanders, Scott T.; Mierkiewicz, Edwin J.; Roesler, Fred L.; Harlander, John M.
2007-12-01
We demonstrate measurements of OH absorption spectra in the post-flame zone of a McKenna burner using spatial heterodyne spectroscopy (SHS). SHS permits high-resolution, high-throughput measurements. In this case the spectra span ~308-310 nm with a resolution of 0.03 nm, even though an extended source (extent of ~2×10⁻⁷ m² rad²) was used. The high spectral resolution is important for interpreting spectra when multiple absorbers are present, for inferring accurate gas temperatures from measured spectra, and for monitoring weak absorbers. The present measurement paves the way for absorption spectroscopy by SHS in practical combustion devices, such as reciprocating and gas-turbine engines.
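The quantitative basis of absorption measurements like this is the Beer-Lambert law; a hedged one-line illustration with invented cross-section, number density, and path length (not values from this experiment):

```python
# Beer-Lambert sketch (generic relation, not this paper's analysis):
# transmitted intensity I = I0 * exp(-sigma * n * L), absorbance A = ln(I0/I).
from math import exp, log

def absorbance(I0, I):
    """Base-e absorbance from incident and transmitted intensities."""
    return log(I0 / I)

sigma, n, L = 1e-16, 1e15, 0.05  # cm^2, cm^-3, cm -- illustrative values only
I0 = 1.0
I = I0 * exp(-sigma * n * L)
print(round(absorbance(I0, I), 4))  # recovers sigma*n*L = 0.005
```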
Draveling, C; Ren, L; Haney, P; Zeisse, D; Qoronfleh, M W
2001-07-01
The revolution in genomics and proteomics is having a profound impact on drug discovery. Today's protein scientist demands a faster, easier, more reliable way to purify proteins. A new high-capacity, high-throughput technology has been developed at Perbio Sciences for affinity protein purification. This technology utilizes selected chromatography media that are dehydrated to form uniform aggregates. The SwellGel aggregates instantly rehydrate upon addition of the protein sample, allowing purification and direct performance of multiple assays in a variety of formats. SwellGel technology has greater stability and is easier to handle than standard wet chromatography resins. The microplate format of this technology provides high-capacity, high-throughput features, recovering milligram quantities of protein suitable for high-throughput screening or biophysical/structural studies. Data will be presented applying SwellGel technology to recombinant 6×His-tagged protein and glutathione S-transferase (GST) fusion protein purification. Copyright 2001 Academic Press.
NASA Astrophysics Data System (ADS)
Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela
2016-10-01
Next-generation drug screening could benefit greatly from in vivo studies using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or at high throughput with low resolution. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip at micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.
The ToxCast Dashboard helps users examine high-throughput assay data to inform chemical safety decisions. To date, it has data on over 9,000 chemicals and information from more than 1,000 high-throughput assay endpoint components.
The ToxCast Dashboard helps users examine high-throughput assay data to inform chemical safety decisions. To date, it has data on over 9,000 chemicals and information from more than 1,000 high-throughput assay endpoint components.
Yang, Wanneng; Guo, Zilong; Huang, Chenglong; Duan, Lingfeng; Chen, Guoxing; Jiang, Ni; Fang, Wei; Feng, Hui; Xie, Weibo; Lian, Xingming; Wang, Gongwei; Luo, Qingming; Zhang, Qifa; Liu, Qian; Xiong, Lizhong
2014-01-01
Even as the study of plant genomics rapidly develops through the use of high-throughput sequencing techniques, traditional plant phenotyping lags far behind. Here we develop a high-throughput rice phenotyping facility (HRPF) to monitor 13 traditional agronomic traits and 2 newly defined traits during the rice growth period. Using genome-wide association studies (GWAS) of the 15 traits, we identify 141 associated loci, 25 of which contain known genes such as the Green Revolution semi-dwarf gene, SD1. Based on a performance evaluation of the HRPF and GWAS results, we demonstrate that high-throughput phenotyping has the potential to replace traditional phenotyping techniques and can provide valuable gene identification information. The combination of the multifunctional phenotyping tools HRPF and GWAS provides deep insights into the genetic architecture of important traits. PMID:25295980
Continuous-wave deep ultraviolet sources for resonance Raman explosive sensing
NASA Astrophysics Data System (ADS)
Yellampalle, Balakishore; Martin, Robert; Sluch, Mikhail; McCormick, William; Ice, Robert; Lemoff, Brian
2015-05-01
A promising approach to stand-off detection of explosive traces is resonance Raman spectroscopy with deep-ultraviolet (DUV) light. The DUV region offers two main advantages: strong explosive signatures due to resonant and λ⁻⁴ enhancement of the Raman cross-section, and lack of fluorescence and solar background. For DUV Raman spectroscopy, continuous-wave (CW) or quasi-CW lasers are preferable to high-peak-power pulsed lasers because Raman saturation phenomena and sample damage can be avoided. In this work we present a very compact DUV source that produces greater than 1 mW of CW optical power. The source has high optical-to-optical conversion efficiency, greater than 5%, as it is based on second harmonic generation (SHG) of a blue/green laser source using a nonlinear crystal placed in an external resonant enhancement cavity. The laser system is extremely compact, lightweight, and can be battery powered. Using two such sources, one each at 236.5 nm and 257.5 nm, we are building a second-generation explosive detection system called the Dual-Excitation-Wavelength Resonance-Raman Detector (DEWRRED-II). The DEWRRED-II system also includes a compact dual-band high-throughput DUV spectrometer and a highly sensitive detection algorithm. The DEWRRED technique exploits the DUV excitation-wavelength dependence of Raman signal strength, arising from the complex interplay of resonant enhancement, self-absorption and laser penetration depth. We show sensor measurements from explosives/precursor materials at different standoff distances.
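The λ⁻⁴ advantage mentioned above is simple arithmetic; for example, comparing the 236.5 nm DUV excitation against a common 532 nm visible source (the 532 nm comparison point is our illustration, not the paper's):

```python
# The Raman cross-section scales as lambda^-4, so shorter-wavelength
# excitation gains (lambda_vis / lambda_duv)^4 over a visible source
# (resonance enhancement, discussed above, comes on top of this).
gain = (532 / 236.5) ** 4
print(round(gain, 1))  # ~25.6x from the lambda^-4 factor alone
```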
A high-quality annotated transcriptome of swine peripheral blood
USDA-ARS?s Scientific Manuscript database
Background: High throughput gene expression profiling assays of peripheral blood are widely used in biomedicine, as well as in animal genetics and physiology research. Accurate, comprehensive, and precise interpretation of such high throughput assays relies on well-characterized reference genomes an...
Impact of Roadway Stormwater Runoff on Microbial Contamination in the Receiving Stream.
Wyckoff, Kristen N; Chen, Si; Steinman, Andrew J; He, Qiang
2017-09-01
Stormwater runoff from roadways has increasingly become a regulatory concern for water pollution control. Recent work has suggested roadway stormwater runoff as a potential source of microbial pollutants. The objective of this study was to determine the impact of roadway runoff on the microbiological quality of receiving streams. Microbiological quality of roadway stormwater runoff and the receiving stream was monitored during storm events with both cultivation-dependent fecal bacteria enumeration and cultivation-independent high-throughput sequencing techniques. Enumeration of total coliforms as a measure of fecal microbial pollution found consistently lower total coliform counts in roadway runoff than in the stream water, suggesting that roadway runoff was not a major contributor of microbial pollutants to the receiving stream. Further characterization of the microbial community in the stormwater samples by 16S ribosomal RNA gene-based high-throughput amplicon sequencing revealed significant differences in the microbial composition of stormwater runoff from the roadways and the receiving stream. These differences demonstrate that roadway runoff did not appear to have a major influence on the stream's microbiological quality. Thus, results from both fecal bacteria enumeration and high-throughput amplicon sequencing were consistent in indicating that roadway stormwater runoff was not the primary contributor of microbial loading to the stream. Further studies of additional watersheds with distinct characteristics are needed to validate these findings. Understanding gained in this study could support the development of more effective strategies for stormwater management in sensitive watersheds. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Guo, Baoshan; Lei, Cheng; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Jiang, Yiyue; Tanaka, Yo; Ozeki, Yasuyuki; Goda, Keisuke
2017-05-01
The development of reliable, sustainable, and economical sources of alternative fuels to petroleum is required to tackle the global energy crisis. One such alternative is microalgal biofuel, which is expected to play a key role in reducing the detrimental effects of global warming, as microalgae absorb atmospheric CO2 via photosynthesis. Unfortunately, conventional analytical methods only provide population-averaged lipid amounts and fail to characterize a diverse population of microalgal cells with single-cell resolution in a non-invasive and interference-free manner. Here, high-throughput label-free single-cell screening of lipid-producing microalgal cells with optofluidic time-stretch quantitative phase microscopy was demonstrated. In particular, Euglena gracilis, an attractive microalgal species that produces wax esters (suitable for biodiesel and aviation fuel after refinement) within lipid droplets, was investigated. The optofluidic time-stretch quantitative phase microscope is based on an integration of a hydrodynamic-focusing microfluidic chip, an optical time-stretch quantitative phase microscope, and a digital image processor equipped with machine learning. As a result, it provides both the opacity and phase maps of every single cell at a high throughput of 10,000 cells/s, enabling accurate cell classification without the need for fluorescent staining. Specifically, the dataset was used to characterize heterogeneous populations of E. gracilis cells under two different culture conditions (nitrogen-sufficient and nitrogen-deficient) and achieve cell classification with an error rate of only 2.15%. The method holds promise as an effective analytical tool for microalgae-based biofuel production. © 2017 International Society for Advancement of Cytometry.
GeneSCF: a real-time based functional enrichment tool with support for multiple organisms.
Subhash, Santhilal; Kanduri, Chandrasekhar
2016-09-13
High-throughput technologies such as ChIP-sequencing, RNA-sequencing, DNA sequencing and quantitative metabolomics generate huge volumes of data. Researchers often rely on functional enrichment tools to interpret the biological significance of the affected genes from these high-throughput studies. However, currently available functional enrichment tools need to be updated frequently to adapt to new entries from the functional database repositories. Hence there is a need for a simplified tool that can perform functional enrichment analysis using updated information directly from source databases such as KEGG, Reactome or Gene Ontology. In this study, we focused on designing a command-line tool called GeneSCF (Gene Set Clustering based on Functional annotations) that can predict the functionally relevant biological information for a set of genes in a real-time, updated manner. It is designed to handle information for more than 4000 organisms from freely available, prominent functional databases such as KEGG, Reactome and Gene Ontology. We successfully employed our tool on two published datasets to predict the biologically relevant functional information. The core features of this tool were tested on Linux machines without the need to install additional dependencies. GeneSCF is more reliable than other enrichment tools because of its ability to use reference functional databases in real time to perform enrichment analysis. It is an easy-to-integrate tool with other pipelines available for downstream analysis of high-throughput data. More importantly, GeneSCF can run multiple gene lists simultaneously on different organisms, thereby saving time for the users. Since the tool is designed to be ready-to-use, there is no need for any complex compilation and installation procedures.
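Functional enrichment tools of this kind typically reduce to an overlap test between a gene list and a pathway's gene set; a minimal sketch of a hypergeometric enrichment p-value follows, with made-up gene counts (GeneSCF's actual statistics and implementation may differ):

```python
# Generic hypergeometric enrichment test (illustrative, not GeneSCF's code):
# given N genes total, K in a pathway, and a hit list of n genes with k
# overlapping, compute P(overlap >= k) under random sampling.
from math import comb

def hypergeom_pval(N, K, n, k):
    """Upper-tail hypergeometric p-value for pathway enrichment."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 20,000 genes, 150 in the pathway, 300-gene hit list, 12 overlapping
# (expected overlap by chance is only 300*150/20000 = 2.25)
p = hypergeom_pval(20000, 150, 300, 12)
print(p < 0.001)  # True: strong enrichment
```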
Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishizawa, T., E-mail: nishizawa@wisc.edu; Nornberg, M. D.; Den Hartog, D. J.
2016-11-15
The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier’s cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.
Continuous cryopump with a device for regenerating the cryosurface
Foster, C.A.
1988-02-16
A high throughput continuous cryopump is provided. The cryopump incorporates an improved method for regenerating the cryopumping surface while the pump is in continuous operation. The regeneration of the cryopumping surface does not thermally cycle the pump, and to this end a small chamber connected to a secondary pumping source serves to contain and exhaust frost removed from the cryopumping surface during such regeneration. The frost is exhausted at a rate substantially independent of the speed of the cryopump which enhances the capability of the pump to achieve a high compression ratio and allow the pump to operate continuously while the cryopumping surface is being regenerated. 8 figs.
Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements
NASA Astrophysics Data System (ADS)
Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Craig, D.
2016-11-01
The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier's cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.
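Treating each amplification stage as a Poisson process, as in the modeling described above, implies the familiar shot-noise scaling of counting uncertainty; a one-line illustration (the count value is arbitrary, not from the diagnostic):

```python
# For a Poisson-distributed count N, the standard deviation is sqrt(N),
# so the fractional (shot-noise-limited) uncertainty is sqrt(N)/N = 1/sqrt(N).
from math import sqrt

def relative_uncertainty(counts):
    """Shot-noise-limited fractional uncertainty of a Poisson count."""
    return sqrt(counts) / counts

print(round(relative_uncertainty(10000), 3))  # 0.01: 1% at 10^4 counts
```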
Neutron reflecting supermirror structure
Wood, James L.
1992-01-01
An improved neutron reflecting supermirror structure comprising a plurality of stacked sets of bilayers of neutron reflecting materials. The improved neutron reflecting supermirror structure is adapted to provide extremely good performance at high incidence angles, i.e. up to four times the critical angle of standard neutron mirror structures. The reflection of neutrons striking the supermirror structure at a high critical angle provides enhanced neutron throughput, and hence more efficient and economical use of neutron sources. One layer of each set of bilayers consists of titanium, and the second layer of each set of bilayers consists of an alloy of nickel with carbon interstitially present in the nickel alloy.
de Muinck, Eric J; Trosvik, Pål; Gilfillan, Gregor D; Hov, Johannes R; Sundaram, Arvind Y M
2017-07-06
Advances in sequencing technologies and bioinformatics have made the analysis of microbial communities almost routine. Nonetheless, the need remains to improve on the techniques used for gathering such data, including increasing throughput while lowering cost and benchmarking the techniques so that potential sources of bias can be better characterized. We present a triple-index amplicon sequencing strategy to sequence large numbers of samples at significantly lower cost and in a shorter timeframe compared to existing methods. The design employs a two-stage PCR protocol, incorporating three barcodes to each sample, with the possibility to add a fourth index. It also includes heterogeneity spacers to overcome low-complexity issues faced when sequencing amplicons on Illumina platforms. The library preparation method was extensively benchmarked through analysis of a mock community in order to assess biases introduced by sample indexing, number of PCR cycles, and template concentration. We further evaluated the method through re-sequencing of a standardized environmental sample. Finally, we evaluated our protocol on a set of fecal samples from a small cohort of healthy adults, demonstrating good performance in a realistic experimental setting. Between-sample variation was mainly related to batch effects, such as DNA extraction, while sample indexing was also a significant source of bias. PCR cycle number strongly influenced chimera formation and affected relative abundance estimates of species with high GC content. Libraries were sequenced using the Illumina HiSeq and MiSeq platforms to demonstrate that this protocol is highly scalable to sequence thousands of samples at a very low cost. Here, we provide the most comprehensive study of performance and bias inherent to a 16S rRNA gene amplicon sequencing method to date.
Triple-indexing greatly reduces the number of long custom DNA oligos required for library preparation, while the inclusion of variable length heterogeneity spacers minimizes the need for PhiX spike-in. This design results in a significant cost reduction of highly multiplexed amplicon sequencing. The biases we characterize highlight the need for highly standardized protocols. Reassuringly, we find that the biological signal is a far stronger structuring factor than the various sources of bias.
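The core of a multi-index design can be sketched in a few lines: each sample is identified by a combination of three short barcodes, and reads carry a variable-length spacer before the primer. The barcode sequences, the primer fragment, and the read layout below are invented for illustration; they are not the published protocol's sequences.

```python
# Hypothetical triple-index demultiplexer. Barcodes and primer are
# made-up stand-ins, not the protocol's actual oligo sequences.
SAMPLE_TABLE = {
    ("ACGT", "TTAG", "GGCA"): "sample_01",
    ("ACGT", "TTAG", "CATG"): "sample_02",
    ("TGCA", "GATC", "GGCA"): "sample_03",
}
PRIMER = "GTGCCAGC"  # stand-in for the 16S amplification primer

def demultiplex(index1, index2, index3, read):
    """Map three barcode reads to a sample and trim the read.

    Returns (sample, insert) or (None, None) when the barcode
    combination is unknown or the primer cannot be located.
    """
    sample = SAMPLE_TABLE.get((index1, index2, index3))
    if sample is None:
        return None, None
    # The heterogeneity spacer has variable length, so search for the
    # primer instead of trimming a fixed number of leading bases.
    pos = read.find(PRIMER)
    if pos < 0:
        return None, None
    return sample, read[pos + len(PRIMER):]

print(demultiplex("ACGT", "TTAG", "CATG", "TTGTGCCAGCAAACCTT"))
```

This combinatorial layout is where the oligo savings come from: n distinct barcodes per index position address up to n^3 samples, so the number of custom oligos grows with the cube root of the sample count rather than linearly.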
Polonchuk, Liudmila
2014-01-01
Patch-clamping is a powerful technique for investigating ion channel function and regulation. However, its low throughput has hampered profiling of large compound series in early drug development. Fortunately, automation has revolutionized the area of experimental electrophysiology over the past decade. Whereas the first automated patch-clamp instruments using the planar patch-clamp technology demonstrated rather moderate throughput, a few second-generation automated platforms recently launched by various companies have significantly increased the ability to form a high number of high-resistance seals. Among them is SyncroPatch(®) 96 (Nanion Technologies GmbH, Munich, Germany), a fully automated giga-seal patch-clamp system with the highest throughput on the market. By recording from up to 96 cells simultaneously, the SyncroPatch(®) 96 makes it possible to substantially increase throughput without compromising data quality. This chapter describes features of the innovative automated electrophysiology system and the protocols used for a successful transfer of the established hERG assay to this high-throughput automated platform.
HIGH THROUGHPUT ASSESSMENTS OF CONVENTIONAL AND ALTERNATIVE COMPOUNDS
High throughput approaches for quantifying chemical hazard, exposure, and sustainability have the potential to dramatically impact the pace and nature of risk assessments. Integrated evaluation strategies developed at the US EPA incorporate inherency, bioactivity, bioavailability, ...
GiNA, an efficient and high-throughput software for horticultural phenotyping
USDA-ARS?s Scientific Manuscript database
Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed,...
High-throughput quantification of hydroxyproline for determination of collagen.
Hofman, Kathleen; Hall, Bronwyn; Cleaver, Helen; Marshall, Susan
2011-10-15
An accurate and high-throughput assay for collagen is essential for collagen research and development of collagen products. Hydroxyproline is routinely assayed to provide a measurement for collagen quantification. The time required for sample preparation using acid hydrolysis and neutralization prior to assay is what limits the current method for determining hydroxyproline. This work describes the conditions of alkali hydrolysis that, when combined with the colorimetric assay defined by Woessner, provide a high-throughput, accurate method for the measurement of hydroxyproline. Copyright © 2011 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Andrew J.; Capo, Rosemary C.; Stewart, Brian W.
2016-09-22
This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method as well as the best practices for optimizing Sr isotope analysis by MC-ICP-MS is presented. Lastly, this report offers tools for data handling and data reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, which help avoid issues of data glut associated with high sample throughput rapid analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakala, Jacqueline Alexandra
2016-11-22
This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method as well as the best practices for optimizing Sr isotope analysis by MC-ICP-MS is presented. Lastly, this report offers tools for data handling and data reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, which help avoid issues of data glut associated with high sample throughput rapid analysis.
A Memory Efficient Network Encryption Scheme
NASA Astrophysics Data System (ADS)
El-Fotouh, Mohamed Abo; Diepold, Klaus
In this paper, we studied the two encryption schemes most widely used in network applications. Shortcomings were found in both: each either consumes more memory to gain high throughput or uses low memory at the cost of low throughput. As the number of internet users increases each day, the need has arisen for a scheme that has low memory requirements and at the same time possesses high speed. We used the SSM model [1] to construct an encryption scheme based on the AES. The proposed scheme possesses high throughput together with low memory requirements.
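The memory-versus-throughput trade-off described above shows up even inside AES's own finite-field arithmetic: the GF(2^8) multiplication used in MixColumns can be computed on the fly with almost no memory, or served from a precomputed table (64 KiB as a byte table in C; larger in Python because of object overhead) in exchange for speed. This is a generic illustration of the trade-off, not a reproduction of the paper's SSM-based scheme.

```python
def xtime(a):
    """Multiply by x in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    a <<= 1
    if a & 0x100:
        a ^= 0x11B
    return a & 0xFF

def gf_mul(a, b):
    """On-the-fly GF(2^8) multiplication: near-zero memory, more work per call."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result

# Table-driven variant: precompute all 256*256 products once; each
# lookup afterwards costs a single index operation.
MUL_TABLE = [[gf_mul(a, b) for b in range(256)] for a in range(256)]

def gf_mul_table(a, b):
    return MUL_TABLE[a][b]
```

The worked example from FIPS 197, {57}·{83} = {c1}, is a convenient sanity check; full AES implementations push the same trade-off further with combined S-box/MixColumns "T-tables".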
HTP-NLP: A New NLP System for High Throughput Phenotyping.
Schlegel, Daniel R; Crowner, Chris; Lehoullier, Frank; Elkin, Peter L
2017-01-01
Secondary use of clinical data for research requires a method to process the data quickly so that researchers can rapidly extract cohorts. We present two advances in the High Throughput Phenotyping NLP system which support the aim of truly high throughput processing of clinical data, inspired by a characterization of the linguistic properties of such data. Semantic indexing to store and generalize partially-processed results and the use of compositional expressions for ungrammatical text are discussed, along with a set of initial timing results for the system.
Laser diode combining for free space optical communication
NASA Technical Reports Server (NTRS)
Mecherle, G. Stephen
1986-01-01
The maximization of photon delivery to a distant collector in free space optical communications systems calls for a laser diode-combining technique employing wavelength and/or polarization as the bases of its operation. Design considerations for such a combiner encompass high throughput efficiency, diffraction-limited angular divergence, and reasonable volume constraints. Combiners are presently found to require a generalized Strehl ratio concept which includes relative source misalignment; diffraction grating combiners may have a limited number of laser sources which can meet spectral requirements. Methods for the incorporation of a combiner into a communication system are compared. Power combining is concluded to be the best tradeoff of performance and complexity for all systems, except those that are severely limited by either background radiation or component bandwidth.
The condition-dependent transcriptional network in Escherichia coli.
Lemmens, Karen; De Bie, Tijl; Dhollander, Thomas; Monsieurs, Pieter; De Moor, Bart; Collado-Vides, Julio; Engelen, Kristof; Marchal, Kathleen
2009-03-01
Thanks to the availability of high-throughput omics data, bioinformatics approaches are able to hypothesize thus-far undocumented genetic interactions. However, due to the amount of noise in these data, inferences based on a single data source are often unreliable. A popular approach to overcome this problem is to integrate different data sources. In this study, we describe DISTILLER, a novel framework for data integration that simultaneously analyzes microarray and motif information to find modules that consist of genes that are co-expressed in a subset of conditions, and their corresponding regulators. By applying our method on publicly available data, we evaluated the condition-specific transcriptional network of Escherichia coli. DISTILLER confirmed 62% of 736 interactions described in RegulonDB, and 278 novel interactions were predicted.
NASA Astrophysics Data System (ADS)
Kudoh, Eisuke; Ito, Haruki; Wang, Zhisen; Adachi, Fumiyuki
In mobile communication systems, high-speed packet data services are in demand. In high-speed data transmission, throughput degrades severely due to inter-path interference (IPI). Recently, we proposed random transmit power control (TPC) to increase the uplink throughput of DS-CDMA packet mobile communications. In this paper, we apply IPI cancellation in addition to the random TPC. We derive the numerical expression for the received signal-to-interference plus noise power ratio (SINR) and introduce an IPI cancellation factor. We also derive the numerical expression for system throughput when IPI is cancelled ideally, for comparison with the Monte Carlo numerically evaluated system throughput. We then evaluate, by the Monte Carlo numerical computation method, the combined effect of random TPC and IPI cancellation on the uplink throughput of DS-CDMA packet mobile communications.
The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science
Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo
2008-01-01
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570
OpenMS: a flexible open-source software platform for mass spectrometry data analysis.
Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver
2016-08-30
High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.
SeaQuaKE: Sea-optimized Quantum Key Exchange
2015-01-01
The source generates photon pairs entangled in both polarization [3] and time-bin [4] degrees of freedom simultaneously, giving greater throughput per entangled photon pair compared to alternative sources that encode in only one degree of freedom. Design considerations for the hyperentangled photon-pair source include wavelength availability, power, pulse rate, and time-bin multiplexing.
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
Palazzotto, Emilia; Weber, Tilmann
2018-04-12
Natural products produced by microorganisms represent the main source of bioactive molecules. The development of high-throughput (omics) techniques has contributed importantly to the renaissance of antibiotic discovery, increasing our understanding of the complex mechanisms controlling the expression of biosynthetic gene clusters (BGCs) encoding secondary metabolites. In this context, this review highlights recent progress in the use and integration of omics approaches, with a focus on genomics, transcriptomics, proteomics, metabolomics, meta-omics, and combined omics as powerful strategies to discover new antibiotics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Dynamic VM Provisioning for TORQUE in a Cloud Environment
NASA Astrophysics Data System (ADS)
Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.
2014-06-01
Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
2014-06-01
The high-throughput method has utility for evaluating a diversity of natural materials with unknown complex odor blends that can then be down-selected for further evaluation.
Solar fuels photoanode materials discovery by integrating high-throughput theory and experiment
Yan, Qimin; Yu, Jie; Suram, Santosh K.; ...
2017-03-06
The limited number of known low-band-gap photoelectrocatalytic materials poses a significant challenge for the generation of chemical fuels from sunlight. Here, using high-throughput ab initio theory with experiments in an integrated workflow, we find eight ternary vanadate oxide photoanodes in the target band-gap range (1.2-2.8 eV). Detailed analysis of these vanadate compounds reveals the key role of VO4 structural motifs and electronic band-edge character in efficient photoanodes, initiating a genome for such materials and paving the way for a broadly applicable high-throughput-discovery and materials-by-design feedback loop. Considerably expanding the number of known photoelectrocatalysts for water oxidation, our study establishes ternary metal vanadates as a prolific class of photoanode materials for generation of chemical fuels from sunlight and demonstrates our high-throughput theory-experiment pipeline as a prolific approach to materials discovery.
Microfluidics for cell-based high throughput screening platforms - A review.
Du, Guansheng; Fang, Qun; den Toonder, Jaap M J
2016-01-15
In the last decades, the basic techniques of microfluidics for the study of cells such as cell culture, cell separation, and cell lysis, have been well developed. Based on cell handling techniques, microfluidics has been widely applied in the field of PCR (Polymerase Chain Reaction), immunoassays, organ-on-chip, stem cell research, and analysis and identification of circulating tumor cells. As a major step in drug discovery, high-throughput screening allows rapid analysis of thousands of chemical, biochemical, genetic or pharmacological tests in parallel. In this review, we summarize the application of microfluidics in cell-based high throughput screening. The screening methods mentioned in this paper include approaches using the perfusion flow mode, the droplet mode, and the microarray mode. We also discuss the future development of microfluidic based high throughput screening platform for drug discovery. Copyright © 2015 Elsevier B.V. All rights reserved.
Development and Validation of an Automated High-Throughput System for Zebrafish In Vivo Screenings
Virto, Juan M.; Holgado, Olaia; Diez, Maria; Izpisua Belmonte, Juan Carlos; Callol-Massot, Carles
2012-01-01
The zebrafish is a vertebrate model compatible with the paradigms of drug discovery. The small size and transparency of zebrafish embryos make them amenable for the automation necessary in high-throughput screenings. We have developed an automated high-throughput platform for in vivo chemical screenings on zebrafish embryos that includes automated methods for embryo dispensation, compound delivery, incubation, imaging and analysis of the results. At present, two different assays to detect cardiotoxic compounds and angiogenesis inhibitors can be automatically run in the platform, showing the versatility of the system. A validation of these two assays with known positive and negative compounds, as well as a screening for the detection of unknown anti-angiogenic compounds, have been successfully carried out in the system developed. We present a totally automated platform that allows for high-throughput screenings in a vertebrate organism. PMID:22615792
BiQ Analyzer HT: locus-specific analysis of DNA methylation by high-throughput bisulfite sequencing
Lutsik, Pavlo; Feuerbach, Lars; Arand, Julia; Lengauer, Thomas; Walter, Jörn; Bock, Christoph
2011-01-01
Bisulfite sequencing is a widely used method for measuring DNA methylation in eukaryotic genomes. The assay provides single-base pair resolution and, given sufficient sequencing depth, its quantitative accuracy is excellent. High-throughput sequencing of bisulfite-converted DNA can be applied either genome wide or targeted to a defined set of genomic loci (e.g. using locus-specific PCR primers or DNA capture probes). Here, we describe BiQ Analyzer HT (http://biq-analyzer-ht.bioinf.mpi-inf.mpg.de/), a user-friendly software tool that supports locus-specific analysis and visualization of high-throughput bisulfite sequencing data. The software facilitates the shift from time-consuming clonal bisulfite sequencing to the more quantitative and cost-efficient use of high-throughput sequencing for studying locus-specific DNA methylation patterns. In addition, it is useful for locus-specific visualization of genome-wide bisulfite sequencing data. PMID:21565797
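The core readout of locus-specific bisulfite sequencing can be sketched in a few lines: bisulfite converts unmethylated cytosines to uracil (sequenced as T), while methylated cytosines remain C, so the fraction of reads showing C at a reference CpG estimates the methylation level there. The reference and reads below are invented toy data; real tools like BiQ Analyzer HT additionally handle alignment, quality filtering, and conversion-rate checks.

```python
def methylation_rates(reference, reads):
    """Per-site methylation levels at forward-strand CpGs.

    Bisulfite converts unmethylated C to U (sequenced as T), while
    methylated C stays C, so at each reference CpG the fraction of
    reads showing 'C' estimates the methylation level. Reads are
    assumed pre-aligned to the reference with no indels.
    """
    cpg_sites = [i for i in range(len(reference) - 1)
                 if reference[i:i + 2] == "CG"]
    rates = {}
    for pos in cpg_sites:
        calls = [r[pos] for r in reads if len(r) > pos and r[pos] in "CT"]
        if calls:
            rates[pos] = calls.count("C") / len(calls)
    return rates

reference = "ACGTACGT"         # CpG sites at positions 1 and 5
reads = ["ACGTACGT",           # both sites methylated (C retained)
         "ATGTATGT",           # both sites converted (unmethylated)
         "ACGTATGT"]           # only the first site methylated
print(methylation_rates(reference, reads))
```

Bases other than C or T at a CpG position (sequencing errors, SNVs) are simply excluded from the denominator in this sketch; production tools apply more careful filtering.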
Yeow, Jonathan; Joshi, Sanket; Chapman, Robert; Boyer, Cyrille Andre Jean Marie
2018-04-25
Translating controlled/living radical polymerization (CLRP) from batch to the high throughput production of polymer libraries presents several challenges in terms of both polymer synthesis and characterization. Although recently there have been significant advances in the field of low volume, high throughput CLRP, techniques able to simultaneously monitor multiple polymerizations in an "online" manner have not yet been developed. Here, we report our discovery that 5,10,15,20-tetraphenyl-21H,23H-porphine zinc (ZnTPP) is a self-reporting photocatalyst that can mediate PET-RAFT polymerization as well as report on monomer conversion via changes in its fluorescence properties. This enables the use of a microplate reader to conduct high throughput "online" monitoring of PET-RAFT polymerizations performed directly in 384-well, low volume microtiter plates. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Green, Martin L.; Choi, C. L.; Hattrick-Simpers, J. R.; ...
2017-03-28
The Materials Genome Initiative, a national effort to introduce new materials into the market faster and at lower cost, has made significant progress in computational simulation and modeling of materials. To build on this progress, a large amount of experimental data for validating these models, and informing more sophisticated ones, will be required. High-throughput experimentation generates large volumes of experimental data using combinatorial materials synthesis and rapid measurement techniques, making it an ideal experimental complement to bring the Materials Genome Initiative vision to fruition. This paper reviews the state-of-the-art results, opportunities, and challenges in high-throughput experimentation for materials design. As a result, a major conclusion is that an effort to deploy a federated network of high-throughput experimental (synthesis and characterization) tools, which are integrated with a modern materials data infrastructure, is needed.
High throughput vacuum chemical epitaxy
NASA Astrophysics Data System (ADS)
Fraas, L. M.; Malocsay, E.; Sundaram, V.; Baird, R. W.; Mao, B. Y.; Lee, G. Y.
1990-10-01
We have developed a vacuum chemical epitaxy (VCE) reactor which avoids the use of arsine and allows multiple wafers to be coated at one time. Our vacuum chemical epitaxy reactor closely resembles a molecular beam epitaxy system in that wafers are loaded into a stainless steel vacuum chamber through a load chamber. Also as in MBE, arsenic vapors are supplied as reactant by heating solid arsenic sources, thereby avoiding the use of arsine. However, in our VCE reactor, a large number of wafers are coated at one time in a vacuum system by the substitution of Group III alkyl sources for the elemental metal sources traditionally used in MBE. Higher wafer throughput results because in VCE, the metal-alkyl sources for Ga, Al, and dopants can be mixed at room temperature and distributed uniformly through a large-area injector to multiple substrates as a homogeneous array of mixed-element molecular beams. The VCE reactor that we have built and that we shall describe here uniformly deposits films on 7-inch-diameter substrate platters. Each platter contains seven 2-inch or three 3-inch diameter wafers. The load chamber holds up to nine platters. The vacuum chamber is equipped with two VCE growth zones and two arsenic ovens, one per growth zone. Finally, each oven has a 1 kg arsenic capacity. As of this writing, mirror-smooth GaAs films have been grown at up to a 4 μm/h growth rate on multiple wafers with good thickness uniformity. The background doping is p-type with a typical hole concentration and mobility of 1 × 10^16/cm^3 and 350 cm^2/V·s. This background doping level is low enough for the fabrication of MESFETs, solar cells, and photocathodes as well as other types of devices. We have fabricated MESFET devices using VCE-grown epi wafers with peak extrinsic transconductance as high as 210 mS/mm for a threshold voltage of -3 V and a 0.6 μm gate length. We have also recently grown AlGaAs epi layers with up to 80% aluminum using TEAl as the aluminum alkyl source.
The AlGaAs layer thickness and aluminum content uniformity appear excellent.
High Throughput Genotoxicity Profiling of the US EPA ToxCast Chemical Library
A key aim of the ToxCast project is to investigate modern molecular and genetic high content and high throughput screening (HTS) assays, along with various computational tools to supplement and perhaps replace traditional assays for evaluating chemical toxicity. Genotoxicity is a...
Stepping into the omics era: Opportunities and challenges for biomaterials science and engineering
Rabitz, Herschel; Welsh, William J.; Kohn, Joachim; de Boer, Jan
2016-01-01
The research paradigm in biomaterials science and engineering is evolving from using low-throughput and iterative experimental designs towards high-throughput experimental designs for materials optimization and the evaluation of materials properties. Computational science plays an important role in this transition. With the emergence of the omics approach in the biomaterials field, referred to as materiomics, high-throughput approaches hold the promise of tackling the complexity of materials and understanding correlations between material properties and their effects on complex biological systems. The intrinsic complexity of biological systems is an important factor that is often oversimplified when characterizing biological responses to materials and establishing property-activity relationships. Indeed, in vitro tests designed to predict in vivo performance of a given biomaterial are largely lacking as we are not able to capture the biological complexity of whole tissues in an in vitro model. In this opinion paper, we explain how we reached our opinion that converging genomics and materiomics into a new field would enable a significant acceleration of the development of new and improved medical devices. The use of computational modeling to correlate high-throughput gene expression profiling with high throughput combinatorial material design strategies would add power to the analysis of biological effects induced by material properties. We believe that this extra layer of complexity on top of high-throughput material experimentation is necessary to tackle the biological complexity and further advance the biomaterials field. PMID:26876875
Besaratinia, Ahmad; Li, Haiqing; Yoon, Jae-In; Zheng, Albert; Gao, Hanlin; Tommasi, Stella
2012-01-01
Many carcinogens leave a unique mutational fingerprint in the human genome. These mutational fingerprints manifest as specific types of mutations often clustering at certain genomic loci in tumor genomes from carcinogen-exposed individuals. To develop a high-throughput method for detecting the mutational fingerprint of carcinogens, we have devised a cost-, time- and labor-effective strategy, in which the widely used transgenic Big Blue® mouse mutation detection assay is made compatible with the Roche/454 Genome Sequencer FLX Titanium next-generation sequencing technology. As proof of principle, we have used this novel method to establish the mutational fingerprints of three prominent carcinogens with varying mutagenic potencies, including sunlight ultraviolet radiation, 4-aminobiphenyl and secondhand smoke that are known to be strong, moderate and weak mutagens, respectively. For verification purposes, we have compared the mutational fingerprints of these carcinogens obtained by our newly developed method with those obtained by parallel analyses using the conventional low-throughput approach, that is, standard mutation detection assay followed by direct DNA sequencing using a capillary DNA sequencer. We demonstrate that this high-throughput next-generation sequencing-based method is highly specific and sensitive to detect the mutational fingerprints of the tested carcinogens. The method is reproducible, and its accuracy is comparable with that of the currently available low-throughput method. In conclusion, this novel method has the potential to move the field of carcinogenesis forward by allowing high-throughput analysis of mutations induced by endogenous and/or exogenous genotoxic agents. PMID:22735701
Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C
2016-01-01
Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative for biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort of maintaining high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. The facility has increased data throughput, which has been leveraged into grant support, has helped attract new faculty hires, and has led to several publications. © 2016 Elsevier Inc. All rights reserved.
Aggregating Data for Computational Toxicology Applications ...
Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built usi
High Throughput Heuristics for Prioritizing Human Exposure to ...
The risk posed to human health by any of the thousands of untested anthropogenic chemicals in our environment is a function of both the potential hazard presented by the chemical and the possibility of being exposed. Without the capacity to make quantitative, albeit uncertain, forecasts of exposure, the putative risk of adverse health effects from a chemical cannot be evaluated. We used Bayesian methodology to infer ranges of exposure intakes that are consistent with biomarkers of chemical exposure identified in urine samples from the U.S. population by the National Health and Nutrition Examination Survey (NHANES). We perform linear regression on inferred exposure for demographic subsets of NHANES demarcated by age, gender, and weight, using high-throughput chemical descriptors gleaned from databases and chemical structure-based calculators. We find that five of these descriptors are capable of explaining roughly 50% of the variability across chemicals for all the demographic groups examined, including children aged 6-11. For the thousands of chemicals with no other source of information, this approach allows rapid and efficient prediction of average exposure intake of environmental chemicals. The methods described in this manuscript provide a greatly improved methodology for HTS of human exposure to environmental chemicals. The manuscript includes a ranking of 7785 environmental chemicals with respect to potential human exposure, including most of the Tox21 in vit
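The regression step described in the abstract above can be sketched with a minimal ordinary-least-squares fit. This is an illustration only: the descriptor values and exposure numbers below are invented, and the actual study regressed on five descriptors with Bayesian inference rather than this closed-form single-predictor fit.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: y ~ a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Fraction of variance in ys explained by the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical data: a chemical descriptor (e.g., log10 production
# volume) versus log10 inferred exposure intake (mg/kg BW/day).
descriptor = [1.0, 2.0, 3.0, 4.0, 5.0]
log_exposure = [-7.1, -6.4, -5.8, -5.1, -4.6]
a, b = fit_line(descriptor, log_exposure)
r2 = r_squared(descriptor, log_exposure, a, b)
```

An R² around 0.5 on real data, as reported in the abstract, would mean the descriptors explain roughly half the cross-chemical variability in inferred intake.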
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine HTS of the Ig heavy-chain gene on the Illumina MiSeq platform for MRD detection. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10⁶ copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
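The demultiplexing idea in the abstract above can be illustrated with a minimal nearest-barcode assignment that rejects ambiguous reads. This is a generic sketch, not the Error-Aware Demultiplexer algorithm itself; the sample names and barcode sequences are invented.

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def demultiplex(read_index, barcodes, max_mismatch=1):
    """Assign a read's index sequence to the unique closest barcode.

    Returns the sample name, or None when no barcode is within
    max_mismatch errors, or when two barcodes tie (ambiguous reads
    are discarded rather than risk cross-sample misassignment).
    """
    dists = sorted((hamming(read_index, bc), name)
                   for name, bc in barcodes.items())
    best_d, best_name = dists[0]
    if best_d > max_mismatch:
        return None                      # too many sequencing errors
    if len(dists) > 1 and dists[1][0] == best_d:
        return None                      # tie between two samples
    return best_name

# Invented barcodes; real index sets maximize pairwise Hamming distance
# so that single errors cannot convert one index into another.
barcodes = {"patient_A": "ACGTAC", "patient_B": "TGCAGT"}
```

Dual indexing, as used in the study, extends this idea by requiring both of a read's two index sequences to agree on the same sample.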
NASA Astrophysics Data System (ADS)
Ponce de Leon, Philip J.; Hill, Frances A.; Heubel, Eric V.; Velásquez-García, Luis F.
2015-06-01
We report the design, fabrication, and characterization of planar arrays of externally-fed silicon electrospinning emitters for high-throughput generation of polymer nanofibers. Arrays with as many as 225 emitters and with emitter density as large as 100 emitters cm⁻² were characterized using a solution of dissolved PEO in water and ethanol. Devices with emitter density as high as 25 emitters cm⁻² deposit uniform imprints comprising fibers with diameters on the order of a few hundred nanometers. Mass flux rates as high as 417 g hr⁻¹ m⁻² were measured, i.e., four times the reported production rate of the leading commercial free-surface electrospinning sources. Throughput increases with increasing array size at constant emitter density, suggesting the design can be scaled up with no loss of productivity. Devices with emitter density equal to 100 emitters cm⁻² fail to generate fibers but uniformly generate electrosprayed droplets. For the arrays tested, the largest measured mass flux resulted from arrays with larger emitter separation operating at larger bias voltages, indicating the strong influence of electrical field enhancement on the performance of the devices. Incorporation of a ground electrode surrounding the array tips helps equalize the emitter field enhancement across the array as well as control the spread of the imprints over larger distances.
Fridén, Markus; Ducrozet, Frederic; Middleton, Brian; Antonsson, Madeleine; Bredberg, Ulf; Hammarlund-Udenaes, Margareta
2009-06-01
New, more efficient methods of estimating unbound drug concentrations in the central nervous system (CNS) combine the amount of drug in whole brain tissue samples measured by conventional methods with in vitro estimates of the unbound brain volume of distribution (V(u,brain)). Although the brain slice method is the most reliable in vitro method for measuring V(u,brain), it has not previously been adapted for the needs of drug discovery research. The aim of this study was to increase the throughput and optimize the experimental conditions of this method. Equilibrium of drug between the buffer and the brain slice within the 4 to 5 h of incubation is a fundamental requirement. However, it is difficult to meet this requirement for many of the extensively binding, lipophilic compounds in drug discovery programs. In this study, the dimensions of the incubation vessel and mode of stirring influenced the equilibration time, as did the amount of brain tissue per unit of buffer volume. The use of cassette experiments for investigating V(u,brain) in a linear drug concentration range increased the throughput of the method. The V(u,brain) for the model compounds ranged from 4 to 3000 ml·g brain⁻¹, and the sources of variability are discussed. The optimized setup of the brain slice method allows precise, robust estimation of V(u,brain) for drugs with diverse properties, including highly lipophilic compounds. This is a critical step forward for the implementation of relevant measurements of CNS exposure in the drug discovery setting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acquaah-Mensah, George K.; Taylor, Ronald C.
Microarray data have been a valuable resource for identifying transcriptional regulatory relationships among genes. As an example, brain region-specific transcriptional regulatory events have the potential of providing etiological insights into Alzheimer Disease (AD). However, there is often a paucity of suitable brain region-specific expression data obtained via microarrays or other high-throughput means. The Allen Brain Atlas in situ hybridization (ISH) data sets (Jones et al., 2009) represent a potentially valuable alternative source of high-throughput brain region-specific gene expression data for such purposes. In this study, Allen Brain Atlas mouse ISH data in the hippocampal fields were extracted, focusing on 508 genes relevant to neurodegeneration. Transcriptional regulatory networks were learned using three high-performing network inference algorithms. Only 17% of regulatory edges from a network reverse-engineered based on brain region-specific ISH data were also found in a network constructed upon gene expression correlations in mouse whole brain microarrays, thus showing the specificity of gene expression within brain sub-regions. Furthermore, the ISH data-based networks were used to identify instructive transcriptional regulatory relationships. Ncor2, Sp3 and Usf2 form a unique three-party regulatory motif, potentially affecting memory formation pathways. Nfe2l1, Egr1 and Usf2 emerge among regulators of genes involved in AD (e.g. Dhcr24, Aplp2, Tia1, Pdrx1, Vdac1, and Syn2). Further, Nfe2l1, Egr1 and Usf2 are sensitive to dietary factors and could be among links between dietary influences and genes in the AD etiology. Thus, this approach of harnessing brain region-specific ISH data represents a rare opportunity for gleaning unique etiological insights for diseases such as AD.
Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi
2015-09-01
Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. The authors applied the method to develop algorithms identifying rheumatoid arthritis (RA) patients, and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The area under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared to AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
NASA Astrophysics Data System (ADS)
Havener, Charles
It is rapidly being realized that many X-ray astronomical investigations are being affected in one way or another by charge exchange emission. Metal abundance measurements in supernova remnants and in outflows from star-forming galaxies need to be corrected for this additional process, and all X-ray observations of low surface brightness objects, such as the outskirts of clusters, galactic halos, the intergalactic medium, and plasma emission from hot interstellar gas are seriously compromised by a highly variable and largely unpredictable foreground from the exchange of solar wind ions on interstellar neutrals within the Solar system. At the same time, charge exchange provides a new sensitivity to mixing at interfaces between hot and cold gas, including direct measurements of relative velocities. The new generation of facilities with microcalorimeter detectors, starting with Astro-H in 2015, will provide the energy resolution and throughput for extended sources required to take advantage of this process. But analysis requires accurate partial cross sections for the production of individual lines, and even the most sophisticated of current charge exchange models do not do this with adequate precision. We propose an inexpensive modification of the Wisconsin high-throughput XQC microcalorimeter instrument so that it can be used on the merged beam facility at Oak Ridge to make direct measurement of lines of interest from collisions between an assortment of heavy ions with neutral atomic hydrogen. In this beam-beam system, the entire range of astrophysically interesting relative velocities can be investigated. We will work closely with modelers to use these results to tune their models to give accurate results for additional ions.
I describe research on high throughput exposure and toxicokinetics. These tools provide context for data generated by high throughput toxicity screening to allow risk-based prioritization of thousands of chemicals.
MIPHENO: Data normalization for high throughput metabolic analysis.
High throughput methodologies such as microarrays, mass spectrometry and plate-based small molecule screens are increasingly used to facilitate discoveries from gene function to drug candidate identification. These large-scale experiments are typically carried out over the course...
High-Throughput Pharmacokinetics for Environmental Chemicals (SOT)
High throughput screening (HTS) promises to allow prioritization of thousands of environmental chemicals with little or no in vivo information. For bioactivity identified by HTS, toxicokinetic (TK) models are essential to predict exposure thresholds below which no significant bio...
Gore, Brooklin
2018-02-01
This presentation includes a brief background on High Throughput Computing, correlating gene transcription factors, optical mapping, genotype to phenotype mapping via QTL analysis, and current work on next gen sequencing.
IoT for Real-Time Measurement of High-Throughput Liquid Dispensing in Laboratory Environments.
Shumate, Justin; Baillargeon, Pierre; Spicer, Timothy P; Scampavia, Louis
2018-04-01
Critical to maintaining quality control in high-throughput screening is the need for constant monitoring of liquid-dispensing fidelity. Traditional methods involve operator intervention with gravimetric analysis to monitor the gross accuracy of full plate dispenses, visual verification of contents, or dedicated weigh stations on screening platforms that introduce potential bottlenecks and increase the plate-processing cycle time. We present a unique solution using open-source hardware, software, and 3D printing to automate dispenser accuracy determination by providing real-time dispense weight measurements via a network-connected precision balance. This system uses an Arduino microcontroller to connect a precision balance to a local network. By integrating the precision balance as an Internet of Things (IoT) device, it gains the ability to provide real-time gravimetric summaries of dispensing, generate timely alerts when problems are detected, and capture historical dispensing data for future analysis. All collected data can then be accessed via a web interface for reviewing alerts and dispensing information in real time or remotely for timely intervention of dispense errors. The development of this system also leveraged 3D printing to rapidly prototype sensor brackets, mounting solutions, and component enclosures.
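The gravimetric quality-control logic described above reduces to a tolerance check of measured plate weight against the expected dispense weight. A minimal sketch follows; the plate geometry, density assumption, and 5% threshold are illustrative, not values from the published system.

```python
def check_dispense(measured_mg, wells, volume_ul_per_well,
                   density_mg_per_ul=1.0, tolerance=0.05):
    """Compare a plate's measured dispense weight to its expected weight.

    Returns (ok, relative_error). `ok` is False when the relative
    deviation exceeds `tolerance`, which in a networked setup would
    trigger a real-time alert to the operator.
    """
    expected_mg = wells * volume_ul_per_well * density_mg_per_ul
    rel_err = (measured_mg - expected_mg) / expected_mg
    return abs(rel_err) <= tolerance, rel_err

# Hypothetical case: 384-well plate, 25 uL per well of an aqueous
# reagent (~1 mg/uL), so the expected dispense is 9600 mg.
ok, err = check_dispense(measured_mg=9450.0, wells=384,
                         volume_ul_per_well=25)
```

In the IoT arrangement the abstract describes, a balance-reading microcontroller would stream `measured_mg` values over the network and a check of this kind would run on each dispense.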
Csiszar, Susan A; Meyer, David E; Dionisio, Kathie L; Egeghy, Peter; Isaacs, Kristin K; Price, Paul S; Scanlon, Kelly A; Tan, Yu-Mei; Thomas, Kent; Vallero, Daniel; Bare, Jane C
2016-11-01
Life Cycle Assessment (LCA) is a decision-making tool that accounts for multiple impacts across the life cycle of a product or service. This paper presents a conceptual framework to integrate human health impact assessment with risk screening approaches to extend LCA to include near-field chemical sources (e.g., those originating from consumer products and building materials) that have traditionally been excluded from LCA. A new generation of rapid human exposure modeling and high-throughput toxicity testing is transforming chemical risk prioritization and provides an opportunity for integration of screening-level risk assessment (RA) with LCA. The combined LCA and RA approach considers environmental impacts of products alongside risks to human health, which is consistent with regulatory frameworks addressing RA within a sustainability mindset. A case study is presented to juxtapose LCA and risk screening approaches for a chemical used in a consumer product. The case study demonstrates how these new risk screening tools can be used to inform toxicity impact estimates in LCA and highlights needs for future research. The framework provides a basis for developing tools and methods to support decision making on the use of chemicals in products.
GenomicTools: a computational platform for developing high-throughput analytics in genomics.
Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo
2012-01-15
Recent advances in sequencing technology have resulted in the dramatic increase of sequencing data, which, in turn, requires efficient management of computational resources, such as computing time, memory requirements as well as prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.
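The kind of mathematical operation between sets of genomic regions that such toolkits provide can be sketched as a linear-time intersection of two sorted interval lists. This is an illustrative sketch in Python, not GenomicTools' C++ API; the example coordinates are invented.

```python
def intersect(a, b):
    """Intersect two sorted, non-overlapping lists of (start, end) regions.

    Coordinates are half-open: (start, end) covers start..end-1.
    A two-pointer sweep gives O(len(a) + len(b)) time and O(1) extra
    memory beyond the output, the kind of footprint needed for
    large sequencing datasets.
    """
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        # advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

# e.g., ChIP-seq peaks intersected with annotated exons
peaks = [(100, 200), (300, 400)]
exons = [(150, 350), (390, 500)]
overlaps = intersect(peaks, exons)
```

Operations such as union, subtraction, and windowing compose from the same sweep pattern, which is why region algebra forms the core of pipelines built on such tools.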
ProbCD: enrichment analysis accounting for categorization uncertainty.
Vêncio, Ricardo Z N; Shmulevich, Ilya
2007-10-12
As in many other areas of science, systems biology makes extensive use of statistical association and significance estimates in contingency tables, a type of categorical data analysis known in this field as enrichment (also over-representation or enhancement) analysis. In spite of efforts to create probabilistic annotations, especially in the Gene Ontology context, or to deal with uncertainty in high-throughput datasets, current enrichment methods largely ignore this probabilistic information, since they are mainly based on variants of Fisher's exact test. We developed open-source R-based software for probabilistic categorical data analysis, ProbCD, that does not require a static contingency table. The contingency table for the enrichment problem is built using the expectation of a Bernoulli scheme stochastic process given the categorization probabilities. An on-line interface was created to allow usage by non-programmers and is available at: http://xerad.systemsbiology.net/ProbCD/. We present an analysis framework and software tools to address the issue of uncertainty in categorical data analysis. In particular, concerning enrichment analysis, ProbCD can accommodate: (i) the stochastic nature of high-throughput experimental techniques and (ii) probabilistic gene annotation.
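The expected contingency table described above can be sketched by summing Bernoulli membership probabilities instead of tallying hard 0/1 annotations. This is a simplified illustration of the idea (not ProbCD's R implementation), with invented gene names and probabilities.

```python
def expected_table(p_category, selected):
    """Expected 2x2 contingency table under probabilistic annotation.

    p_category: dict gene -> probability the gene belongs to the category.
    selected:   set of genes flagged by the experiment (e.g., DE genes).
    Each gene's category membership is a Bernoulli variable, so the
    expected cell counts are sums of probabilities rather than integers.
    Returns (a, b, c, d) = (selected & in-category, selected & not,
    unselected & in-category, unselected & not).
    """
    a = sum(p for g, p in p_category.items() if g in selected)
    b = sum(1 - p for g, p in p_category.items() if g in selected)
    c = sum(p for g, p in p_category.items() if g not in selected)
    d = sum(1 - p for g, p in p_category.items() if g not in selected)
    return a, b, c, d

# Invented annotation probabilities for four genes, two of them selected.
p_cat = {"g1": 0.9, "g2": 0.6, "g3": 0.1, "g4": 0.2}
table = expected_table(p_cat, selected={"g1", "g2"})
```

With certain annotations (all probabilities 0 or 1) the expected table reduces to the ordinary integer table used by Fisher's exact test, so the probabilistic version strictly generalizes the classical setup.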
A reproducible approach to high-throughput biological data acquisition and integration
Rahnavard, Gholamali; Waldron, Levi; McIver, Lauren; Shafquat, Afrah; Franzosa, Eric A.; Miropolsky, Larissa; Sweeney, Christopher
2015-01-01
Modern biological research requires rapid, complex, and reproducible integration of multiple experimental results generated both internally and externally (e.g., from public repositories). Although large systematic meta-analyses are among the most effective approaches both for clinical biomarker discovery and for computational inference of biomolecular mechanisms, identifying, acquiring, and integrating relevant experimental results from multiple sources for a given study can be time-consuming and error-prone. To enable efficient and reproducible integration of diverse experimental results, we developed a novel approach for standardized acquisition and analysis of high-throughput and heterogeneous biological data. This allowed, first, novel biomolecular network reconstruction in human prostate cancer, which correctly recovered and extended the NFκB signaling pathway. Next, we investigated host-microbiome interactions. In less than an hour of analysis time, the system retrieved data and integrated six germ-free murine intestinal gene expression datasets to identify the genes most influenced by the gut microbiota, which comprised a set of immune-response and carbohydrate metabolism processes. Finally, we constructed integrated functional interaction networks to compare connectivity of peptide secretion pathways in the model organisms Escherichia coli, Bacillus subtilis, and Pseudomonas aeruginosa. PMID:26157642
A transmission imaging spectrograph and microfabricated channel system for DNA analysis.
Simpson, J W; Ruiz-Martinez, M C; Mulhern, G T; Berka, J; Latimer, D R; Ball, J A; Rothberg, J M; Went, G T
2000-01-01
In this paper we present the development of a DNA analysis system using a microfabricated channel device and a novel transmission imaging spectrograph which can be efficiently incorporated into a high throughput genomics facility for both sizing and sequencing of DNA fragments. The device contains 48 channels etched on a glass substrate. The channels are sealed with a flat glass plate which also provides a series of apertures for sample loading and contact with buffer reservoirs. Samples can be easily loaded in volumes up to 640 nL without band broadening because of an efficient electrokinetic stacking at the electrophoresis channel entrance. The system uses a dual laser excitation source and a highly sensitive charge-coupled device (CCD) detector allowing for simultaneous detection of many fluorescent dyes. The sieving matrices for the separation of single-stranded DNA fragments are polymerized in situ in denaturing buffer systems. Examples of separation of single-stranded DNA fragments up to 500 bases in length are shown, including accurate sizing of GeneCalling fragments, and sequencing samples prepared with a reduced amount of dye terminators. An increase in sample throughput has been achieved by color multiplexing.
Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho
2017-11-01
High-throughput sequencing methods have become a routine analysis tool in environmental sciences as well as in the public and private sectors. These methods provide vast amounts of data, which need to be analysed in several steps. Although the bioinformatics may be carried out using several public tools, many analytical pipelines offer too few options for optimal analysis of more complicated or customized designs. Here we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We describe the design and options of PipeCraft and evaluate its performance by analysing data sets from three different sequencing platforms. We demonstrate that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable. © 2017 John Wiley & Sons Ltd.
Pyicos: a versatile toolkit for the analysis of high-throughput sequencing data
Althammer, Sonja; González-Vallinas, Juan; Ballaré, Cecilia; Beato, Miguel; Eyras, Eduardo
2011-01-01
Motivation: High-throughput sequencing (HTS) has revolutionized gene regulation studies and is now fundamental for the detection of protein–DNA and protein–RNA binding, as well as for measuring RNA expression. With increasing variety and sequencing depth of HTS datasets, the need for more flexible and memory-efficient tools to analyse them is growing. Results: We describe Pyicos, a powerful toolkit for the analysis of mapped reads from diverse HTS experiments: ChIP-Seq, either punctuated or broad signals, CLIP-Seq and RNA-Seq. We prove the effectiveness of Pyicos to select for significant signals and show that its accuracy is comparable and sometimes superior to that of methods specifically designed for each particular type of experiment. Pyicos facilitates the analysis of a variety of HTS datatypes through its flexibility and memory efficiency, providing a useful framework for data integration into models of regulatory genomics. Availability: Open-source software, with tutorials and protocol files, is available at http://regulatorygenomics.upf.edu/pyicos or as a Galaxy server at http://regulatorygenomics.upf.edu/galaxy Contact: eduardo.eyras@upf.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21994224
Han, Xiaoping; Chen, Haide; Huang, Daosheng; Chen, Huidong; Fei, Lijiang; Cheng, Chen; Huang, He; Yuan, Guo-Cheng; Guo, Guoji
2018-04-05
Human pluripotent stem cells (hPSCs) provide powerful models for studying cellular differentiation and unlimited sources of cells for regenerative medicine. However, a comprehensive single-cell-level differentiation roadmap for hPSCs has not been achieved. We use high-throughput single-cell RNA-sequencing (scRNA-seq), based on optimized microfluidic circuits, to profile early differentiation lineages in the human embryoid body system. We present a cellular-state landscape for hPSC early differentiation that covers multiple cellular lineages, including neural, muscle, endothelial, stromal, liver, and epithelial cells. Through pseudotime analysis, we construct the developmental trajectories of these progenitor cells and reveal the gene expression dynamics of cell differentiation. We further reprogram primed H9 cells into naïve-like H9 cells to study the cellular-state transition process. We find that genes related to hemogenic endothelium development are enriched in naïve-like H9 cells. Functionally, naïve-like H9 cells show higher potency for differentiation into hematopoietic lineages than primed cells. Our single-cell analysis reveals the cellular-state landscape of hPSC early differentiation, offering insights that can be harnessed to optimize differentiation protocols.
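As a caricature of what pseudotime ordering does, cells can be ranked by their projection onto the leading principal axis of their expression profiles. The plain-Python sketch below (power iteration) is illustrative only and far simpler than the trajectory-inference method used in the study:

```python
def pseudotime_order(cells):
    """Rank cells along the leading principal axis of their expression
    profiles: a bare-bones caricature of pseudotime ordering."""
    n, d = len(cells), len(cells[0])
    # Center each gene (column) at zero mean.
    means = [sum(c[j] for c in cells) / n for j in range(d)]
    x = [[c[j] - means[j] for j in range(d)] for c in cells]
    # Power iteration for the leading eigenvector of X^T X.
    v = [1.0] * d
    for _ in range(100):
        xv = [sum(row[j] * v[j] for j in range(d)) for row in x]
        w = [sum(x[i][j] * xv[i] for i in range(n)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    # Project each cell onto the axis and sort; the sign (direction of
    # "time") is arbitrary, as it is for real pseudotime methods.
    scores = [sum(row[j] * v[j] for j in range(d)) for row in x]
    return sorted(range(n), key=lambda i: scores[i])
```

Real pseudotime tools additionally handle branching trajectories, noise models and dimensionality reduction; only the ordering idea is kept here.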
USDA-ARS's Scientific Manuscript database
Field-based high-throughput phenotyping is an emerging approach to characterize difficult, time-sensitive plant traits in relevant growing conditions. Proximal sensing carts have been developed as an alternative platform to more costly high-clearance tractors for phenotyping dynamic traits in the fi...
High-throughput profiling and analysis of plant responses over time to abiotic stress
USDA-ARS's Scientific Manuscript database
Energy sorghum (Sorghum bicolor (L.) Moench) is a rapidly growing, high-biomass, annual crop prized for abiotic stress tolerance. Measuring genotype-by-environment (G x E) interactions remains a progress bottleneck. High throughput phenotyping within controlled environments has been proposed as a po...
ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
A high-throughput multiplex method adapted for GMO detection.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
2008-12-24
A high-throughput multiplex assay for the detection of genetically modified organisms (GMOs) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") covering taxon-specific endogenous reference genes, GMO constructs, screening targets, construct-specific and event-specific targets, and donor organisms. The assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection, and it demonstrated high specificity and sensitivity. The results suggest that this assay is a reliable, flexible, and cost- and time-effective tool for high-throughput GMO detection.
Lesiak, Ashton D; Cody, Robert B; Ubukata, Masaaki; Musah, Rabi A
2016-03-01
We demonstrate the utility of direct analysis in real time ionization coupled with high resolution time-of-flight mass spectrometry (DART-HRTOFMS) in revealing the adulteration of commercially available Sceletium tortuosum, a mind-altering plant-based drug commonly known as Kanna. Accurate masses consistent with alkaloids previously isolated from S. tortuosum plant material enabled identification of the products as Kanna, and in-source collision-induced dissociation (CID) confirmed the presence of one of these alkaloids, hordenine, while simultaneously revealing the presence of an adulterant. The stimulant ephedrine, which has been banned in herbal products and supplements, was confirmed to be present in a sample through the use of in-source CID. High-throughput DART-HRTOFMS was shown to be a powerful tool to not only screen plant-based drugs of abuse for psychotropic alkaloids, but also to reveal the presence of scheduled substances and adulterants. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Ren, Kangning; Liang, Qionglin; Mu, Xuan; Luo, Guoan; Wang, Yiming
2009-03-07
A novel miniaturized, portable fluorescence detection system for capillary array electrophoresis (CAE) on a microfluidic chip was developed, consisting of a scanning light-emitting diode (LED) light source and a single-point photoelectric sensor. With no charge-coupled device (CCD), lenses, fibers or moving parts, the system is greatly simplified. Pulsed driving of the LED significantly increased the sensitivity while greatly reducing power consumption and photobleaching. The highly integrated system is robust and easy to use, and these advantages realize the concept of a portable micro-total analysis system (micro-TAS) that can run from a single universal serial bus (USB) port. Compared with traditional CAE detection systems, the current system can scan a radial capillary array at a high scanning rate. An 8-channel on-chip CAE separation of fluorescein isothiocyanate (FITC)-labeled arginine (Arg) was demonstrated with this system, yielding a limit of detection (LOD) of 640 amol.
RIPiT-Seq: A high-throughput approach for footprinting RNA:protein complexes
Singh, Guramrit; Ricci, Emiliano P.; Moore, Melissa J.
2013-01-01
Development of high-throughput approaches to map the RNA interaction sites of individual RNA binding proteins (RBPs) transcriptome-wide is rapidly transforming our understanding of post-transcriptional gene regulatory mechanisms. Here we describe a ribonucleoprotein (RNP) footprinting approach we recently developed for identifying occupancy sites of both individual RBPs and multi-subunit RNP complexes. RNA:protein immunoprecipitation in tandem (RIPiT) yields highly specific RNA footprints of cellular RNPs isolated via two sequential purifications; the resulting RNA footprints can then be identified by high-throughput sequencing (Seq). RIPiT-Seq is broadly applicable to all RBPs regardless of their RNA binding mode and thus provides a means to map the RNA binding sites of RBPs with poor inherent ultraviolet (UV) crosslinkability. Further, among current high-throughput approaches, RIPiT has the unique capacity to differentiate binding sites of RNPs with overlapping protein composition. It is therefore particularly suited for studying dynamic RNP assemblages whose composition evolves as gene expression proceeds. PMID:24096052
Li, Xiaofei; Wu, Yuhua; Li, Jun; Li, Yunjing; Long, Likun; Li, Feiwu; Wu, Gang
2015-01-05
The rapid increase in the number of genetically modified (GM) varieties has led to a demand for high-throughput methods to detect genetically modified organisms (GMOs). We describe a new dynamic array-based high-throughput method to simultaneously detect 48 targets in 48 samples on a Fluidigm system. The test targets included species-specific genes, common screening elements, most of the Chinese-approved GM events, and several unapproved events. The 48 TaqMan assays successfully amplified products from both single-event samples and complex samples with a GMO DNA amount of 0.05 ng, and displayed high specificity. To improve the sensitivity of detection, a preamplification step for the 48 pooled targets was added to enrich the amount of template before performing the dynamic chip assays. This dynamic chip-based method allows the synchronous high-throughput detection of multiple targets in multiple samples and thus represents an efficient, qualitative method for GMO multi-detection. PMID:25556930
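The qualitative read-out of such a samples-by-targets chip can be sketched as thresholding a Ct matrix. The cutoff value and data layout below are illustrative assumptions, not the study's actual analysis pipeline:

```python
def call_targets(ct_matrix, ct_cutoff=35.0):
    """Qualitative presence/absence calls from a samples x targets Ct
    matrix: a target is called 'detected' in a sample when its TaqMan
    reaction crossed threshold at or before the cutoff cycle.
    None means no amplification; the cutoff cycle is illustrative."""
    return {
        sample: [target for target, ct in targets.items()
                 if ct is not None and ct <= ct_cutoff]
        for sample, targets in ct_matrix.items()
    }
```

For example, a sample positive for the 35S screening element and a species reference gene, but with no NOS amplification, yields a two-target call list for that sample.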
A high-throughput label-free nanoparticle analyser.
Fraikin, Jean-Luc; Teesalu, Tambet; McKenney, Christopher M; Ruoslahti, Erkki; Cleland, Andrew N
2011-05-01
Synthetic nanoparticles and genetically modified viruses are used in a range of applications, but high-throughput analytical tools for the physical characterization of these objects are needed. Here we present a microfluidic analyser that detects individual nanoparticles and characterizes complex, unlabelled nanoparticle suspensions. We demonstrate the detection, concentration analysis and sizing of individual synthetic nanoparticles in a multicomponent mixture with sufficient throughput to analyse 500,000 particles per second. We also report the rapid size and titre analysis of unlabelled bacteriophage T7 in both salt solution and mouse blood plasma, using just ~1 × 10⁻⁶ l of analyte. Unexpectedly, in the native blood plasma we discover a large background of naturally occurring nanoparticles with a power-law size distribution. The high-throughput detection capability, scalable fabrication and simple electronics of this instrument make it well suited for diverse applications.
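A power-law size distribution like the one reported for the plasma background is typically characterized by its exponent. Below is a minimal sketch of the standard continuous maximum-likelihood estimator (Clauset-Shalizi-Newman form); the choice of estimator is an assumption for illustration, not a method stated in the abstract:

```python
import math

def powerlaw_alpha(sizes, x_min):
    """Continuous maximum-likelihood estimate of the exponent alpha in
    p(x) ~ x^(-alpha) for x >= x_min (Clauset-Shalizi-Newman estimator).
    Observations below x_min are discarded as non-tail data."""
    tail = [x for x in sizes if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)
```

In practice x_min itself must also be chosen (e.g. by minimizing a Kolmogorov-Smirnov distance); here it is taken as given.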
High Performance Computing Modernization Program Kerberos Throughput Test Report
2017-10-26
functionality as Kerberos plugins. The pre-release production kit was used in these tests to compare against the current release kit. YubiKey support... 2. THROUGHPUT TESTING 2.1 Testing Components Throughput testing was done to determine the benefits of the pre-...both the current release kit and the pre-release production kit for a total of 378 individual tests in order to note any improvements. Based on work
Metabolomics Approach for Toxicity Screening of Volatile Substances
In 2007 the National Research Council envisioned the need for inexpensive, high-throughput, cell-based toxicity testing methods relevant to human health. High-throughput screening (HTS) in vitro approaches have addressed these problems by using robotics. However, the ch...
AOPs & Biomarkers: Bridging High Throughput Screening and Regulatory Decision Making.
As high throughput screening (HTS) approaches play a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models for this purpose are becoming increasingly more sophisticated...
New High Throughput Methods to Estimate Chemical Exposure
EPA has made many recent advances in high throughput bioactivity testing. However, concurrent advances in rapid, quantitative prediction of human and ecological exposures have been lacking, despite the clear importance of both measures for a risk-based approach to prioritizing an...
Fully Bayesian Analysis of High-throughput Targeted Metabolomics Assays
High-throughput metabolomic assays that allow simultaneous targeted screening of hundreds of metabolites have recently become available in kit form. Such assays provide a window into understanding changes to biochemical pathways due to chemical exposure or disease, and are usefu...
Leulliot, Nicolas; Trésaugues, Lionel; Bremang, Michael; Sorel, Isabelle; Ulryck, Nathalie; Graille, Marc; Aboulfath, Ilham; Poupon, Anne; Liger, Dominique; Quevillon-Cheruel, Sophie; Janin, Joël; van Tilbeurgh, Herman
2005-06-01
Crystallization has long been regarded as one of the major bottlenecks in high-throughput structural determination by X-ray crystallography. Structural genomics projects have addressed this issue by using robots to set up automated crystal screens using nanodrop technology. This has moved the bottleneck from obtaining the first crystal hit to obtaining diffraction-quality crystals, as crystal optimization is a notoriously slow process that is difficult to automate. This article describes the high-throughput optimization strategies used in the Yeast Structural Genomics project, with selected successful examples.
Towards sensitive, high-throughput, biomolecular assays based on fluorescence lifetime
NASA Astrophysics Data System (ADS)
Ioanna Skilitsi, Anastasia; Turko, Timothé; Cianfarani, Damien; Barre, Sophie; Uhring, Wilfried; Hassiepen, Ulrich; Léonard, Jérémie
2017-09-01
Time-resolved fluorescence detection for robust sensing of biomolecular interactions is developed by implementing time-correlated single photon counting under high-throughput conditions. Droplet microfluidics is used as a promising platform for very fast handling of low-volume samples. We illustrate the potential of this sensitive and cost-effective technology in the context of an enzymatic activity assay based on fluorescently-labeled biomolecules. Fluorescence lifetime detection by time-correlated single photon counting is shown to enable reliable discrimination between positive and negative control samples at throughputs as high as several hundred samples per second.
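For a mono-exponential decay, the lifetime that TCSPC estimates has a particularly simple maximum-likelihood form: the mean photon arrival time (valid only under idealized assumptions of no instrument response and an unbounded counting window). A toy sketch, with an illustrative positive/negative decision rule that is not the assay's actual logic:

```python
def lifetime_mle(arrival_times_ns):
    """Maximum-likelihood lifetime for a mono-exponential decay: with an
    ideal instrument response and an unbounded counting window, the MLE
    is simply the mean photon arrival time."""
    return sum(arrival_times_ns) / len(arrival_times_ns)

def classify(arrival_times_ns, tau_threshold_ns):
    """Toy positive/negative discrimination by fitted lifetime; the
    threshold and labels are illustrative, not the assay's real rule."""
    return "positive" if lifetime_mle(arrival_times_ns) > tau_threshold_ns else "negative"
```

Real TCSPC analysis must deconvolve the instrument response function and account for the finite measurement window, which this sketch deliberately ignores.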