Sample records for parallel high throughput

  1. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    PubMed

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., robotic instruments), parallel processing (i.e., preparing, processing, and evaluating samples in parallel), and miniaturization (i.e., reduced sample sizes), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, and surface defects, among others, have also been developed in-house. These workflows correlate well with traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs enables accelerated discovery and commercialization.

  2. Role of APOE Isoforms in the Pathogenesis of TBI induced Alzheimer’s Disease

    DTIC Science & Technology

    2016-10-01

    deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel sequencing... demonstrate that the lack of Abca1 increases amyloid plaques and decreases APOE protein levels in AD-model mice. In this proposal we will test the hypothesis... injury, inflammatory reaction, transcriptome, high throughput massive parallel sequencing, mRNA-seq, behavioral testing, memory impairment, recovery

  3. TCP Throughput Profiles Using Measurements over Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincaré map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
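
    A minimal numerical sketch of the two-phase abstraction described above (a slow-start ramp-up followed by a sustainment phase); the model form, parameter values, and function names are illustrative assumptions, not the authors' exact formulation:

    ```python
    import math

    def avg_throughput(rtt_s, capacity_bps=10e9, transfer_bytes=50e9, mss_bytes=1500):
        """Average throughput of one flow under a toy ramp-up/sustainment model."""
        # Rounds for the congestion window to reach the bandwidth-delay product.
        bdp_segments = capacity_bps * rtt_s / (8 * mss_bytes)
        rampup_rounds = math.log2(max(bdp_segments, 1.0))
        rampup_time = rampup_rounds * rtt_s
        # Bytes sent during ramp-up: one window per RTT, doubling each round.
        rampup_bytes = sum(mss_bytes * 2**r for r in range(int(rampup_rounds)))
        remaining = max(transfer_bytes - rampup_bytes, 0.0)
        sustain_time = remaining * 8 / capacity_bps  # assumes full sustainment
        return transfer_bytes * 8 / (rampup_time + sustain_time)

    for rtt_ms in (1, 10, 50, 100, 200, 366):
        print(f"RTT {rtt_ms:>3} ms -> {avg_throughput(rtt_ms / 1000) / 1e9:.2f} Gbps")
    ```

    Because the ramp-up time grows only logarithmically with the bandwidth-delay product while the sustainment phase dominates a long transfer, throughput in this toy model falls off slowly with RTT, which is the qualitative shape of a concave profile.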

  4. High-throughput sequence alignment using Graphics Processing Units

    PubMed Central

    Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh

    2007-01-01

    Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356

  5. A 0.13-µm implementation of 5 Gb/s and 3-mW folded parallel architecture for AES algorithm

    NASA Astrophysics Data System (ADS)

    Rahimunnisa, K.; Karthigaikumar, P.; Kirubavathy, J.; Jayakumar, J.; Kumar, S. Suresh

    2014-02-01

    A new architecture for encrypting and decrypting confidential data using the Advanced Encryption Standard (AES) algorithm is presented in this article. The structure combines a folded structure with a parallel architecture to increase throughput, and the overall design achieves high throughput at low power. The proposed architecture is implemented in 0.13-µm complementary metal-oxide-semiconductor (CMOS) technology and compared with existing structures; the results show that it provides higher throughput and lower power consumption than existing designs.

  6. Experiments and Analyses of Data Transfers Over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    Dedicated wide-area network connections are increasingly employed in high-performance computing and big data scenarios. One might expect the performance and dynamics of data transfers over such connections to be easy to analyze due to the lack of competing traffic. However, non-linear transport dynamics and end-system complexities (e.g., multi-core hosts and distributed filesystems) can in fact make analysis surprisingly challenging. We present extensive measurements of memory-to-memory and disk-to-disk file transfers over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory-to-memory transfers, profiles of both TCP and UDT throughput as a function of RTT show concave and convex regions; large buffer sizes and more parallel flows lead to wider concave regions, which are highly desirable. TCP and UDT both also display complex throughput dynamics, as indicated by their Poincaré maps and Lyapunov exponents. For disk-to-disk transfers, we determine that high throughput can be achieved via a combination of parallel I/O threads, parallel network threads, and direct I/O mode. Our measurements also show that Lustre filesystems can be mounted over long-haul connections using LNet routers, although challenges remain in jointly optimizing file I/O and transport method parameters to achieve peak throughput.
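
    The Poincaré map and Lyapunov exponent diagnostics used above can be illustrated with a short script; this is my own minimal construction over synthetic throughput traces, not the instrumentation from the paper:

    ```python
    import math, random

    def poincare_map(trace):
        # Successive-value pairs (x_t, x_{t+1}); a tight diagonal cluster suggests
        # smooth dynamics, a scattered cloud suggests erratic throughput.
        return list(zip(trace[:-1], trace[1:]))

    def lyapunov_estimate(trace, eps=0.05, horizon=5):
        # Crude largest-Lyapunov-exponent estimate: average log-divergence rate
        # of initially close pairs of points after `horizon` steps.
        logs = []
        n = len(trace) - horizon
        for i in range(n):
            for j in range(i + 1, n):
                d0 = abs(trace[i] - trace[j])
                if 0 < d0 < eps:
                    dh = abs(trace[i + horizon] - trace[j + horizon])
                    if dh > 0:
                        logs.append(math.log(dh / d0) / horizon)
        return sum(logs) / len(logs) if logs else float("nan")

    random.seed(1)
    smooth = [9.0 + random.gauss(0, 0.05) for _ in range(300)]  # stable ~9 Gbps
    bursty = [9.0 * random.random() for _ in range(300)]        # erratic 0-9 Gbps
    print("Poincare pairs:", poincare_map(smooth)[:2])
    print("lambda smooth:", round(lyapunov_estimate(smooth), 3))
    print("lambda bursty:", round(lyapunov_estimate(bursty, eps=0.5), 3))
    ```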

  7. High-throughput measurements of biochemical responses using the plate::vision multimode 96 minilens array reader.

    PubMed

    Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich

    2006-01-01

    The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be quite comparable to that of other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have greatly accelerated our drug discovery process.

  8. A high throughput array microscope for the mechanical characterization of biomaterials

    NASA Astrophysics Data System (ADS)

    Cribb, Jeremy; Osborne, Lukas D.; Hsiao, Joe Ping-Lin; Vicci, Leandra; Meshram, Alok; O'Brien, E. Tim; Spero, Richard Chasen; Taylor, Russell; Superfine, Richard

    2015-02-01

    In the last decade, the emergence of high throughput screening has enabled the development of novel drug therapies and elucidated many complex cellular processes. Concurrently, the mechanobiology community has developed tools and methods to show that the dysregulation of biophysical properties and the biochemical mechanisms controlling those properties contribute significantly to many human diseases. Despite these advances, a complete understanding of the connection between biomechanics and disease will require advances in instrumentation that enable parallelized, high throughput assays capable of probing complex signaling pathways, studying biology in physiologically relevant conditions, and capturing specimen and mechanical heterogeneity. Traditional biophysical instruments are unable to meet this need. To address the challenge of large-scale, parallelized biophysical measurements, we have developed an automated array high-throughput microscope system that utilizes passive microbead diffusion to characterize mechanical properties of biomaterials. The instrument is capable of acquiring data on twelve channels simultaneously, where each channel in the system can independently drive two-channel fluorescence imaging at up to 50 frames per second. We employ this system to measure the concentration-dependent apparent viscosity of hyaluronan, an essential polymer found in connective tissue and whose expression has been implicated in cancer progression.
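
    At its core, passive-microbead diffusion microrheology of this kind reduces to a Stokes-Einstein calculation; a sketch under my own simplifying assumptions (2-D tracking, a purely viscous medium, invented trajectory numbers):

    ```python
    import math

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def diffusion_coeff_2d(msd_m2, tau_s):
        # For 2-D particle tracking in a viscous fluid, MSD = 4 * D * tau.
        return msd_m2 / (4.0 * tau_s)

    def apparent_viscosity(diff_m2_s, bead_diameter_m, temp_k=298.15):
        # Stokes-Einstein: D = kB*T / (3*pi*eta*d)  =>  eta = kB*T / (3*pi*d*D).
        return KB * temp_k / (3.0 * math.pi * bead_diameter_m * diff_m2_s)

    # Example: a 1-um bead with an MSD of 1.7e-12 m^2 at a 1 s lag time.
    D = diffusion_coeff_2d(1.7e-12, 1.0)
    print(f"eta ~ {apparent_viscosity(D, 1e-6) * 1e3:.2f} mPa*s")  # ~water-like
    ```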

  9. Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules

    PubMed Central

    Panzeri, Francesco

    2017-01-01

    We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out potential limitations of the current single-laser excitation design, as well as analysis challenges and their solutions. PMID:28419142
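
    The per-burst arithmetic behind such measurements is compact: each detected burst yields donor and acceptor photon counts, and parallel spots simply multiply the burst yield. A sketch with an illustrative gamma correction and invented counts (not the authors' pipeline):

    ```python
    from collections import Counter

    def fret_efficiency(n_donor, n_acceptor, gamma=1.0):
        # Proximity ratio / corrected FRET efficiency from per-burst counts.
        return n_acceptor / (n_acceptor + gamma * n_donor)

    def pooled_histogram(bursts_per_spot, bins=20, gamma=1.0):
        # bursts_per_spot: one list per excitation spot of (donor, acceptor)
        # photon-count pairs; bursts from all spots pool into one histogram.
        hist = Counter()
        for bursts in bursts_per_spot:
            for nd, na in bursts:
                e = fret_efficiency(nd, na, gamma)
                hist[min(int(e * bins), bins - 1)] += 1
        return [hist[b] for b in range(bins)]

    spots = [[(30, 70), (55, 45)], [(28, 72), (60, 40)]]  # toy data, 2 of 8 spots
    print(pooled_histogram(spots, bins=10))
    ```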

  10. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit

    PubMed Central

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R.; Smith, Jeremy C.; Kasson, Peter M.; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-01-01

    Motivation: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed at massive scale in clusters, web servers, distributed computing or cloud resources. Results: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. Availability: GROMACS is an open source and free software available from http://www.gromacs.org. Contact: erik.lindahl@scilifelab.se Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23407358

  11. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
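
    Distributing data by partitioning a spatial index, as described, is commonly done with a space-filling curve so that chunks close in 3-D stay close in the key space; a sketch using a Morton (Z-order) key (the chunk grid and node count are illustrative, not the project's actual layout):

    ```python
    def morton3(x, y, z, bits=10):
        # Interleave coordinate bits so chunks that are near each other in 3-D
        # tend to get nearby 1-D keys, preserving spatial locality.
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (3 * i)
            code |= ((y >> i) & 1) << (3 * i + 1)
            code |= ((z >> i) & 1) << (3 * i + 2)
        return code

    def node_for_chunk(cx, cy, cz, n_nodes=8):
        # Map a chunk's grid coordinates to a storage node via its Morton key.
        return morton3(cx, cy, cz) % n_nodes

    print([node_for_chunk(x, y, 0) for x in range(4) for y in range(2)])
    ```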

  12. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  13. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  14. Multinode acoustic focusing for parallel flow cytometry

    PubMed Central

    Piyasena, Menake E.; Suthanthiraraj, Pearlson P. Austin; Applegate, Robert W.; Goumas, Andrew M.; Woods, Travis A.; López, Gabriel P.; Graves, Steven W.

    2012-01-01

    Flow cytometry can simultaneously measure and analyze multiple properties of single cells or particles with high sensitivity and precision. Yet, conventional flow cytometers have fundamental limitations with regard to analyzing particles larger than about 70 microns, analyzing at flow rates greater than a few hundred microliters per minute, and providing analysis rates greater than 50,000 per second. To overcome these limits, we have developed multi-node acoustic focusing flow cells that can position particles (as small as a red blood cell and as large as 107 microns in diameter) into as many as 37 parallel flow streams. We demonstrate the potential of such flow cells for the development of high throughput, parallel flow cytometers by precision focusing of flow cytometry alignment microspheres, red blood cells, and the analysis of a CD4+ cellular immunophenotyping assay. This approach will have significant impact towards the creation of high throughput flow cytometers for rare cell detection applications (e.g. circulating tumor cells), applications requiring large particle analysis, and high volume flow cytometry. PMID:22239072

  15. Mapper: high throughput maskless lithography

    NASA Astrophysics Data System (ADS)

    Kuiper, V.; Kampherbeek, B. J.; Wieland, M. J.; de Boer, G.; ten Berge, G. F.; Boers, J.; Jager, R.; van de Peut, T.; Peijster, J. J. M.; Slot, E.; Steenbrink, S. W. H. K.; Teepen, T. F.; van Veen, A. H. V.

    2009-01-01

    Maskless electron beam lithography, or electron beam direct write, has been around for a long time in the semiconductor industry and was pioneered from the mid-1960s onwards. This technique has been used for mask writing applications as well as device engineering and in some cases chip manufacturing. However, because of its relatively low throughput compared to optical lithography, electron beam lithography has never been the mainstream lithography technology. To extend optical lithography, double patterning (as a bridging technology) and EUV lithography are currently being explored. Irrespective of the technical viability of both approaches, one thing seems clear: they will be expensive [1]. MAPPER Lithography is developing a maskless lithography technology based on massively-parallel electron-beam writing with high speed optical data transport for switching the electron beams. In this way, optical columns can be made with a throughput of 10-20 wafers per hour. By clustering several of these columns together, high throughputs can be realized in a small footprint. This enables a highly cost-competitive alternative to double patterning and EUV alternatives. In 2007 MAPPER obtained its Proof of Lithography milestone by exposing in its Demonstrator 45 nm half pitch structures with 110 electron beams in parallel, where all the beams were individually switched on and off [2]. In 2008, MAPPER took the next step in its development by building several tools. A new platform was designed and built that contains a 300 mm wafer stage, a wafer handler and an electron beam column with 110 parallel electron beams. This manuscript describes the first patterning results with this 300 mm platform.

  16. Real-time Full-spectral Imaging and Affinity Measurements from 50 Microfluidic Channels using Nanohole Surface Plasmon Resonance†

    PubMed Central

    Lee, Si Hoon; Lindquist, Nathan C.; Wittenberg, Nathan J.; Jordan, Luke R.; Oh, Sang-Hyun

    2012-01-01

    With recent advances in high-throughput proteomics and systems biology, there is a growing demand for new instruments that can precisely quantify a wide range of receptor-ligand binding kinetics in a high-throughput fashion. Here we demonstrate a surface plasmon resonance (SPR) imaging spectroscopy instrument capable of extracting binding kinetics and affinities from 50 parallel microfluidic channels simultaneously. The instrument utilizes large-area (~cm2) metallic nanohole arrays as SPR sensing substrates and combines a broadband light source, a high-resolution imaging spectrometer and a low-noise CCD camera to extract spectral information from every channel in real time with a refractive index resolution of 7.7 × 10−6. To demonstrate the utility of our instrument for quantifying a wide range of biomolecular interactions, each parallel microfluidic channel is coated with a biomimetic supported lipid membrane containing ganglioside (GM1) receptors. The binding kinetics of cholera toxin b (CTX-b) to GM1 are then measured in a single experiment from 50 channels. By combining the highly parallel microfluidic device with large-area periodic nanohole array chips, our SPR imaging spectrometer system enables high-throughput, label-free, real-time SPR biosensing, and its full-spectral imaging capability combined with nanohole arrays could enable integration of SPR imaging with concurrent surface-enhanced Raman spectroscopy. PMID:22895607

  17. Line-Focused Optical Excitation of Parallel Acoustic Focused Sample Streams for High Volumetric and Analytical Rate Flow Cytometry.

    PubMed

    Kalb, Daniel M; Fencl, Frank A; Woods, Travis A; Swanson, August; Maestas, Gian C; Juárez, Jaime J; Edwards, Bruce S; Shreve, Andrew P; Graves, Steven W

    2017-09-19

    Flow cytometry provides highly sensitive multiparameter analysis of cells and particles but has been largely limited to the use of a single focused sample stream. This limits the analytical rate to ∼50K particles/s and the volumetric rate to ∼250 μL/min. Despite the analytical prowess of flow cytometry, there are applications where these rates are insufficient, such as rare cell analysis in high cellular backgrounds (e.g., circulating tumor cells and fetal cells in maternal blood), detection of cells/particles in large dilute samples (e.g., water quality, urine analysis), or high-throughput screening applications. Here we report a highly parallel acoustic flow cytometer that uses an acoustic standing wave to focus particles into 16 parallel analysis points across a 2.3 mm wide optical flow cell. A line-focused laser and wide-field collection optics are used to excite and collect the fluorescence emission of these parallel streams onto a high-speed camera for analysis. With this instrument format and fluorescent microsphere standards, we obtain analysis rates of 100K/s and flow rates of 10 mL/min, while maintaining optical performance comparable to that of a commercial flow cytometer. The results with our initial prototype instrument demonstrate that the integration of key parallelizable components, including the line-focused laser, particle focusing using multinode acoustic standing waves, and a spatially arrayed detector, can increase analytical and volumetric throughputs by orders of magnitude in a compact, simple, and cost-effective platform. Such instruments will be of great value to applications in need of high-throughput yet sensitive flow cytometry analysis.

  18. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filters architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filters architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8 K × 4 K video format at 132 fps.

  19. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    PubMed

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
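
    The task-parallel pattern used there (a pool of workers, each docking one compound independently) can be sketched on a single node with Python's standard library; the paper's MPI version distributes the same idea across supercomputer nodes. The compound IDs and the scoring stub below are hypothetical:

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def dock_one(compound_id):
        # Stand-in for one independent docking job (in the paper, an Autodock4
        # run); each task handles its own ligand and returns a binding score.
        score = -0.001 * (sum(map(ord, compound_id)) % 1000)  # fake score
        return compound_id, score

    def virtual_screen(compound_ids, workers=8):
        # Embarrassingly parallel screen: tasks never communicate, so
        # throughput scales roughly with the number of workers.
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(dock_one, compound_ids))
        return sorted(results, key=lambda r: r[1])  # best (lowest) energy first

    if __name__ == "__main__":
        library = [f"ZINC{i:08d}" for i in range(1000)]  # hypothetical IDs
        print(virtual_screen(library)[:3])
    ```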

  20. High throughput screening of particle conditioning operations: I. System design and method development.

    PubMed

    Noyes, Aaron; Huffman, Ben; Godavarti, Ranga; Titchener-Hooker, Nigel; Coffman, Jonathan; Sunasara, Khurram; Mukhopadhyay, Tarit

    2015-08-01

    The biotech industry is under increasing pressure to decrease both time to market and development costs. Simultaneously, regulators are expecting increased process understanding. High throughput process development (HTPD) employs small volumes, parallel processing, and high throughput analytics to reduce development costs and speed the development of novel therapeutics. As such, HTPD is increasingly viewed as integral to improving developmental productivity and deepening process understanding. Particle conditioning steps such as precipitation and flocculation may be used to aid the recovery and purification of biological products. In this first part of two articles, we describe an ultra scale-down (USD) system for high throughput particle conditioning (HTPC) composed of off-the-shelf components. The apparatus is comprised of a temperature-controlled microplate with magnetically driven stirrers and integrated with a Tecan liquid handling robot. With this system, 96 individual reaction conditions can be evaluated in parallel, including downstream centrifugal clarification. A comprehensive suite of high throughput analytics enables measurement of product titer, product quality, impurity clearance, clarification efficiency, and particle characterization. HTPC at the 1 mL scale was evaluated with fermentation broth containing a vaccine polysaccharide. The response profile was compared with the pilot-scale performance of a non-geometrically similar, 3 L reactor. An engineering characterization of the reactors and scale-up context examines theoretical considerations for comparing this USD system with larger scale stirred reactors. In the second paper, we will explore application of this system to industrially relevant vaccines and test different scale-up heuristics.

  1. A review of snapshot multidimensional optical imaging: measuring photon tags in parallel

    PubMed Central

    Gao, Liang; Wang, Lihong V.

    2015-01-01

    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons’ spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or parallel acquisition. Compared with scanning-based imagers, parallel acquisition—also dubbed snapshot imaging—has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally we discuss their state-of-the-art implementations and applications. PMID:27134340

  2. Combinatorial and high-throughput approaches in polymer science

    NASA Astrophysics Data System (ADS)

    Zhang, Huiqi; Hoogenboom, Richard; Meier, Michael A. R.; Schubert, Ulrich S.

    2005-01-01

    Combinatorial and high-throughput approaches have become topics of great interest in the last decade due to their potential ability to significantly increase research productivity. Recent years have witnessed a rapid extension of these approaches in many areas of the discovery of new materials including pharmaceuticals, inorganic materials, catalysts and polymers. This paper mainly highlights our progress in polymer research by using an automated parallel synthesizer, microwave synthesizer and ink-jet printer. The equipment and methodologies in our experiments, the high-throughput experimentation of different polymerizations (such as atom transfer radical polymerization, cationic ring-opening polymerization and emulsion polymerization) and the automated matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) sample preparation are described.

  3. Optimizing SIEM Throughput on the Cloud Using Parallelization.

    PubMed

    Alam, Masoom; Ihsan, Asif; Khan, Muazzam A; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, Muhammad Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSPs), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of OSTROM, a security framework built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events and investigate the effect on throughput, memory, and CPU usage in each configuration. The results indicate that the performance of the engine is limited by the rate of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory, and CPU usage.
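
    The winning configuration (each engine instance receives a quarter of the event stream but runs the full query set) can be sketched as follows; the event format and queries are invented for illustration, and this is not Esper's API:

    ```python
    from multiprocessing import Pool

    QUERIES = [
        ("failed_logins", lambda e: e["type"] == "login" and not e["ok"]),
        ("large_transfers", lambda e: e["type"] == "xfer" and e["bytes"] > 1 << 30),
    ]

    def run_all_queries(event_batch):
        # Each engine instance evaluates *all* queries over *its* event share.
        hits = {name: 0 for name, _ in QUERIES}
        for event in event_batch:
            for name, predicate in QUERIES:
                if predicate(event):
                    hits[name] += 1
        return hits

    def partition(events, n):
        # 1/N of the total events goes to each instance (the best setting above).
        return [events[i::n] for i in range(n)]

    if __name__ == "__main__":
        events = [{"type": "login", "ok": i % 7 != 0, "bytes": 0}
                  for i in range(10000)]
        with Pool(4) as pool:
            partials = pool.map(run_all_queries, partition(events, 4))
        print({k: sum(p[k] for p in partials) for k in partials[0]})
    ```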

  4. High-volume production of single and compound emulsions in a microfluidic parallelization arrangement coupled with coaxial annular world-to-chip interfaces.

    PubMed

    Nisisako, Takasi; Ando, Takuya; Hatsuzawa, Takeshi

    2012-09-21

    This study describes a microfluidic platform with coaxial annular world-to-chip interfaces for high-throughput production of single and compound emulsion droplets, having controlled sizes and internal compositions. The production module consists of two distinct elements: a planar square chip on which many copies of a microfluidic droplet generator (MFDG) are arranged circularly, and a cubic supporting module with coaxial annular channels for supplying fluids evenly to the inlets of the mounted chip, assembled from blocks with cylinders and holes. Three-dimensional flow was simulated to evaluate the distribution of flow velocity in the coaxial multiple annular channels. By coupling a 1.5 cm × 1.5 cm microfluidic chip with 144 parallelized MFDGs and a supporting module with two annular channels, for example, we could produce simple oil-in-water (O/W) emulsion droplets having a mean diameter of 90.7 μm and a coefficient of variation (CV) of 2.2% at a throughput of 180.0 mL h⁻¹. Furthermore, we successfully demonstrated high-throughput production of Janus droplets, double emulsions and triple emulsions, by coupling 1.5 cm × 1.5 cm to 4.5 cm × 4.5 cm microfluidic chips having 32-128 parallelized MFDGs of various geometries with supporting modules having 3-4 annular channels.
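
    As a plausibility check on these figures, the per-generator droplet rate implied by 180.0 mL/h of 90.7 µm droplets from 144 parallel generators can be worked out directly (a back-of-the-envelope estimate assuming all dispersed phase forms uniform droplets):

    ```python
    import math

    droplet_d_um = 90.7
    flow_ml_per_h = 180.0
    n_generators = 144

    droplet_vol_ml = (math.pi / 6) * (droplet_d_um * 1e-4) ** 3  # 1 um = 1e-4 cm
    drops_per_s = flow_ml_per_h / 3600 / droplet_vol_ml
    print(f"droplet volume ~ {droplet_vol_ml * 1e6:.3f} nL")
    print(f"~{drops_per_s:,.0f} drops/s total, "
          f"~{drops_per_s / n_generators:,.0f} Hz per generator")
    ```

    This gives roughly 0.39 nL per droplet, on the order of 1.3e5 drops/s overall, and a generation rate near 1 kHz per device, a plausible operating point for this kind of droplet generator.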

  5. Microfluidics for cell-based high throughput screening platforms - A review.

    PubMed

    Du, Guansheng; Fang, Qun; den Toonder, Jaap M J

    2016-01-15

    In recent decades, the basic microfluidic techniques for the study of cells, such as cell culture, cell separation, and cell lysis, have been well developed. Based on cell handling techniques, microfluidics has been widely applied in the fields of PCR (Polymerase Chain Reaction), immunoassays, organ-on-chip, stem cell research, and the analysis and identification of circulating tumor cells. As a major step in drug discovery, high-throughput screening allows rapid analysis of thousands of chemical, biochemical, genetic or pharmacological tests in parallel. In this review, we summarize the application of microfluidics in cell-based high throughput screening. The screening methods mentioned in this paper include approaches using the perfusion flow mode, the droplet mode, and the microarray mode. We also discuss the future development of microfluidic-based high throughput screening platforms for drug discovery.

  6. Enabling inspection solutions for future mask technologies through the development of massively parallel E-Beam inspection

    NASA Astrophysics Data System (ADS)

    Malloy, Matt; Thiel, Brad; Bunday, Benjamin D.; Wurm, Stefan; Jindal, Vibhu; Mukhtar, Maseeh; Quoi, Kathy; Kemen, Thomas; Zeidler, Dirk; Eberle, Anna Lena; Garbowski, Tomasz; Dellemann, Gregor; Peters, Jan Hendrik

    2015-09-01

    The new device architectures and materials being introduced for sub-10nm manufacturing, combined with the complexity of multiple patterning and the need for improved hotspot detection strategies, have pushed current wafer inspection technologies to their limits. In parallel, gaps in mask inspection capability are growing as new generations of mask technologies are developed to support these sub-10nm wafer manufacturing requirements. In particular, the challenges associated with nanoimprint and extreme ultraviolet (EUV) mask inspection require new strategies that enable fast inspection at high sensitivity. The tradeoffs between sensitivity and throughput for optical and e-beam inspection are well understood. Optical inspection offers the highest throughput and is the current workhorse of the industry for both wafer and mask inspection. E-beam inspection offers the highest sensitivity but has historically lacked the throughput required for widespread adoption in the manufacturing environment. It is unlikely that continued incremental improvements to either technology will meet tomorrow's requirements, and therefore a new inspection technology approach is required; one that combines the high-throughput performance of optical with the high-sensitivity capabilities of e-beam inspection. To support the industry in meeting these challenges SUNY Poly SEMATECH has evaluated disruptive technologies that can meet the requirements for high volume manufacturing (HVM), for both the wafer fab [1] and the mask shop. High-speed massively parallel e-beam defect inspection has been identified as the leading candidate for addressing the key gaps limiting today's patterned defect inspection techniques. As of late 2014, SUNY Poly SEMATECH had completed a review, system analysis, and proof of concept evaluation of multiple e-beam technologies for defect inspection. A champion approach has been identified based on a multibeam technology from Carl Zeiss. This paper includes a discussion on the need for high-speed e-beam inspection and then provides initial imaging results from EUV masks and wafers from 61- and 91-beam demonstration systems. Progress towards high resolution and consistent intentional defect arrays (IDA) is also shown.

  7. Transparent Nanopore Cavity Arrays Enable Highly Parallelized Optical Studies of Single Membrane Proteins on Chip.

    PubMed

    Diederichs, Tim; Nguyen, Quoc Hung; Urban, Michael; Tampé, Robert; Tornow, Marc

    2018-06-13

    Membrane proteins involved in transport processes are key targets for pharmaceutical research and industry. Despite continuous improvements and new developments in the field of electrical readouts for the analysis of transport kinetics, a well-suited methodology for high-throughput characterization of single transporters with nonionic substrates and slow turnover rates is still lacking. Here, we report on a novel architecture of silicon chips with embedded nanopore microcavities, based on a silicon-on-insulator technology for high-throughput optical readouts. Arrays containing more than 14 000 inverted-pyramidal cavities of 50 femtoliter volumes and 80 nm circular pore openings were constructed via high-resolution electron-beam lithography in combination with reactive ion etching and anisotropic wet etching. These cavities feature both an optically transparent bottom and top cap. Atomic force microscopy analysis reveals an overall extremely smooth chip surface, particularly in the vicinity of the nanopores, which exhibit well-defined edges. Our unprecedented transparent chip design provides parallel and independent fluorescent readout of both cavities and buffer reservoir for unbiased single-transporter recordings. Spreading of large unilamellar vesicles with efficiencies up to 96% created nanopore-supported lipid bilayers, which are stable for more than 1 day. A high lipid mobility in the supported membrane was determined by fluorescence recovery after photobleaching. Flux kinetics of α-hemolysin were characterized at single-pore resolution with a rate constant of 0.96 ± 0.06 × 10⁻³ s⁻¹. Here, we deliver an ideal chip platform for pharmaceutical research, which features high parallelism and throughput, synergistically combined with single-transporter resolution.
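
    Rate constants like the one quoted are typically extracted by fitting a single-exponential model to each cavity's fluorescence time course; a minimal log-linear least-squares sketch with synthetic data (not the authors' analysis code):

    ```python
    import math

    def fit_rate_constant(times_s, intensities):
        # Model I(t) = I0 * exp(-k*t); fit ln(I) = ln(I0) - k*t by least squares.
        xs, ys = times_s, [math.log(i) for i in intensities]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return -slope  # rate constant k, in 1/s

    # Synthetic decay with k = 1e-3 1/s, sampled every 100 s for ~1 h.
    ts = [100.0 * i for i in range(36)]
    data = [math.exp(-1e-3 * t) for t in ts]
    print(f"k ~ {fit_rate_constant(ts, data):.2e} 1/s")
    ```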

  8. A simple dual online ultra-high pressure liquid chromatography system (sDO-UHPLC) for high throughput proteome analysis.

    PubMed

    Lee, Hangyeore; Mun, Dong-Gi; Bae, Jingi; Kim, Hokeun; Oh, Se Yeon; Park, Young Soo; Lee, Jae-Hyuk; Lee, Sang-Won

    2015-08-21

    We report a new and simple design of a fully automated dual-online ultra-high pressure liquid chromatography system. The system employs only two nano-volume switching valves (a two-position four-port valve and a two-position ten-port valve) that direct solvent flows from two binary nano-pumps for parallel operation of two analytical columns and two solid phase extraction (SPE) columns. Despite the simple design, the sDO-UHPLC offers many advantageous features that include a high duty cycle, back-flushing sample injection for fast and narrow-zone sample injection, online desalting, high separation resolution and high intra/inter-column reproducibility. This system was applied to analyze proteome samples not only in high-throughput deep proteome profiling experiments but also in high-throughput MRM experiments.

  9. High Throughput Optical Lithography by Scanning a Massive Array of Bowtie Aperture Antennas at Near-Field

    DTIC Science & Technology

    2015-11-03

    ...scale optical projection system powered by spatial light modulators, such as a digital micro-mirror device (DMD). Figure 4 shows the parallel lithography... [Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E. Scientific Reports 5:16192, DOI: 10.1038/srep16192] Optical lithography, the...

  10. Microscale High-Throughput Experimentation as an Enabling Technology in Drug Discovery: Application in the Discovery of (Piperidinyl)pyridinyl-1H-benzimidazole Diacylglycerol Acyltransferase 1 Inhibitors.

    PubMed

    Cernak, Tim; Gesmundo, Nathan J; Dykstra, Kevin; Yu, Yang; Wu, Zhicai; Shi, Zhi-Cai; Vachal, Petr; Sperbeck, Donald; He, Shuwen; Murphy, Beth Ann; Sonatore, Lisa; Williams, Steven; Madeira, Maria; Verras, Andreas; Reiter, Maud; Lee, Claire Heechoon; Cuff, James; Sherer, Edward C; Kuethe, Jeffrey; Goble, Stephen; Perrotto, Nicholas; Pinto, Shirly; Shen, Dong-Ming; Nargund, Ravi; Balkovec, James; DeVita, Robert J; Dreher, Spencer D

    2017-05-11

    Miniaturization and parallel processing play an important role in the evolution of many technologies. We demonstrate the application of miniaturized high-throughput experimentation methods to resolve synthetic chemistry challenges on the frontlines of a lead optimization effort to develop diacylglycerol acyltransferase (DGAT1) inhibitors. Reactions were performed on ∼1 mg scale using glass microvials providing a miniaturized high-throughput experimentation capability that was used to study a challenging SNAr reaction. The availability of robust synthetic chemistry conditions discovered in these miniaturized investigations enabled the development of structure-activity relationships that ultimately led to the discovery of soluble, selective, and potent inhibitors of DGAT1.

  11. On Data Transfers Over Wide-Area Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang

    Dedicated wide-area network connections are employed in big data and high-performance computing scenarios, since the absence of cross-traffic promises to make it easier to analyze and optimize data transfers over them. However, nonlinear transport dynamics and end-system complexity due to multi-core hosts and distributed file systems make these tasks surprisingly challenging. We present an overview of methods to analyze memory and disk file transfers using extensive measurements over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory transfers, we derive performance profiles of TCP and UDT throughput as a function of RTT, which show concave regions in contrast to entirely convex regions predicted by previous models. These highly desirable concave regions can be expanded by utilizing large buffers and more parallel flows. We also present Poincaré maps and Lyapunov exponents of TCP and UDT throughput traces that indicate complex throughput dynamics. For disk file transfers, we show that throughput can be optimized using a combination of parallel I/O and network threads under direct I/O mode. Our initial throughput measurements of Lustre filesystems mounted over long-haul connections using LNet routers show convex profiles indicative of I/O limits.

  12. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis

    PubMed Central

    2012-01-01

    Background Multiplexing has become the major limitation of next-generation sequencing (NGS) when applied to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Results Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously analyzed through the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated, and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation between different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. Conclusions By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be practically analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demands. PMID:22276739
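
    The demultiplexing arithmetic behind PBS is simple: a read is assigned to a library by its (forward, reverse) barcode pair, so F forward and R reverse barcodes address F × R libraries. A sketch with invented 4-nt barcodes and no error tolerance (real barcode sets differ):

    ```python
    from itertools import product

    FWD = ["AAAA", "CCCC", "GGGG", "TTTT"]                       # 4 forward
    REV = ["ACAC", "AGAG", "ATAT", "CACA",
           "CGCG", "CTCT", "GAGA", "GTGT"]                       # 8 reverse

    # 4 x 8 = 32 addressable libraries from only 12 physical barcodes.
    LIBRARY = {pair: i for i, pair in enumerate(product(FWD, REV))}

    def demultiplex(read, fwd_len=4, rev_len=4):
        # Assign a read to a library by its barcode pair; None models the
        # reads not assigned to both barcodes (~36% in the pilot runs).
        fwd, rev = read[:fwd_len], read[-rev_len:]
        return LIBRARY.get((fwd, rev))

    print(demultiplex("AAAA" + "ACGTACGTACGT" + "CGCG"))  # -> library 4
    ```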

  13. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.

    PubMed

    Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong

    2012-01-25

    Multiplexing has become the major limitation of next-generation sequencing (NGS) when applied to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously analyzed through the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated, and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation between different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be practically analyzed in parallel, so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demands.

  14. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times, depending on the model's size.
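
    The inter-task parallelization idea (filling vector lanes across many sequences decoded in lockstep, rather than within one sequence) can be sketched in NumPy, where the batch axis plays the role of the SIMD lanes; this is a schematic analogue, not the COPS SSE2 kernel or HMMER's profile model:

    ```python
    import numpy as np

    def viterbi_batch(obs, log_trans, log_emit, log_init):
        """Decode K observation sequences in lockstep (vectorized across tasks).

        obs: (K, T) symbol indices; log_trans: (S, S); log_emit: (S, A);
        log_init: (S,). Returns (K, T) most-likely state paths.
        """
        K, T = obs.shape
        V = log_init[None, :] + log_emit[:, obs[:, 0]].T          # (K, S)
        back = np.zeros((K, T, log_trans.shape[0]), dtype=np.int64)
        for t in range(1, T):
            cand = V[:, :, None] + log_trans[None, :, :]          # (K, S_prev, S)
            back[:, t] = cand.argmax(axis=1)
            V = cand.max(axis=1) + log_emit[:, obs[:, t]].T
        paths = np.zeros((K, T), dtype=np.int64)                  # backtrace
        paths[:, -1] = V.argmax(axis=1)
        for t in range(T - 1, 0, -1):
            paths[:, t - 1] = back[np.arange(K), t, paths[:, t]]
        return paths

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        S, A, K, T = 4, 6, 8, 32                  # states, symbols, batch, length
        lt = np.log(rng.dirichlet(np.ones(S), S))
        le = np.log(rng.dirichlet(np.ones(A), S))
        li = np.log(np.full(S, 1.0 / S))
        obs = rng.integers(0, A, size=(K, T))
        print(viterbi_batch(obs, lt, le, li)[0])
    ```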

  15. Optimizing SIEM Throughput on the Cloud Using Parallelization

    PubMed Central

    Alam, Masoom; Ihsan, Asif; Javaid, Qaisar; Khan, Abid; Manzoor, Jawad; Akhundzada, Adnan; Khan, M Khurram; Farooq, Sajid

    2016-01-01

    Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSPs), mostly hosting their applications on the Cloud, receive events at a very high rate that varies from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of OSTROM, a security framework built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational frameworks. We explain three architectures under which Esper can be used to process events and investigate the effect on throughput, memory, and CPU usage in each configuration. The results indicate that the performance of the engine is limited by the rate of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all the queries are processed by all the units shows the best results in terms of throughput, memory, and CPU usage. PMID:27851762

  16. An Overview of High-performance Parallel Big Data transfers over multiple network channels with Transport Layer Security (TLS) and TLS plus Perfect Forward Secrecy (PFS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R. A.

    This Technical Note provides an overview of high-performance parallel Big Data transfers, with and without encryption for data in transit, over multiple network channels. It shows that with the parallel approach it is feasible to carry out high-performance parallel "encrypted" Big Data transfers without serious impact on throughput, although other impacts, e.g., energy consumption, should still be investigated. It also explains our rationale for using a statistics-based approach to gain understanding from test results and to improve the system. The presentation is high-level in nature. Nevertheless, at the end we pose some questions and identify potentially fruitful directions for future work.

  17. A review of the theory, methods and recent applications of high-throughput single-cell droplet microfluidics

    NASA Astrophysics Data System (ADS)

    Lagus, Todd P.; Edd, Jon F.

    2013-03-01

    Most cell biology experiments are performed in bulk cell suspensions where cell secretions become diluted and mixed in a contiguous sample. Confinement of single cells to small, picoliter-sized droplets within a continuous phase of oil provides chemical isolation of each cell, creating individual microreactors where rare cell qualities are highlighted and otherwise undetectable signals can be concentrated to measurable levels. Recent work in microfluidics has yielded methods for the encapsulation of cells in aqueous droplets and hydrogels at kilohertz rates, creating the potential for millions of parallel single-cell experiments. However, commercial applications of high-throughput microdroplet generation and downstream sensing and actuation methods are still emerging for cells. Using fluorescence-activated cell sorting (FACS) as a benchmark for commercially available high-throughput screening, this focused review discusses the fluid physics of droplet formation, methods for cell encapsulation in liquids and hydrogels, sensors and actuators and notable biological applications of high-throughput single-cell droplet microfluidics.
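
    One quantitative constraint running through this literature is Poisson loading: when cells arrive at the encapsulation point at random, the fraction of droplets containing exactly k cells is fixed by the mean occupancy, which is why dilute loading (and hence high droplet throughput) is needed to obtain mostly single-cell droplets. A worked example:

    ```python
    import math

    def poisson(k, lam):
        # P(k cells in a droplet) when cells arrive at random with mean lam.
        return lam ** k * math.exp(-lam) / math.factorial(k)

    lam = 0.1  # dilute loading: ~1 cell per 10 droplets on average
    print(f"empty:    {poisson(0, lam):.3f}")                       # ~0.905
    print(f"single:   {poisson(1, lam):.3f}")                       # ~0.090
    print(f"multiple: {1 - poisson(0, lam) - poisson(1, lam):.4f}") # ~0.0047
    ```

    At this loading, roughly 90% of droplets are empty, which motivates both kilohertz generation rates and the inertial-ordering tricks discussed elsewhere in this list for beating the Poisson limit.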

  18. Massively Parallel Rogue Cell Detection Using Serial Time-Encoded Amplified Microscopy of Inertially Ordered Cells in High Throughput Flow

    DTIC Science & Technology

    2011-08-01

    further chemical analysis of the cells. While in our proof-of-concept demonstration, we showed high-throughput screening of budding yeast and... of 8.0 mW/cm² through the transparency mask for 90 seconds. The wafer was baked again at 95°C for 4 minutes, then developed in SU-8 developer... sonicated in isopropanol for 5 minutes, sonicated in deionized H2O for 5 minutes, and baked at 65°C for at least 30 minutes. Holes were punched

  19. Rapid determination of enantiomeric excess: a focus on optical approaches.

    PubMed

    Leung, Diana; Kang, Sung Ok; Anslyn, Eric V

    2012-01-07

    High-throughput screening (HTS) methods are becoming increasingly essential in discovering chiral catalysts or auxiliaries for asymmetric transformations due to the advent of parallel synthesis and combinatorial chemistry. Both parallel synthesis and combinatorial chemistry can lead to the exploration of a range of structural candidates and reaction conditions as a means to obtain the highest enantiomeric excess (ee) of a desired transformation. One current bottleneck in these approaches to asymmetric reactions is the determination of ee, which has led researchers to explore a wide range of HTS techniques. To be truly high-throughput, it has been proposed that a technique must analyse a thousand or more samples per day. Many of the current approaches to this goal are based on optical methods because they allow for a rapid determination of ee due to quick data collection and their parallel analysis capabilities. In this critical review, these techniques are discussed with respect to their advantages and drawbacks, and contrasted with chromatographic methods (180 references).

  20. High-throughput Titration of Luciferase-expressing Recombinant Viruses

    PubMed Central

    Garcia, Vanessa; Krishnan, Ramya; Davis, Colin; Batenchuk, Cory; Le Boeuf, Fabrice; Abdelbary, Hesham; Diallo, Jean-Simon

    2014-01-01

    Standard plaque assays to determine infectious viral titers can be time-consuming, are not amenable to a high volume of samples, and cannot be used with viruses that do not form plaques. As an alternative to plaque assays, we have developed a high-throughput titration method that allows for the simultaneous titration of a high volume of samples in a single day. This approach involves infection of the samples with a Firefly luciferase-tagged virus, transfer of the infected samples onto an appropriate permissive cell line, subsequent addition of luciferin, reading of the plates to obtain luminescence measurements, and finally conversion from luminescence to viral titers. The assessment of cytotoxicity using a metabolic viability dye can easily be incorporated into the workflow in parallel and provides valuable information in the context of a drug screen. This technique provides a reliable, high-throughput method to determine viral titers as an alternative to a standard plaque assay. PMID:25285536
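
    The final luminescence-to-titer conversion in such a workflow is typically done against a standard curve built from samples of known titer; a minimal log-log linear-fit sketch with invented numbers (the paper's actual conversion may differ):

    ```python
    import math

    def fit_standard_curve(known_titers, luminescence):
        # Fit log10(titer) = slope * log10(luminescence) + intercept.
        xs = [math.log10(l) for l in luminescence]
        ys = [math.log10(t) for t in known_titers]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx

    def titer_from_luminescence(lum, slope, intercept):
        return 10 ** (slope * math.log10(lum) + intercept)

    # Hypothetical standards: known titers (PFU/mL) vs. luminescence counts.
    curve = fit_standard_curve([1e3, 1e4, 1e5, 1e6], [2e2, 2e3, 2e4, 2e5])
    print(f"{titer_from_luminescence(6e3, *curve):.3g} PFU/mL")  # -> 3e+04
    ```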

  1. Thin-Film Material Science and Processing | Materials Science | NREL

    Science.gov Websites

    A prime example of this research is thin-film photovoltaics (PV). Thin films are important because ... [researchers] have developed a quantitative high-throughput technique that can measure many barriers in parallel with ...

  2. From genes to protein mechanics on a chip.

    PubMed

    Otten, Marcus; Ott, Wolfgang; Jobst, Markus A; Milles, Lukas F; Verdorfer, Tobias; Pippig, Diana A; Nash, Michael A; Gaub, Hermann E

    2014-11-01

    Single-molecule force spectroscopy enables mechanical testing of individual proteins, but low experimental throughput limits the ability to screen constructs in parallel. We describe a microfluidic platform for on-chip expression, covalent surface attachment and measurement of single-molecule protein mechanical properties. A dockerin tag on each protein molecule allowed us to perform thousands of pulling cycles using a single cohesin-modified cantilever. The ability to synthesize and mechanically probe protein libraries enables high-throughput mechanical phenotyping.

  3. Bifrost: a Modular Python/C++ Framework for Development of High-Throughput Data Analysis Pipelines

    NASA Astrophysics Data System (ADS)

    Cranmer, Miles; Barsdell, Benjamin R.; Price, Danny C.; Garsden, Hugh; Taylor, Gregory B.; Dowell, Jayce; Schinzel, Frank; Costa, Timothy; Greenhill, Lincoln J.

    2017-01-01

    Large radio interferometers have data rates that render long-term storage of raw correlator data infeasible, thus motivating development of real-time processing software. For high-throughput applications, processing pipelines are challenging to design and implement. Motivated by science efforts with the Long Wavelength Array, we have developed Bifrost, a novel Python/C++ framework that eases the development of high-throughput data analysis software by packaging algorithms as black box processes in a directed graph. This strategy to modularize code allows astronomers to create parallelism without code adjustment. Bifrost uses CPU/GPU ’circular memory’ data buffers that enable ready introduction of arbitrary functions into the processing path for ’streams’ of data, and allow pipelines to automatically reconfigure in response to astrophysical transient detection or input of new observing settings. We have deployed and tested Bifrost at the latest Long Wavelength Array station, in Sevilleta National Wildlife Refuge, NM, where it handles throughput exceeding 10 Gbps per CPU core.
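
    The sketch below is not Bifrost's API; it only illustrates the pattern the abstract describes, i.e., algorithms wrapped as black-box stages connected by bounded buffers, so that parallelism comes from the pipeline graph rather than from changes to the algorithm code:

    ```python
    import queue, threading

    def block(fn, inq, outq):
        """Run fn as a black-box pipeline stage: read items from inq,
        write results to outq; None is the end-of-stream sentinel."""
        def run():
            while True:
                item = inq.get()
                if item is None:
                    if outq is not None:
                        outq.put(None)
                    return
                result = fn(item)
                if outq is not None:
                    outq.put(result)
        t = threading.Thread(target=run)
        t.start()
        return t

    # Toy three-stage graph: source -> square -> sink (print).
    q1 = queue.Queue(maxsize=4)  # bounded queues stand in for ring buffers
    q2 = queue.Queue(maxsize=4)
    threads = [block(lambda x: x * x, q1, q2), block(print, q2, None)]
    for x in range(5):
        q1.put(x)
    q1.put(None)
    for t in threads:
        t.join()
    ```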

  4. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. This optimization, together with an improved loading of the emission scores, achieves a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder on DNA and protein datasets, proving to be a competitive alternative implementation. Always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and a speedup of up to two times, depending on the model's size. PMID:24884826
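
    For orientation, a plain scalar, log-space Viterbi decoder is sketched below; the max over predecessor states in the inner loop is the operation that striped and inter-task SIMD schemes vectorize. This is the generic textbook recurrence, not HMMER's implementation, and the coin-model numbers are illustrative:

    ```python
    def viterbi(obs, states, log_start, log_trans, log_emit):
        """Generic log-space Viterbi decoding of an observation sequence."""
        V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
        back = []
        for o in obs[1:]:
            col, ptr = {}, {}
            for s in states:
                prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
                ptr[s] = prev
                col[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s][o]
            V.append(col)
            back.append(ptr)
        last = max(states, key=lambda s: V[-1][s])
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path)), V[-1][last]

    # Two-state fair/loaded coin model (log-probabilities, approximate).
    states = ("F", "L")
    log_start = {"F": -0.69, "L": -0.69}
    log_trans = {"F": {"F": -0.22, "L": -1.61}, "L": {"F": -1.61, "L": -0.22}}
    log_emit = {"F": {"H": -0.69, "T": -0.69}, "L": {"H": -0.36, "T": -1.20}}
    print(viterbi("HHTH", states, log_start, log_trans, log_emit))
    ```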

  5. Parallelization of Catalytic Packed-Bed Microchannels with Pressure-Drop Microstructures for Gas-Liquid Multiphase Reactions

    NASA Astrophysics Data System (ADS)

    Murakami, Sunao; Ohtaki, Kenichiro; Matsumoto, Sohei; Inoue, Tomoya

    2012-06-01

    High-throughput and stable operation is required to achieve the practical production of chemicals with microreactors. However, flow maldistribution among the parallel microchannels has been a critical problem for the productive use of multichannel microreactors under multiphase flow conditions. In this study, we designed and fabricated a new glass four-channel catalytic packed-bed microreactor for the scale-up of gas-liquid multiphase chemical reactions. We embedded microstructures that generate high pressure losses at the upstream side of each packed bed, and experimentally confirmed the efficacy of these microstructures in decreasing the maldistribution of the gas-liquid flow to the parallel microchannels.

  6. A catalog of putative adverse outcome pathways (AOPs) that ...

    EPA Pesticide Factsheets

    A number of putative AOPs for several distinct molecular initiating events (MIEs) of thyroid disruption have been formulated for amphibian metamorphosis and fish swim bladder inflation. These have been entered into the AOP knowledgebase on the OECD WIKI. The Endocrine Disruptor Screening Program (EDSP) has been actively advancing high-throughput screening for chemical activity toward estrogen, androgen and thyroid targets. However, it has recently been identified that coverage for thyroid-related targets is lagging behind estrogen and androgen assay coverage. As thyroid-related medium- to high-throughput assays are actively being developed for inclusion in the ToxCast chemical screening program, a parallel effort is underway to characterize putative adverse outcome pathways (AOPs) specific to these thyroid-related targets. This effort is intended to provide biological and ecological context that will enhance the utility of ToxCast high-throughput screening data for hazard identification.

  7. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor,' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel, so sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200K gates.

  8. Assaying gene function by growth competition experiment.

    PubMed

    Merritt, Joshua; Edwards, Jeremy S

    2004-07-01

    High-throughput screening and analysis is one of the emerging paradigms in biotechnology. In particular, high-throughput methods are essential in the field of functional genomics because of the vast amount of data generated in recent and ongoing genome sequencing efforts. In this report we discuss integrated functional analysis methodologies which incorporate both a growth competition component and a highly parallel assay used to quantify results of the growth competition. Several applications of the two most widely used technologies in the field, i.e., transposon mutagenesis and deletion strain library growth competition, and individual applications of several developing or less widely reported technologies are presented.

  9. Targeted Capture and High-Throughput Sequencing Using Molecular Inversion Probes (MIPs).

    PubMed

    Cantsilieris, Stuart; Stessman, Holly A; Shendure, Jay; Eichler, Evan E

    2017-01-01

    Molecular inversion probes (MIPs) in combination with massively parallel DNA sequencing represent a versatile, yet economical tool for targeted sequencing of genomic DNA. Several thousand genomic targets can be selectively captured using long oligonucleotides containing unique targeting arms and universal linkers. The ability to append sequencing adaptors and sample-specific barcodes allows large-scale pooling and subsequent high-throughput sequencing at relatively low cost per sample. Here, we describe a "wet bench" protocol detailing the capture and subsequent sequencing of >2000 genomic targets from 192 samples, representative of a single lane on the Illumina HiSeq 2000 platform.

  10. BarraCUDA - a fast short read sequence aligner using graphics processing units

    PubMed Central

    2012-01-01

    Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computation-intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497

  11. High-Throughput, Motility-Based Sorter for Microswimmers such as C. elegans

    PubMed Central

    Yuan, Jinzhou; Zhou, Jessie; Raizen, David M.; Bau, Haim H.

    2015-01-01

    Animal motility varies with genotype, disease, aging, and environmental conditions. In many studies, it is desirable to carry out high throughput motility-based sorting to isolate rare animals for, among other things, forward genetic screens to identify genetic pathways that regulate phenotypes of interest. Many commonly used screening processes are labor-intensive, lack sensitivity, and require extensive investigator training. Here, we describe a sensitive, high throughput, automated, motility-based method for sorting nematodes. Our method is implemented in a simple microfluidic device capable of sorting thousands of animals per hour per module, and is amenable to parallelism. The device successfully enriches for known C. elegans motility mutants. Furthermore, using this device, we isolate low-abundance mutants capable of suppressing the somnogenic effects of the flp-13 gene, which regulates C. elegans sleep. By performing genetic complementation tests, we demonstrate that our motility-based sorting device efficiently isolates mutants for the same gene identified by tedious visual inspection of behavior on an agar surface. Therefore, our motility-based sorter is capable of performing high throughput gene discovery approaches to investigate fundamental biological processes. PMID:26008643

  12. X-ray transparent microfluidic chips for high-throughput screening and optimization of in meso membrane protein crystallization

    PubMed Central

    Schieferstein, Jeremy M.; Pawate, Ashtamurthy S.; Wan, Frank; Sheraden, Paige N.; Broecker, Jana; Ernst, Oliver P.; Gennis, Robert B.

    2017-01-01

    Elucidating the function of membrane proteins ultimately requires atomic-resolution structures, as determined most commonly by X-ray crystallography. Many high-impact membrane protein structures have resulted from advanced techniques such as in meso crystallization that present technical difficulties for the set-up and scale-out of high-throughput crystallization experiments. In prior work, we designed a novel, low-throughput X-ray transparent microfluidic device that automated the mixing of protein and lipid by diffusion for in meso crystallization trials. Here, we report X-ray transparent microfluidic devices for high-throughput crystallization screening and optimization that overcome the limitations of scale and demonstrate their application to the crystallization of several membrane proteins. Two complementary chips are presented: (1) a high-throughput screening chip to test 192 crystallization conditions in parallel using as little as 8 nl of membrane protein per well and (2) a crystallization optimization chip to rapidly optimize preliminary crystallization hits through fine-gradient re-screening. We screened three membrane proteins for new in meso crystallization conditions, identifying several preliminary hits that we tested for X-ray diffraction quality. Further, we identified and optimized the crystallization condition for a photosynthetic reaction center mutant and solved its structure to a resolution of 3.5 Å. PMID:28469762

  13. Role of APOE Isoforms in the Pathogenesis of TBI Induced Alzheimer’s Disease

    DTIC Science & Technology

    2015-10-01

    global deletion, APOE targeted replacement, complex breeding, CCI model optimization, mRNA library generation, high throughput massive parallel ... ATP binding cassette transporter A1 (ABCA1) is a lipid transporter that controls the generation of HDL in plasma and ApoE-containing lipoproteins in ... parallel sequencing, mRNA-seq, behavioral testing, memory impairment, recovery. ... Overall Project Summary: During the reported period, we have been able

  14. AdiosStMan: Parallelizing Casacore Table Data System using Adaptive IO System

    NASA Astrophysics Data System (ADS)

    Wang, R.; Harris, C.; Wicenec, A.

    2016-07-01

    In this paper, we investigate the Casacore Table Data System (CTDS) used in the casacore and CASA libraries, and methods to parallelize it. CTDS provides a storage manager plugin mechanism for third-party developers to design and implement their own CTDS storage managers. With this in mind, we looked into various storage backend techniques that could enable parallel I/O for CTDS through new storage managers. After carrying out benchmarks showing the excellent parallel I/O throughput of the Adaptive IO System (ADIOS), we implemented an ADIOS-based parallel CTDS storage manager. We then applied the CASA MSTransform frequency split task to verify the ADIOS storage manager, and ran a series of performance tests to examine the I/O throughput in a massively parallel scenario.

  15. High-throughput strategies for the discovery and engineering of enzymes for biocatalysis.

    PubMed

    Jacques, Philippe; Béchet, Max; Bigan, Muriel; Caly, Delphine; Chataigné, Gabrielle; Coutte, François; Flahaut, Christophe; Heuson, Egon; Leclère, Valérie; Lecouturier, Didier; Phalip, Vincent; Ravallec, Rozenn; Dhulster, Pascal; Froidevaux, Rénato

    2017-02-01

    Innovations in enzyme discovery impact a wide range of industries for which biocatalysis and biotransformations represent a great challenge, e.g., the food, polymer, and chemical industries. Key tools and technologies, such as bioinformatics tools to guide mutant library design, molecular biology tools to create mutant libraries, microfluidics/microplates, parallel miniscale bioreactors, mass spectrometry technologies for high-throughput screening methods, and experimental design tools for screening and optimization, advance the discovery, development and implementation of enzymes and whole cells in (bio)processes. These technological innovations are accompanied by the development and implementation of clean and sustainable integrated processes to meet the growing needs of the chemical, pharmaceutical, environmental and biorefinery industries. This review gives an overview of the benefits of the high-throughput screening approach, from the discovery and engineering of biocatalysts to cell culture for optimizing their production in integrated processes and their extraction/purification.

  16. High throughput optical lithography by scanning a massive array of bowtie aperture antennas at near-field

    PubMed Central

    Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.

    2015-01-01

    Optical lithography, the enabling process for defining features, has been widely used in the semiconductor industry and many other nanotechnology applications. Advances in nanotechnology require the development of high-throughput optical lithography capabilities to overcome the optical diffraction limit and meet ever-decreasing device dimensions. We report recent experimental advances that scale up diffraction-unlimited optical lithography massively, using the near-field nanolithography capabilities of bowtie apertures. A record number of near-field optical elements, an array of 1,024 bowtie antenna apertures, is simultaneously employed to generate a large number of patterns, with the working distances controlled over the entire array by an optical gap metrology system. Our experimental results reiterate the ability of massively parallel near-field devices to achieve high-throughput optical nanolithography, which is promising for many important nanotechnology applications such as computation, data storage, communication, and energy. PMID:26525906

  17. High-throughput cultivation and screening platform for unicellular phototrophs.

    PubMed

    Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus

    2014-09-16

    High-throughput cultivation and screening methods allow parallel, miniaturized and cost-efficient processing of many samples. These methods, however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration and MALDI-TOF-MS, as well as a novel vitality measurement protocol, has already been established and can be run during cultivation. Growth measurements can be used as inputs for the system to trigger periodic automatic dilutions, allowing semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid- and long-term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established, and the platform is easily expandable both in quality, i.e., with further parameters to screen for additional targets, and in quantity, i.e., the size or number of processed samples.
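
    The growth-triggered dilution is simple mass balance. A sketch of the per-well rule such a system might apply; the threshold, target and volumes are hypothetical, not the authors' settings:

    ```python
    def dilution_volume_ml(od_measured: float, od_target: float,
                           culture_volume_ml: float) -> float:
        """Volume of culture to replace with fresh medium so that the
        well returns to the target OD (simple mass balance)."""
        if od_measured <= od_target:
            return 0.0
        return culture_volume_ml * (1.0 - od_target / od_measured)

    # Hypothetical well: 2.0 ml at OD 0.8, diluted back to OD 0.2.
    print(f"exchange {dilution_volume_ml(0.8, 0.2, 2.0):.2f} ml")  # 1.50 ml
    ```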

  18. Multi-petascale highly efficient parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100-petaflop scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate and at the same time supports DMA functionality, allowing for parallel message-passing.

  19. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  20. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  1. A high-throughput next-generation sequencing-based method for detecting the mutational fingerprint of carcinogens

    PubMed Central

    Besaratinia, Ahmad; Li, Haiqing; Yoon, Jae-In; Zheng, Albert; Gao, Hanlin; Tommasi, Stella

    2012-01-01

    Many carcinogens leave a unique mutational fingerprint in the human genome. These mutational fingerprints manifest as specific types of mutations often clustering at certain genomic loci in tumor genomes from carcinogen-exposed individuals. To develop a high-throughput method for detecting the mutational fingerprint of carcinogens, we have devised a cost-, time- and labor-effective strategy, in which the widely used transgenic Big Blue® mouse mutation detection assay is made compatible with the Roche/454 Genome Sequencer FLX Titanium next-generation sequencing technology. As proof of principle, we have used this novel method to establish the mutational fingerprints of three prominent carcinogens with varying mutagenic potencies, including sunlight ultraviolet radiation, 4-aminobiphenyl and secondhand smoke that are known to be strong, moderate and weak mutagens, respectively. For verification purposes, we have compared the mutational fingerprints of these carcinogens obtained by our newly developed method with those obtained by parallel analyses using the conventional low-throughput approach, that is, standard mutation detection assay followed by direct DNA sequencing using a capillary DNA sequencer. We demonstrate that this high-throughput next-generation sequencing-based method is highly specific and sensitive to detect the mutational fingerprints of the tested carcinogens. The method is reproducible, and its accuracy is comparable with that of the currently available low-throughput method. In conclusion, this novel method has the potential to move the field of carcinogenesis forward by allowing high-throughput analysis of mutations induced by endogenous and/or exogenous genotoxic agents. PMID:22735701
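
    A mutational fingerprint is, at its core, a normalized tally of substitution classes. A minimal sketch of that tally using the standard pyrimidine-centered convention; the input calls are invented toy data, not results from this study:

    ```python
    from collections import Counter

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def canonical(ref: str, alt: str) -> str:
        """Collapse a substitution so the reference base is C or T."""
        if ref in "GA":
            ref, alt = ref.translate(COMPLEMENT), alt.translate(COMPLEMENT)
        return f"{ref}>{alt}"

    def spectrum(mutations):
        """Fraction of each substitution class among (ref, alt) calls."""
        counts = Counter(canonical(r, a) for r, a in mutations)
        total = sum(counts.values())
        return {k: round(v / total, 3) for k, v in counts.items()}

    # UV-like toy data: an excess of C>T (and G>A, its complement).
    calls = [("C", "T")] * 60 + [("G", "A")] * 25 + [("T", "A")] * 10 + [("C", "A")] * 5
    print(spectrum(calls))  # C>T dominates, as expected for a UV fingerprint
    ```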

  2. A high-throughput next-generation sequencing-based method for detecting the mutational fingerprint of carcinogens.

    PubMed

    Besaratinia, Ahmad; Li, Haiqing; Yoon, Jae-In; Zheng, Albert; Gao, Hanlin; Tommasi, Stella

    2012-08-01

    Many carcinogens leave a unique mutational fingerprint in the human genome. These mutational fingerprints manifest as specific types of mutations often clustering at certain genomic loci in tumor genomes from carcinogen-exposed individuals. To develop a high-throughput method for detecting the mutational fingerprint of carcinogens, we have devised a cost-, time- and labor-effective strategy, in which the widely used transgenic Big Blue mouse mutation detection assay is made compatible with the Roche/454 Genome Sequencer FLX Titanium next-generation sequencing technology. As proof of principle, we have used this novel method to establish the mutational fingerprints of three prominent carcinogens with varying mutagenic potencies, including sunlight ultraviolet radiation, 4-aminobiphenyl and secondhand smoke that are known to be strong, moderate and weak mutagens, respectively. For verification purposes, we have compared the mutational fingerprints of these carcinogens obtained by our newly developed method with those obtained by parallel analyses using the conventional low-throughput approach, that is, standard mutation detection assay followed by direct DNA sequencing using a capillary DNA sequencer. We demonstrate that this high-throughput next-generation sequencing-based method is highly specific and sensitive to detect the mutational fingerprints of the tested carcinogens. The method is reproducible, and its accuracy is comparable with that of the currently available low-throughput method. In conclusion, this novel method has the potential to move the field of carcinogenesis forward by allowing high-throughput analysis of mutations induced by endogenous and/or exogenous genotoxic agents.

  3. Improving Data Transfer Throughput with Direct Search Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar

    2016-01-01

    Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists of using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and lightweight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that, compared with user default settings, our direct search methods can achieve up to a 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source endpoints, a common scenario at high-performance computing facilities.
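
    A hedged sketch of the core idea: choose the stream count by derivative-free search over measured throughput. Here measure_throughput is a stand-in for an actual timed transfer, and its toy unimodal response, the bounds and the step schedule are invented; this illustrates direct search in general, not the authors' exact algorithm:

    ```python
    def measure_throughput(n_streams: int) -> float:
        """Stand-in for a real timed transfer with n parallel streams."""
        return 144.0 - (n_streams - 12) ** 2  # toy unimodal response

    def direct_search(lo: int = 1, hi: int = 64, step: int = 8):
        """Probe neighbours at the current step size, move to the best,
        and halve the step when no neighbour improves."""
        n = lo
        best = measure_throughput(n)
        while step >= 1:
            candidates = [c for c in (n - step, n + step) if lo <= c <= hi]
            scores = {c: measure_throughput(c) for c in candidates}
            c_best = max(scores, key=scores.get)
            if scores[c_best] > best:
                n, best = c_best, scores[c_best]
            else:
                step //= 2
        return n, best

    print(direct_search())  # converges to 12 streams on the toy response
    ```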

  4. Ultra-high throughput detection of single cell β-galactosidase activity in droplets using micro-optical lens array

    NASA Astrophysics Data System (ADS)

    Lim, Jiseok; Vrignon, Jérémy; Gruner, Philipp; Karamitros, Christos S.; Konrad, Manfred; Baret, Jean-Christophe

    2013-11-01

    We demonstrate the use of a hybrid microfluidic-micro-optical system for the screening of enzymatic activity at the single-cell level. Escherichia coli β-galactosidase activity is revealed by a fluorogenic assay in 100 pl droplets. Individual droplets containing cells are screened by measuring their fluorescence signal using a high-speed camera. The measurement is parallelized over 100 channels equipped with microlenses and analyzed by image processing. A reinjection rate of 1 ml of emulsion per minute was reached, corresponding to more than 10^5 droplets per second (1 ml/min of 100 pl droplets is 10^7 droplets per minute, or roughly 1.7 x 10^5 per second), an analytical throughput larger than that obtained using flow cytometry.

  5. High-throughput sequencing of forensic genetic samples using punches of FTA cards with buccal swabs.

    PubMed

    Kampmann, Marie-Louise; Buchard, Anders; Børsting, Claus; Morling, Niels

    2016-01-01

    Here, we demonstrate that punches from buccal swab samples preserved on FTA cards can be used for high-throughput DNA sequencing, also known as massively parallel sequencing (MPS). We typed 44 reference samples with the HID-Ion AmpliSeq Identity Panel using washed 1.2 mm punches from FTA cards with buccal swabs and compared the results with those obtained with DNA extracted using the EZ1 DNA Investigator Kit. Concordant profiles were obtained for all samples. Our protocol includes simple punch, wash, and PCR steps, reducing cost and hands-on time in the laboratory. Furthermore, it facilitates automation of DNA sequencing.

  6. From Genes to Protein Mechanics on a Chip

    PubMed Central

    Milles, Lukas F.; Verdorfer, Tobias; Pippig, Diana A.; Nash, Michael A.; Gaub, Hermann E.

    2014-01-01

    Single-molecule force spectroscopy enables mechanical testing of individual proteins; however, low experimental throughput limits the ability to screen constructs in parallel. We describe a microfluidic platform for on-chip protein expression and measurement of single-molecule mechanical properties. We constructed microarrays of proteins covalently attached to a chip surface, and found that a single cohesin-modified cantilever that bound to the terminal dockerin tag of each protein remained stable over thousands of pulling cycles. The ability to synthesize and mechanically probe protein libraries presents new opportunities for high-throughput mechanical phenotyping. PMID:25194847

  7. Preparation of Protein Samples for NMR Structure, Function, and Small Molecule Screening Studies

    PubMed Central

    Acton, Thomas B.; Xiao, Rong; Anderson, Stephen; Aramini, James; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Kornhaber, Gregory; Lau, Jessica; Lee, Dong Yup; Liu, Gaohua; Maglaqui, Melissa; Ma, Lichung; Mao, Lei; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Shastry, Ritu; Swapna, G.V.T.; Tang, Yeufeng; Tong, Saichiu; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.

    2014-01-01

    In this chapter, we concentrate on the production of high quality protein samples for NMR studies. In particular, we provide an in-depth description of recent advances in the production of NMR samples and their synergistic use with recent advancements in NMR hardware. We describe the protein production platform of the Northeast Structural Genomics Consortium, and outline our high-throughput strategies for producing high quality protein samples for nuclear magnetic resonance (NMR) studies. Our strategy is based on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems and isotope enrichment in minimal media. We describe 96-well ligation-independent cloning and analytical expression systems, parallel preparative scale fermentation, and high-throughput purification protocols. The 6X-His affinity tag allows for a similar two-step purification procedure implemented in a parallel high-throughput fashion that routinely results in purity levels sufficient for NMR studies (> 97% homogeneity). Using this platform, the protein open reading frames of over 17,500 different targeted proteins (or domains) have been cloned as over 28,000 constructs. Nearly 5,000 of these proteins have been purified to homogeneity in tens of milligram quantities (see Summary Statistics, http://nesg.org/statistics.html), resulting in more than 950 new protein structures, including more than 400 NMR structures, deposited in the Protein Data Bank. The Northeast Structural Genomics Consortium pipeline has been effective in producing protein samples of both prokaryotic and eukaryotic origin. Although this paper describes our entire pipeline for producing isotope-enriched protein samples, it focuses on the major updates introduced during the last 5 years (Phase 2 of the National Institute of General Medical Sciences Protein Structure Initiative). Our advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are suitable for implementation in a large individual laboratory or by a small group of collaborating investigators for structural biology, functional proteomics, ligand screening and structural genomics research. PMID:21371586

  8. Agarose droplet microfluidics for highly parallel and efficient single molecule emulsion PCR.

    PubMed

    Leng, Xuefei; Zhang, Wenhua; Wang, Chunming; Cui, Liang; Yang, Chaoyong James

    2010-11-07

    An agarose droplet method was developed for highly parallel and efficient single-molecule emulsion PCR. The method capitalizes on the unique thermoresponsive sol-gel switching property of agarose for highly efficient DNA amplification and amplicon trapping. Uniform agarose solution droplets generated via a microfluidic chip serve as robust and inert nanolitre PCR reactors for single-copy DNA molecule amplification. After PCR, the agarose droplets are gelated to form agarose beads, trapping all amplicons in each reactor to maintain the monoclonality of each droplet. This method does not require co-encapsulation of primer-labeled microbeads, allows high-throughput generation of uniform droplets and enables high PCR efficiency, making it a promising platform for many single-copy genetic studies.

  9. Continuous inertial microparticle and blood cell separation in straight channels with local microstructures.

    PubMed

    Wu, Zhenlong; Chen, Yu; Wang, Moran; Chung, Aram J

    2016-02-07

    Fluid inertia, which has conventionally been neglected in microfluidics, has been gaining much attention for particle and cell manipulation because inertia-based methods inherently provide simple, passive, precise and high-throughput characteristics. In particular, the inertial approach has been applied to blood separation for various biomedical research studies, mainly using spiral microchannels. For higher throughput, parallelization is essential; however, it is difficult to realize using spiral channels because of their large two-dimensional layouts. In this work, we present a novel inertial platform for continuous sheathless particle and blood cell separation in straight microchannels containing microstructures. Microstructures within straight channels exert secondary flows to manipulate particle positions, similar to Dean flow in curved channels but with higher controllability. Through a balance between the inertial lift force and the microstructure-induced secondary flow, we deterministically position microspheres and cells based on their sizes to be separated downstream. Using our inertial platform, we successfully sorted microparticles and fractionated blood cells with high separation efficiencies, high purities and high throughputs. The inertial separation platform developed here can process diluted blood with a throughput of 10.8 mL min^-1 via radially arrayed single channels with one inlet and two rings of outlets.

  10. Multi-step high-throughput conjugation platform for the development of antibody-drug conjugates.

    PubMed

    Andris, Sebastian; Wendeler, Michaela; Wang, Xiangyang; Hubbuch, Jürgen

    2018-07-20

    Antibody-drug conjugates (ADCs) form a rapidly growing class of biopharmaceuticals which attracts a lot of attention throughout the industry due to its high potential for cancer therapy. They combine the specificity of a monoclonal antibody (mAb) and the cell-killing capacity of highly cytotoxic small molecule drugs. Site-specific conjugation approaches involve a multi-step process for covalent linkage of antibody and drug via a linker. Despite the range of parameters that have to be investigated, high-throughput methods are scarcely used so far in ADC development. In this work an automated high-throughput platform for a site-specific multi-step conjugation process on a liquid-handling station is presented by use of a model conjugation system. A high-throughput solid-phase buffer exchange was successfully incorporated for reagent removal by utilization of a batch cation exchange step. To ensure accurate screening of conjugation parameters, an intermediate UV/Vis-based concentration determination was established including feedback to the process. For conjugate characterization, a high-throughput compatible reversed-phase chromatography method with a runtime of 7 min and no sample preparation was developed. Two case studies illustrate the efficient use for mapping the operating space of a conjugation process. Due to the degree of automation and parallelization, the platform is capable of significantly reducing process development efforts and material demands and shorten development timelines for antibody-drug conjugates. Copyright © 2018 Elsevier B.V. All rights reserved.
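
    The intermediate UV/Vis concentration determination rests on the Beer-Lambert law, A = epsilon * c * l. A sketch of the conversion with hypothetical antibody-like constants (not values from this work):

    ```python
    def concentration_mg_ml(a280: float, ext_coeff_per_m_cm: float,
                            mw_g_mol: float, path_cm: float = 1.0) -> float:
        """Beer-Lambert: c = A / (eps * l) in mol/l; times MW gives g/l,
        which is numerically equal to mg/ml."""
        return a280 / (ext_coeff_per_m_cm * path_cm) * mw_g_mol

    # Hypothetical IgG-like values: eps(280 nm) = 210,000 /(M cm), MW = 148 kDa.
    print(f"{concentration_mg_ml(0.85, 210_000, 148_000):.2f} mg/ml")  # ~0.60
    ```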

  11. Genome sequencing in microfabricated high-density picolitre reactors.

    PubMed

    Margulies, Marcel; Egholm, Michael; Altman, William E; Attiya, Said; Bader, Joel S; Bemben, Lisa A; Berka, Jan; Braverman, Michael S; Chen, Yi-Ju; Chen, Zhoutao; Dewell, Scott B; Du, Lei; Fierro, Joseph M; Gomes, Xavier V; Godwin, Brian C; He, Wen; Helgesen, Scott; Ho, Chun Heen; Ho, Chun He; Irzyk, Gerard P; Jando, Szilveszter C; Alenquer, Maria L I; Jarvie, Thomas P; Jirage, Kshama B; Kim, Jong-Bum; Knight, James R; Lanza, Janna R; Leamon, John H; Lefkowitz, Steven M; Lei, Ming; Li, Jing; Lohman, Kenton L; Lu, Hong; Makhijani, Vinod B; McDade, Keith E; McKenna, Michael P; Myers, Eugene W; Nickerson, Elizabeth; Nobile, John R; Plant, Ramona; Puc, Bernard P; Ronan, Michael T; Roth, George T; Sarkis, Gary J; Simons, Jan Fredrik; Simpson, John W; Srinivasan, Maithreyan; Tartaro, Karrie R; Tomasz, Alexander; Vogt, Kari A; Volkmer, Greg A; Wang, Shally H; Wang, Yong; Weiner, Michael P; Yu, Pengguang; Begley, Richard F; Rothberg, Jonathan M

    2005-09-15

    The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.

  12. History, applications, and challenges of immune repertoire research.

    PubMed

    Liu, Xiao; Wu, Jinghua

    2018-02-27

    The diversity of T and B cells in terms of their receptor sequences is huge in the vertebrate immune system and provides broad protection against the vast diversity of pathogens. The immune repertoire is defined as the sum of the T cell receptors and B cell receptors (also named immunoglobulins) that make up the organism's adaptive immune system. Before the emergence of high-throughput sequencing, studies of the immune repertoire were limited by underdeveloped methodologies, since low-throughput tools could not capture the whole picture. Massively parallel sequencing technology is perfectly suited to research on the immune repertoire. In this article, we review the history of immune repertoire studies, in terms of both technologies and research applications. In particular, we discuss several challenges in this field and highlight efforts to develop potential solutions in the era of high-throughput sequencing of the immune repertoire.

  13. Quantum-Dot-Based Electrochemical Immunoassay for High-Throughput Screening of the Prostate-Specific Antigen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jun; Liu, Guodong; Wu, Hong

    2008-01-01

    In this paper, we demonstrate an electrochemical high-throughput sensing platform for simple, sensitive detection of PSA based on quantum dot (QD) labels. This sensing platform uses a microplate for immunoreactions and disposable screen-printed electrodes (SPE) for electrochemical stripping analysis of metal ions released from the QD labels. With the 96-well microplate, capture antibodies are conveniently immobilized on the well surface, and the immunoreaction process is easily controlled. The sandwich complexes formed on the well surface are also easily isolated from the reaction solutions. In particular, a microplate-based electrochemical assay makes it feasible to conduct parallel analysis of several samples or multiple protein markers. This assay offers a number of advantages: (1) simplicity and cost-effectiveness; (2) high sensitivity; (3) the capability to sense multiple samples or targets in parallel; and (4) a potentially portable device with an SPE array implanted in the microplate. The PSA assay is sensitive because it uses two amplification processes: (1) QDs serve as labels that enhance the electrical signal, since the secondary antibodies are linked to QDs containing a large number of metal atoms, and (2) electrochemical stripping analysis provides inherent signal amplification, since preconcentration of the metal ions onto the electrode surface amplifies the electrical signal. The high sensitivity of this method, stemming from dual signal amplification via QD labels and preconcentration, therefore allows low concentration levels to be detected from small sample volumes. Thus, this QD-based electrochemical detection approach offers a simple, rapid, cost-effective, and high-throughput assay of PSA.

  14. A parallel genome-wide RNAi screening strategy to identify host proteins important for entry of Marburg virus and H5N1 influenza virus.

    PubMed

    Cheng, Han; Koning, Katie; O'Hearn, Aileen; Wang, Minxiu; Rumschlag-Booms, Emily; Varhegyi, Elizabeth; Rong, Lijun

    2015-11-24

    Genome-wide RNAi screening has been widely used to identify host proteins involved in the replication and infection of different viruses, and numerous host factors have been implicated in the replication cycles of these viruses, demonstrating the power of this approach. However, discrepancies in target identification for the same viruses by different groups suggest that high-throughput RNAi screening strategies need to be carefully designed, developed and optimized prior to large-scale screening. Two genome-wide RNAi screens were performed in parallel against the entry of pseudotyped Marburg virus and avian influenza virus H5N1, utilizing an HIV-1 based surrogate system, to identify host factors important for virus entry. A comparative analysis approach was employed in the data analysis, which alleviated systematic positional effects and reduced the number of false-positive virus-specific hits. The parallel nature of the strategy allows the host factors for a specific virus to be identified with a greatly reduced number of false positives in the initial screen, which is one of the major problems with high-throughput screening. The power of this strategy is illustrated by the genome-wide RNAi screen for host factors important for Marburg virus and/or avian influenza virus H5N1 described in this study. This strategy is particularly useful for highly pathogenic viruses, since pseudotyping allows high-throughput screens to be performed in biosafety level 2 (BSL-2) containment instead of the BSL-3 or BSL-4 required for the infectious viruses, with alleviated safety concerns. The screening strategy, together with the unique comparative analysis approach, makes the data more suitable for hit selection and enables us to identify virus-specific hits with a much lower false-positive rate.
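
    The comparative analysis amounts to scoring both screens against a shared plate layout so positional effects cancel. A sketch under that assumption; the robust z-score rule, the cutoff and the toy readouts are illustrative choices, not the authors' published pipeline:

    ```python
    import statistics

    def robust_z(values):
        """Median/MAD z-scores, robust to the few true hits on a plate."""
        med = statistics.median(values)
        mad = statistics.median(abs(v - med) for v in values) or 1e-9
        return [(v - med) / (1.4826 * mad) for v in values]

    def virus_specific_hits(infection_a, infection_b, cutoff=-3.0):
        """Wells whose knockdown blocks virus A but not virus B."""
        za, zb = robust_z(infection_a), robust_z(infection_b)
        return [i for i, (a, b) in enumerate(zip(za, zb))
                if a <= cutoff and b > cutoff]

    # Toy infection readouts for six wells: well 2 is specific to virus A,
    # well 3 to virus B.
    a = [100, 95, 10, 102, 98, 101]
    b = [100, 97, 99, 30, 96, 103]
    print(virus_specific_hits(a, b))  # [2]
    ```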

  15. Recycling isoelectric focusing with computer controlled data acquisition system. [for high resolution electrophoretic separation and purification of biomolecules

    NASA Technical Reports Server (NTRS)

    Egen, N. B.; Twitty, G. E.; Bier, M.

    1979-01-01

    Isoelectric focusing is a high-resolution technique for separating and purifying large peptides, proteins, and other biomolecules. The apparatus described in the present paper constitutes a new approach to fluid stabilization and increased throughput. Stabilization is achieved by flowing the process fluid uniformly through an array of closely spaced filter elements oriented parallel both to the electrodes and the direction of the flow. This seems to overcome the major difficulties of parabolic flow and electroosmosis at the walls, while limiting the convection to chamber compartments defined by adjacent spacers. Increased throughput is achieved by recirculating the process fluid through external heat exchange reservoirs, where the Joule heat is dissipated.

  16. High-Throughput, Motility-Based Sorter for Microswimmers and Gene Discovery Platform

    NASA Astrophysics Data System (ADS)

    Yuan, Jinzhou; Raizen, David; Bau, Haim

    2015-11-01

    Animal motility varies with genotype, disease progression, aging, and environmental conditions. In many studies, it is desirable to carry out high throughput motility-based sorting to isolate rare animals for, among other things, forward genetic screens to identify genetic pathways that regulate phenotypes of interest. Many commonly used screening processes are labor-intensive, lack sensitivity, and require extensive investigator training. Here, we describe a sensitive, high throughput, automated, motility-based method for sorting nematodes. Our method was implemented in a simple microfluidic device capable of sorting many thousands of animals per hour per module, and is amenable to parallelism. The device successfully enriched for known C. elegans motility mutants. Furthermore, using this device, we isolated low-abundance mutants capable of suppressing the somnogenic effects of the flp-13 gene, which regulates sleep-like quiescence in C. elegans. Subsequent genomic sequencing led to the identification of a flp-13-suppressor gene. This research was supported, in part, by NIH NIA Grant 5R03AG042690-02.

  17. An Automated High-throughput Array Microscope for Cancer Cell Mechanics

    NASA Astrophysics Data System (ADS)

    Cribb, Jeremy A.; Osborne, Lukas D.; Beicker, Kellie; Psioda, Matthew; Chen, Jian; O'Brien, E. Timothy; Taylor, Russell M., II; Vicci, Leandra; Hsiao, Joe Ping-Lin; Shao, Chong; Falvo, Michael; Ibrahim, Joseph G.; Wood, Kris C.; Blobe, Gerard C.; Superfine, Richard

    2016-06-01

    Changes in cellular mechanical properties correlate with the progression of metastatic cancer along the epithelial-to-mesenchymal transition (EMT). Few high-throughput methodologies exist that measure cell compliance, which can be used to understand the impact of genetic alterations or to screen the efficacy of chemotherapeutic agents. We have developed a novel array high-throughput microscope (AHTM) system that combines the convenience of the standard 96-well plate with the ability to image cultured cells and membrane-bound microbeads in twelve independently-focusing channels simultaneously, visiting all wells in eight steps. We use the AHTM and passive bead rheology techniques to determine the relative compliance of human pancreatic ductal epithelial (HPDE) cells, h-TERT transformed HPDE cells (HPNE), and four gain-of-function constructs related to EMT. The AHTM found HPNE, H-ras, Myr-AKT, and Bcl2 transfected cells more compliant relative to controls, consistent with parallel tests using atomic force microscopy and invasion assays, proving the AHTM capable of screening for changes in mechanical phenotype.

  18. Further development of a robust workup process for solution-phase high-throughput library synthesis to address environmental and sample tracking issues.

    PubMed

    Kuroda, Noritaka; Hird, Nick; Cork, David G

    2006-01-01

    During further improvement of a high-throughput, solution-phase synthesis system, new workup tools and apparatus for parallel liquid-liquid extraction and evaporation have been developed. A combination of in-house design and collaboration with external manufacturers has been used to address (1) environmental issues concerning solvent emissions and (2) sample tracking errors arising from manual intervention. A parallel liquid-liquid extraction unit, containing miniature high-speed magnetic stirrers for efficient mixing of organic and aqueous phases, has been developed for use on a multichannel liquid handler. Separation of the phases is achieved by dispensing them into a newly patented filter tube containing a vertical hydrophobic porous membrane, which allows only the organic phase to pass into collection vials positioned below. The vertical positioning of the membrane overcomes the hitherto dependence on the use of heavier-than-water, bottom-phase, organic solvents such as dichloromethane, which are restricted due to environmental concerns. Both small (6-mL) and large (60-mL) filter tubes were developed for parallel phase separation in library and template synthesis, respectively. In addition, an apparatus for parallel solvent evaporation was developed to (1) remove solvent from the above samples with highly efficient recovery and (2) avoid the movement of individual samples between their collection on a liquid handler and registration to prevent sample identification errors. The apparatus uses a diaphragm pump to achieve a dynamic circulating closed system with a heating block for the rack of 96 sample vials and an efficient condenser to trap the solvents. Solvent recovery is typically >98%, and convenient operation and monitoring has made the apparatus the first choice for removal of volatile solvents.

  19. One-dimensional acoustic standing waves in rectangular channels for flow cytometry.

    PubMed

    Austin Suthanthiraraj, Pearlson P; Piyasena, Menake E; Woods, Travis A; Naivar, Mark A; López, Gabriel P; Graves, Steven W

    2012-07-01

    Flow cytometry has become a powerful analytical tool for applications ranging from blood diagnostics to high throughput screening of molecular assemblies on microsphere arrays. However, instrument size, expense, throughput, and consumable use limit its use in resource poor areas of the world, as a component in environmental monitoring, and for detection of very rare cell populations. For these reasons, new technologies to improve the size and cost-to-performance ratio of flow cytometry are required. One such technology is the use of acoustic standing waves that efficiently concentrate cells and particles to the center of flow channels for analysis. The simplest form of this method uses one-dimensional acoustic standing waves to focus particles in rectangular channels. We have developed one-dimensional acoustic focusing flow channels that can be fabricated in simple capillary devices or easily microfabricated using photolithography and deep reactive ion etching. Image and video analysis demonstrates that these channels precisely focus single flowing streams of particles and cells for traditional flow cytometry analysis. Additionally, use of standing waves with increasing harmonics and in parallel microfabricated channels is shown to effectively create many parallel focused streams. Furthermore, we present the fabrication of an inexpensive optical platform for flow cytometry in rectangular channels and use of the system to provide precise analysis. The simplicity and low-cost of the acoustic focusing devices developed here promise to be effective for flow cytometers that have reduced size, cost, and consumable use. Finally, the straightforward path to parallel flow streams using one-dimensional multinode acoustic focusing, indicates that simple acoustic focusing in rectangular channels may also have a prominent role in high-throughput flow cytometry. Copyright © 2012 Elsevier Inc. All rights reserved.
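
    The physics behind one-dimensional focusing is the half-wave resonance condition: driving a channel of width w at f_n = n*c/(2w) places n pressure nodes across it, and particles collect at the node(s). A sketch with the speed of sound in water and a hypothetical channel width:

    ```python
    def resonance_mhz(channel_width_um: float, n: int = 1,
                      sound_speed_m_s: float = 1480.0) -> float:
        """Drive frequency (MHz) that puts n pressure nodes across a
        channel of the given width: f_n = n * c / (2 * w)."""
        width_m = channel_width_um * 1e-6
        return n * sound_speed_m_s / (2.0 * width_m) / 1e6

    # A 375-um-wide water-filled channel focuses to one central stream near:
    print(f"{resonance_mhz(375):.2f} MHz")       # ~1.97 MHz
    # Higher harmonics yield multiple parallel focused streams:
    print(f"{resonance_mhz(375, n=4):.2f} MHz")  # ~7.89 MHz
    ```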

  20. WDM mid-board optics for chip-to-chip wavelength routing interconnects in the H2020 ICT-STREAMS

    NASA Astrophysics Data System (ADS)

    Kanellos, G. T.; Pleros, N.

    2017-02-01

    Multi-socket server boards have emerged to increase the processing power density at the board level and to further flatten data center networks beyond leaf-spine architectures. Scaling the number of processors per board, however, challenges current electronic technologies, as it requires high-bandwidth interconnects and high-throughput switches with an increased number of ports that are currently unavailable. On-board optical interconnects have proved able to satisfy the bandwidth needs efficiently, but their use has been limited to parallel links without any smart routing functionality. With CWDM optical interconnects already a commodity, cyclical wavelength routing, proposed to serve datacom rack-to-rack and board-to-board communication, now becomes a promising on-board routing platform. ICT-STREAMS is a European research project that aims to combine WDM parallel on-board transceivers with a cyclical arrayed waveguide grating router (AWGR) in order to create a new board-level, chip-to-chip interconnection paradigm that turns WDM parallel transmission into a powerful wavelength routing platform capable of interconnecting multiple processors with unprecedented bandwidth and throughput capacity. Direct, any-to-any, on-board interconnection of multiple processors will significantly contribute to further flattening data centers and facilitating east-west communication. In the present communication, we present the ICT-STREAMS on-board wavelength routing architecture for multiple chip-to-chip interconnections and evaluate the overall system performance in terms of throughput and latency for several schemes and traffic profiles. We also review recent advances in the ICT-STREAMS platform's key enabling technologies, which span Si in-plane lasers, polymer-based electro-optical circuit boards, silicon photonics transceivers and photonic-crystal amplifiers.
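
    The any-to-any property of a cyclic AWGR follows from its wavelength-dependent port mapping; in one common sign convention, a signal entering input i on wavelength k exits output (i + k) mod N. The sketch below illustrates that generic rule only, not the ICT-STREAMS hardware:

    ```python
    def awgr_output(input_port: int, wavelength_index: int, n_ports: int) -> int:
        """Cyclic AWGR routing rule (one common convention)."""
        return (input_port + wavelength_index) % n_ports

    # Each row is a permutation of the outputs, so every input reaches
    # every output by choice of wavelength, with no blocking.
    N = 4
    for i in range(N):
        print([awgr_output(i, k, N) for k in range(N)])
    ```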

  1. Hydrogen storage materials discovery via high throughput ball milling and gas sorption.

    PubMed

    Li, Bin; Kaye, Steven S; Riley, Conor; Greenberg, Doron; Galang, Daniel; Bailey, Mark S

    2012-06-11

    The lack of a high capacity hydrogen storage material is a major barrier to the implementation of the hydrogen economy. To accelerate discovery of such materials, we have developed a high-throughput workflow for screening of hydrogen storage materials in which candidate materials are synthesized and characterized via highly parallel ball mills and volumetric gas sorption instruments, respectively. The workflow was used to identify mixed imides with significantly enhanced absorption rates relative to Li2Mg(NH)2. The most promising material, 2LiNH2:MgH2 + 5 atom % LiBH4 + 0.5 atom % La, exhibits the best balance of absorption rate, capacity, and cycle-life, absorbing >4 wt % H2 in 1 h at 120 °C after 11 absorption-desorption cycles.

  2. Evaluation of concurrent priority queue algorithms. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q.

    1991-02-01

    The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
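
    As a point of reference for the contention the thesis is designed around, the sketch below (ours, in Python rather than the thesis's implementation language) shows a coarse-grained priority queue: one lock guards the whole heap, so every insert and delete-min serializes. The parallel Fibonacci heap and priority pool avoid exactly this bottleneck by distributing locks across the structure.

        import heapq
        import threading

        # Coarse-grained baseline: a single lock serializes every operation,
        # so throughput flattens as threads contend.
        class LockedHeap:
            def __init__(self):
                self._heap = []
                self._lock = threading.Lock()

            def insert(self, priority, item):
                with self._lock:
                    heapq.heappush(self._heap, (priority, item))

            def delete_min(self):
                with self._lock:
                    return heapq.heappop(self._heap) if self._heap else None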

  3. Short-read, high-throughput sequencing technology for STR genotyping

    PubMed Central

    Bornman, Daniel M.; Hester, Mark E.; Schuetter, Jared M.; Kasoji, Manjula D.; Minard-Smith, Angela; Barden, Curt A.; Nelson, Scott C.; Godbold, Gene D.; Baker, Christine H.; Yang, Boyu; Walther, Jacquelyn E.; Tornes, Ivan E.; Yan, Pearlly S.; Rodriguez, Benjamin; Bundschuh, Ralf; Dickens, Michael L.; Young, Brian A.; Faith, Seth A.

    2013-01-01

    DNA-based methods for human identification principally rely upon genotyping of short tandem repeat (STR) loci. Electrophoretic-based techniques for variable-length classification of STRs are universally utilized, but are limited in that they have relatively low throughput and do not yield nucleotide sequence information. High-throughput sequencing technology may provide a more powerful instrument for human identification, but is not currently validated for forensic casework. Here, we present a systematic method to perform high-throughput genotyping analysis of the Combined DNA Index System (CODIS) STR loci using short-read (150 bp) massively parallel sequencing technology. Open source reference alignment tools were optimized to evaluate PCR-amplified STR loci using a custom designed STR genome reference. Evaluation of this approach demonstrated that the 13 CODIS STR loci and amelogenin (AMEL) locus could be accurately called from individual and mixture samples. Sensitivity analysis showed that as few as 18,500 reads, aligned to an in silico referenced genome, were required to genotype an individual (>99% confidence) for the CODIS loci. The power of this technology was further demonstrated by identification of variant alleles containing single nucleotide polymorphisms (SNPs) and the development of quantitative measurements (reads) for resolving mixed samples. PMID:25621315
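
    A toy illustration of length-based STR calling from short reads, assuming a hypothetical tetranucleotide motif; the published pipeline instead aligns reads to a custom STR reference genome, so this only conveys the underlying idea:

        import re
        from collections import Counter

        # Hypothetical motif and thresholds; real loci and chemistry differ.
        def str_allele(read, motif="TATC"):
            """Longest contiguous run of the motif in a read, in repeat units."""
            runs = re.findall(f"(?:{motif})+", read)
            return max(len(r) // len(motif) for r in runs) if runs else None

        def genotype(reads, motif="TATC", min_reads=10):
            counts = Counter(a for r in reads
                             if (a := str_allele(r, motif)) is not None)
            if sum(counts.values()) < min_reads:
                return None  # insufficient coverage for a confident call
            return sorted(a for a, _ in counts.most_common(2))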

  4. Parallel processing of genomics data

    NASA Astrophysics Data System (ADS)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, and the analysis of this enormous flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software is needed to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
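
    The per-marker statistics described here are embarrassingly parallel, which is what the proposed algorithm exploits. A minimal sketch of that pattern, with an invented data layout and a placeholder mean-difference score standing in for the paper's statistics:

        from multiprocessing import Pool
        from statistics import mean

        # Placeholder score: difference of class means per marker.
        def score_marker(args):
            marker_id, responders, non_responders = args
            return marker_id, mean(responders) - mean(non_responders)

        def parallel_scan(markers, n_workers=4):
            """markers: iterable of (id, responder_values, non_responder_values)."""
            with Pool(n_workers) as pool:
                return dict(pool.map(score_marker, markers, chunksize=256))

        if __name__ == "__main__":
            fake = [(i, [1.0, 1.2, 0.9], [0.4, 0.5, 0.6]) for i in range(10_000)]
            scores = parallel_scan(fake)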

  5. Nebula: reconstruction and visualization of scattering data in reciprocal space.

    PubMed

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H

    2015-04-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
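
    The core of such a reconstruction is mapping each detector pixel to a reciprocal-space vector q = k_out - k_in for elastic scattering. A geometry sketch under simplifying assumptions (beam along +z, flat detector, illustrative pixel pitch, distance and wavelength; not Nebula's actual parameters):

        import numpy as np

        # (px, py) are pixel offsets from the direct-beam position.
        def pixel_to_q(px, py, pitch_mm=0.172, dist_mm=200.0, wavelength_A=1.0):
            r = np.array([px * pitch_mm, py * pitch_mm, dist_mm])
            k = 2 * np.pi / wavelength_A          # |k| for elastic scattering
            k_out = k * r / np.linalg.norm(r)
            k_in = np.array([0.0, 0.0, k])
            return k_out - k_in                   # q in inverse angstroms

        q = pixel_to_q(100, -50)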

  6. Nebula: reconstruction and visualization of scattering data in reciprocal space

    PubMed Central

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H.

    2015-01-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware. PMID:25844083

  7. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    PubMed

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amounts of data, which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command-line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility in terms of input file handling provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as open-source software (https://github.com/pmadanecki/htdp).
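
    A small Python analogue (not the Java program itself) of the criteria-file-driven filtering HTDP supports: rows of a tab-delimited file are kept only if they match an itemized set of conditions read from an external file. File and column names are hypothetical:

        import csv

        def load_criteria(path):
            """One 'column<TAB>required_value' pair per line."""
            with open(path) as fh:
                return dict(line.rstrip("\n").split("\t", 1)
                            for line in fh if line.strip())

        def filter_rows(data_path, criteria_path, out_path):
            criteria = load_criteria(criteria_path)
            with open(data_path) as src, open(out_path, "w", newline="") as dst:
                reader = csv.DictReader(src, delimiter="\t")
                writer = csv.DictWriter(dst, fieldnames=reader.fieldnames,
                                        delimiter="\t")
                writer.writeheader()
                for row in reader:
                    if all(row.get(c) == v for c, v in criteria.items()):
                        writer.writerow(row)

        # Example usage (hypothetical files):
        # filter_rows("variants.tsv", "criteria.tsv", "filtered.tsv")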

  8. High-Throughput Tabular Data Processor – Platform independent graphical tool for processing large data sets

    PubMed Central

    Bałut, Magdalena; Buckley, Patrick G.; Ochocka, J. Renata; Bartoszewski, Rafał; Crossman, David K.; Messiaen, Ludwine M.; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amounts of data, which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command-line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility in terms of input file handling provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as open-source software (https://github.com/pmadanecki/htdp). PMID:29432475

  9. Parallel Electrochemical Treatment System and Application for Identifying Acid-Stable Oxygen Evolution Electrocatalysts

    DOE PAGES

    Jones, Ryan J. R.; Shinde, Aniketa; Guevarra, Dan; ...

    2015-01-05

    Many energy technologies require electrochemical stability or preactivation of functional materials. Due to the long experiment duration required for either electrochemical preactivation or evaluation of operational stability, parallel screening is required to enable high-throughput experimentation. We found that imposing operational electrochemical conditions on a library of materials in parallel creates several opportunities for experimental artifacts. We discuss the electrochemical engineering principles and operational parameters that mitigate artifacts in the parallel electrochemical treatment system. We also demonstrate the effects of resistive losses within the planar working electrode through a combination of finite element modeling and illustrative experiments. Operation of the parallel-plate, membrane-separated electrochemical treatment system is demonstrated by exposing a composition library of mixed metal oxides to oxygen evolution conditions in 1 M sulfuric acid for 2 h. This application is particularly important because the electrolysis and photoelectrolysis of water are promising future energy technologies inhibited by the lack of highly active, acid-stable catalysts containing only earth-abundant elements.

  10. 75 FR 42105 - Memorandum of Understanding: Food and Drug Administration and the National Institutes of Health...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-20

    ... of animals in regulatory testing is anticipated to occur in parallel with an increased ability to... phylogenetically lower animal species (e.g., fish, worms), as well as high throughput whole genome analytical... result in test methods for toxicity testing that are more scientifically and economically efficient and...

  11. MAPPER: high-throughput maskless lithography

    NASA Astrophysics Data System (ADS)

    Wieland, M. J.; de Boer, G.; ten Berge, G. F.; Jager, R.; van de Peut, T.; Peijster, J. J. M.; Slot, E.; Steenbrink, S. W. H. K.; Teepen, T. F.; van Veen, A. H. V.; Kampherbeek, B. J.

    2009-03-01

    Maskless electron beam lithography, or electron beam direct write, has been around for a long time in the semiconductor industry and was pioneered from the mid-1960s onwards. This technique has been used for mask-writing applications as well as device engineering and, in some cases, chip manufacturing. However, because of its relatively low throughput compared to optical lithography, electron beam lithography has never been the mainstream lithography technology. To extend optical lithography, double patterning (as a bridging technology) and EUV lithography are currently being explored. Irrespective of the technical viability of both approaches, one thing seems clear: they will be expensive [1]. MAPPER Lithography is developing a maskless lithography technology based on massively parallel electron-beam writing with high-speed optical data transport for switching the electron beams. In this way optical columns can be made with a throughput of 10-20 wafers per hour. By clustering several of these columns together, high throughputs can be realized in a small footprint. This enables a highly cost-competitive alternative to double patterning and EUV alternatives. In 2007 MAPPER reached its Proof of Lithography milestone by exposing, in its Demonstrator, 45 nm half-pitch structures with 110 electron beams in parallel, where all the beams were individually switched on and off [2]. In 2008 MAPPER took the next step in its development by building several tools. The objective of building these tools is to enable semiconductor companies to verify tool performance in their own environment. To enable this, the tools will have a 300 mm wafer stage in addition to a 110-beam optics column. First exposures at 45 nm half-pitch resolution have been performed and analyzed. On the same wafer it is observed that all beams print; based on analysis of 11 beams, the CD for the different patterns is within 2.2 nm of target and the CD uniformity for the different patterns is better than 2.8 nm.

  12. Development of micropump-actuated negative pressure pinched injection for parallel electrophoresis on array microfluidic chip.

    PubMed

    Li, Bowei; Jiang, Lei; Xie, Hua; Gao, Yan; Qin, Jianhua; Lin, Bingcheng

    2009-09-01

    A micropump-actuated negative pressure pinched injection method is developed for parallel electrophoresis on a multi-channel LIF detection system. The system has a home-made device that can individually control 16-port solenoid valves and a high-voltage power supply. The laser beam is split and distributed to the array of separation channels for detection. The hybrid glass-PDMS microfluidic chip comprises two common reservoirs, four separation channels coupled to their respective pneumatic micropumps, and two reference channels. Because pressure is used as the driving force, the proposed method has no sample bias effect during separation. Only one high-voltage supply is needed for separation, regardless of the number of channels, which is significant for high-throughput analysis, and the time for sample loading is shortened to 1 s. In addition, the integrated micropumps provide a versatile interface for coupling with other functional units to satisfy more complicated demands. The performance is verified by separation of a DNA marker and Hepatitis B virus DNA samples. This method is also expected to offer the throughput needed for DNA analysis in the field of disease diagnosis.

  13. Arioc: high-throughput read alignment with GPU-accelerated exploration of the seed-and-extend search space

    PubMed Central

    Budavari, Tamas; Langmead, Ben; Wheelan, Sarah J.; Salzberg, Steven L.; Szalay, Alexander S.

    2015-01-01

    When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc’s reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license. PMID:25780763
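
    The list-processing formulation can be conveyed with a serial sketch: hash fixed-length seeds of a read against a genome index, bin hits by diagonal (genome position minus read offset), and extend only the most-voted candidate loci. Arioc performs the sort and reduction steps on the GPU; the seed length below is an arbitrary illustration:

        from collections import Counter, defaultdict

        SEED = 16  # arbitrary seed length for illustration

        def build_index(genome):
            index = defaultdict(list)
            for pos in range(len(genome) - SEED + 1):
                index[genome[pos:pos + SEED]].append(pos)
            return index

        def candidate_loci(read, index, top_n=5):
            votes = Counter()
            for off in range(len(read) - SEED + 1):
                for pos in index.get(read[off:off + SEED], ()):
                    votes[pos - off] += 1   # diagonal = candidate alignment start
            return [locus for locus, _ in votes.most_common(top_n)]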

  14. An Automated, High-Throughput System for GISAXS and GIWAXS Measurements of Thin Films

    NASA Astrophysics Data System (ADS)

    Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander

    Grazing incidence small-angle X-ray scattering (GISAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn, and measures each according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility, and will be available to users via a web portal that facilitates highly parallelized analysis.

  15. An ultra-HTS process for the identification of small molecule modulators of orphan G-protein-coupled receptors.

    PubMed

    Cacace, Angela; Banks, Martyn; Spicer, Timothy; Civoli, Francesca; Watson, John

    2003-09-01

    G-protein-coupled receptors (GPCRs) are the most successful target proteins for drug discovery research to date. More than 150 orphan GPCRs of potential therapeutic interest have been identified for which no activating ligands or biological functions are known. One of the greatest challenges in the pharmaceutical industry is to link these orphan GPCRs with human diseases. Highly automated parallel approaches that integrate ultra-high throughput and focused screening can be used to identify small molecule modulators of orphan GPCRs. These small molecules can then be employed as pharmacological tools to explore the function of orphan receptors in models of human disease. In this review, we describe methods that utilize powerful ultra-high-throughput screening technologies to identify surrogate ligands of orphan GPCRs.

  16. A high-throughput, multi-channel photon-counting detector with picosecond timing

    NASA Astrophysics Data System (ADS)

    Lapington, J. S.; Fraser, G. W.; Miller, G. M.; Ashton, T. J. R.; Jarron, P.; Despeisse, M.; Powolny, F.; Howorth, J.; Milnes, J.

    2009-06-01

    High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small-pore microchannel plate devices with very high time resolution, and high-speed multi-channel ASIC electronics developed for the LHC at CERN, provides the necessary building blocks for a high-throughput detector system with up to 1024 parallel counting channels and 20 ps time resolution. We describe the detector and electronic design, discuss the current status of the HiContent project and present the results from a 64-channel prototype system. In the absence of an operational detector, we present measurements of the electronics performance using a pulse generator to simulate detector events. Event timing results from the NINO high-speed front-end ASIC captured using a fast digital oscilloscope are compared with data taken with the proposed electronic configuration which uses the multi-channel HPTDC timing ASIC.

  17. Parallel confocal detection of single biomolecules using diffractive optics and integrated detector units.

    PubMed

    Blom, H; Gösch, M

    2004-04-01

    Over the past few years we have witnessed a tremendous surge of interest in so-called array-based miniaturised analytical systems, due to their value as extremely powerful tools for high-throughput sequence analysis, drug discovery and development, and diagnostic tests in medicine (see articles in Issue 1). Terminologies that have been used to describe these array-based bioscience systems include (but are not limited to): DNA-chip, microarrays, microchip, biochip, DNA-microarrays and genome chip. Potential technological benefits of introducing these miniaturised analytical systems include improved accuracy, multiplexing, lower sample and reagent consumption, disposability, and decreased analysis times, to mention a few examples. Among the many alternative principles of detection-analysis (e.g. chemiluminescence, electroluminescence and conductivity), fluorescence-based techniques are widely used, examples being fluorescence resonance energy transfer, fluorescence quenching, fluorescence polarisation, time-resolved fluorescence, and fluorescence fluctuation spectroscopy (see articles in Issue 11). Time-dependent fluctuations of fluorescent biomolecules with different molecular properties, like molecular weight, translational and rotational diffusion time, colour and lifetime, potentially provide all the kinetic and thermodynamic information required in analysing complex interactions. In this mini-review article, we present recent extensions aimed at implementing parallel laser excitation and parallel fluorescence detection, which can lead to a further increase in throughput in miniaturised array-based analytical systems. We also report on the development and characterisation of a multiplexing extension that allows multifocal laser excitation together with matched parallel fluorescence detection for parallel confocal dynamical fluorescence fluctuation studies at the single-biomolecule level.

  18. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    PubMed

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell-specific sequence reads from single cells (21,000 single cells/h), resulting in enhanced sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals that of conventional techniques, with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single-cell sequencing. sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  19. Algorithm for fast event parameters estimation on GEM acquired data

    NASA Astrophysics Data System (ADS)

    Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz

    2016-09-01

    We present a study of a software-hardware environment for developing fast computational methods with high throughput and low latency, which can be used as a back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems fed by a large volume of input from electronic-sensor-based front-ends. We discuss parallelization possibilities and test them on Intel HPC solutions, with consideration of applications to Gas Electron Multiplier (GEM) measurement systems.

  20. High-Throughput Nanofabrication of Infra-red and Chiral Metamaterials using Nanospherical-Lens Lithography

    PubMed Central

    Chang, Yun-Chorng; Lu, Sih-Chen; Chung, Hsin-Chan; Wang, Shih-Ming; Tsai, Tzung-Da; Guo, Tzung-Fang

    2013-01-01

    Various infra-red and planar chiral metamaterials were fabricated using modified Nanospherical-Lens Lithography. By replacing the light source with a hand-held ultraviolet lamp, its asymmetric light emission pattern produces elliptical photoresist holes after passing through the spheres. The long axis of the ellipse is parallel to the lamp direction. The fabricated ellipse arrays exhibit localized surface plasmon resonance in the mid-infra-red and are ideal platforms for surface-enhanced infra-red absorption (SEIRA). We also demonstrate a way to design and fabricate complicated patterns by tuning parameters in each exposure step. This method is both high-throughput and low-cost, making it a powerful tool for future infra-red metamaterial applications. PMID:24284941

  1. Application of visual basic in high-throughput mass spectrometry-directed purification of combinatorial libraries.

    PubMed

    Li, B; Chan, E C Y

    2003-01-01

    We present an approach to customize the sample submission process for high-throughput purification (HTP) of combinatorial parallel libraries using preparative liquid chromatography electrospray ionization mass spectrometry. In this study, Visual Basic and Visual Basic for Applications programs were developed using Microsoft Visual Basic 6 and Microsoft Excel 2000, respectively. These programs are subsequently applied for the seamless electronic submission and handling of data for HTP. Functions were incorporated into these programs that allow medicinal chemists to perform on-line verification of purification status and on-line retrieval of post-purification data. The application of these user-friendly and cost-effective programs in our HTP technology has greatly increased our work efficiency by reducing paperwork and manual manipulation of data.

  2. Parallel Workflow for High-Throughput (>1,000 Samples/Day) Quantitative Analysis of Human Insulin-Like Growth Factor 1 Using Mass Spectrometric Immunoassay

    PubMed Central

    Oran, Paul E.; Trenchevska, Olgica; Nedelkov, Dobrin; Borges, Chad R.; Schaab, Matthew R.; Rehder, Douglas S.; Jarvis, Jason W.; Sherma, Nisha D.; Shen, Luhui; Krastins, Bryan; Lopez, Mary F.; Schwenke, Dawn C.; Reaven, Peter D.; Nelson, Randall W.

    2014-01-01

    Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with performance rating of >1,000 samples/day, and a capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against commercially available IGF1 ELISA via Bland-Altman method comparison test, resulting in a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF-MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays, and is on par with that of the enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of the IGF1), demonstrating the capability of IGF1 MSIA to detect point mutations and posttranslational modifications. PMID:24664114

  3. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
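
    The control mechanism reduces to a scheduling pattern: split each frame of A-scans into device-sized batches and process the batches concurrently. In the sketch below, worker processes stand in for GPUs, a plain FFT stands in for the GD-OCM reconstruction, and the batch size is an arbitrary placeholder for the memory-optimal value the authors select per GPU:

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def process_batch(batch):
            # Placeholder for the Fourier-domain reconstruction of one batch.
            return np.abs(np.fft.fft(batch, axis=-1))

        def process_frame(frame, n_devices=4, batch_cols=250):
            batches = [frame[i:i + batch_cols]
                       for i in range(0, len(frame), batch_cols)]
            with ProcessPoolExecutor(max_workers=n_devices) as pool:
                return np.concatenate(list(pool.map(process_batch, batches)))

        if __name__ == "__main__":
            volume = process_frame(np.random.rand(1000, 1024))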

  4. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  5. Arrays of High-Aspect Ratio Microchannels for High-Throughput Isolation of Circulating Tumor Cells (CTCs).

    PubMed

    Hupert, Mateusz L; Jackson, Joshua M; Wang, Hong; Witek, Małgorzata A; Kamande, Joyce; Milowsky, Matthew I; Whang, Young E; Soper, Steven A

    2014-10-01

    Microsystem-based technologies are providing new opportunities in the area of in vitro diagnostics due to their ability to provide process automation enabling point-of-care operation. As an example, microsystems used for the isolation and analysis of circulating tumor cells (CTCs) from complex, heterogeneous samples in an automated fashion with improved recoveries and selectivity are providing new opportunities for this important biomarker. Unfortunately, many of the existing microfluidic systems lack the throughput capabilities and/or are too expensive to manufacture to warrant their widespread use in clinical testing scenarios. Here, we describe a disposable, all-polymer, microfluidic system for the high-throughput (HT) isolation of CTCs directly from whole blood inputs. The device employs an array of high aspect ratio (HAR), parallel, sinusoidal microchannels (25 µm × 150 µm; W × D; AR = 6.0) with walls covalently decorated with anti-EpCAM antibodies to provide affinity-based isolation of CTCs. Channel width, which is similar to an average CTC diameter (12-25 µm), plays a critical role in maximizing the probability of cell/wall interactions and allows for achieving high CTC recovery. The extended channel depth allows for increased throughput at the optimized flow velocity (2 mm/s in a microchannel); maximizes cell recovery, and prevents clogging of the microfluidic channels during blood processing. Fluidic addressing of the microchannel array with a minimal device footprint is provided by large cross-sectional area feed and exit channels poised orthogonal to the network of the sinusoidal capillary channels (so-called Z-geometry). Computational modeling was used to confirm uniform addressing of the channels in the isolation bed. Devices with various numbers of parallel microchannels ranging from 50 to 320 have been successfully constructed. Cyclic olefin copolymer (COC) was chosen as the substrate material due to its superior properties during UV-activation of the HAR microchannels surfaces prior to antibody attachment. Operation of the HT-CTC device has been validated by isolation of CTCs directly from blood secured from patients with metastatic prostate cancer. High CTC sample purities (low number of contaminating white blood cells, WBCs) allowed for direct lysis and molecular profiling of isolated CTCs.

  6. Competitive Genomic Screens of Barcoded Yeast Libraries

    PubMed Central

    Urbanus, Malene; Proctor, Michael; Heisler, Lawrence E.; Giaever, Guri; Nislow, Corey

    2011-01-01

    By virtue of advances in next generation sequencing technologies, we have access to new genome sequences almost daily. The tempo of these advances is accelerating, promising greater depth and breadth. In light of these extraordinary advances, the need for fast, parallel methods to define gene function becomes ever more important. Collections of genome-wide deletion mutants in yeasts and E. coli have served as workhorses for functional characterization of gene function, but this approach is not scalable: current gene-deletion approaches require each of the thousands of genes that comprise a genome to be deleted and verified. Only after this work is complete can we pursue high-throughput phenotyping. Over the past decade, our laboratory has refined a portfolio of competitive, miniaturized, high-throughput genome-wide assays that can be performed in parallel. This parallelization is possible because of the inclusion of DNA 'tags', or 'barcodes', in each mutant, with the barcode serving as a proxy for the mutation; barcode abundance can then be measured to assess mutant fitness. In this study, we seek to fill the gap between DNA sequence and barcoded mutant collections. To accomplish this we introduce a combined transposon disruption-barcoding approach that opens up parallel barcode assays to newly sequenced, but poorly characterized microbes. To illustrate this approach we present a new Candida albicans barcoded disruption collection and describe how both microarray-based and next generation sequencing-based platforms can be used to collect 10,000 - 1,000,000 gene-gene and drug-gene interactions in a single experiment. PMID:21860376
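
    The barcode-as-proxy readout can be sketched in a few lines: count each mutant's barcode in reads from treated and control pools, then report a log2 fold change as a relative-fitness estimate. The fixed flanking sequences and pseudocount below are invented simplifications:

        import math
        from collections import Counter

        UP, DOWN = "GTCGAC", "GGATCC"   # invented flanking sequences

        def extract_barcode(read):
            i = read.find(UP)
            j = read.find(DOWN, i + len(UP)) if i != -1 else -1
            return read[i + len(UP):j] if j != -1 else None

        def fitness_scores(treated_reads, control_reads, pseudo=1):
            t = Counter(filter(None, map(extract_barcode, treated_reads)))
            c = Counter(filter(None, map(extract_barcode, control_reads)))
            return {bc: math.log2((t[bc] + pseudo) / (c[bc] + pseudo))
                    for bc in set(t) | set(c)}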

  7. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei; Shabbir, Faizan; Gong, Chao

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.

  8. A 64Cycles/MB, Luma-Chroma Parallelized H.264/AVC Deblocking Filter for 4K × 2K Applications

    NASA Astrophysics Data System (ADS)

    Shen, Weiwei; Fan, Yibo; Zeng, Xiaoyang

    In this paper, a high-throughput deblocking filter is presented for the H.264/AVC standard, catering to video applications with 4K × 2K (4096 × 2304) ultra-definition resolution. In order to strengthen the parallelism without simply increasing the area, we propose a luma-chroma parallel method. Meanwhile, this work reduces the number of processing cycles, the amount of external memory traffic and the working frequency by using triple four-stage pipeline filters and a luma-chroma interlaced sequence. Furthermore, it eliminates most unnecessary off-chip memory bandwidth with a highly reusable memory scheme, and adopts a “sliding window” buffer scheme. As a result, our design can support 4K × 2K at 30 fps applications at a working frequency of only 70.8 MHz.
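
    The headline figures are mutually consistent, as a quick back-of-envelope check shows (64 cycles per 16 × 16 macroblock is the throughput the title quotes):

        # Check of the record's own numbers: one 16 x 16 macroblock per
        # 64 cycles, 4096 x 2304 pixels per frame, 30 frames per second.
        macroblocks_per_frame = (4096 // 16) * (2304 // 16)   # 36,864
        required_hz = macroblocks_per_frame * 30 * 64
        print(required_hz)   # 70,778,880 cycles/s, i.e. the quoted 70.8 MHz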

  9. The Nano-Patch-Clamp Array: Microfabricated Glass Chips for High-Throughput Electrophysiology

    NASA Astrophysics Data System (ADS)

    Fertig, Niels

    2003-03-01

    Electrophysiology (i.e. patch clamping) remains the gold standard for pharmacological testing of putative ion channel active drugs (ICADs), but suffers from low throughput. A new ion channel screening technology based on microfabricated glass chip devices will be presented. The glass chips contain very fine apertures, which are used for whole-cell voltage clamp recordings as well as single channel recordings from mammalian cell lines. Chips containing multiple patch clamp wells will be used in a first bench-top device, which will allow perfusion and electrical readout of each well. This scalable technology will allow for automated, rapid and parallel screening on ion channel drug targets.

  10. Turbulent flow chromatography TFC-tandem mass spectrometry supporting in vitro/vivo studies of NCEs in high throughput fashion.

    PubMed

    Verdirame, Maria; Veneziano, Maria; Alfieri, Anna; Di Marco, Annalise; Monteagudo, Edith; Bonelli, Fabio

    2010-03-11

    Turbulent Flow Chromatography (TFC) is a powerful approach for on-line extraction in bioanalytical studies. It improves sensitivity and reduces sample preparation time, two factors that are of primary importance in drug discovery. In this paper the application of the ARIA system to the analytical support of in vivo pharmacokinetics (PK) and in vitro drug metabolism studies is described, with an emphasis on high-throughput optimization. For PK studies, a comparison between acetonitrile plasma protein precipitation (APPP) and TFC was carried out. Our optimized TFC methodology gave better S/N ratios and a lower limit of quantification (LOQ) than conventional procedures. A robust and high-throughput analytical method to support hepatocyte metabolic stability screening of new chemical entities was developed by hyphenation of TFC with mass spectrometry. An in-loop dilution injection procedure was implemented to overcome one of the main issues when using TFC, that is, the early elution of hydrophilic compounds, which results in low recoveries. A comparison between off-line solid phase extraction (SPE) and TFC was also carried out, and recovery, sensitivity (LOQ), matrix effect and robustness were evaluated. The use of two parallel columns in the configuration of the system provided a further increase in throughput. Copyright 2009 Elsevier B.V. All rights reserved.

  11. Research progress of plant population genomics based on high-throughput sequencing.

    PubMed

    Wang, Yun-sheng

    2016-08-01

    Population genomics, a new paradigm for population genetics, combines the concepts and techniques of genomics with the theoretical framework of population genetics and improves our understanding of microevolution through identification of site-specific and genome-wide effects using genome-wide genotyping of polymorphic sites. With the appearance and improvement of next-generation high-throughput sequencing technology, the number of plant species with complete genome sequences has increased rapidly, and large-scale resequencing has also been carried out in recent years. Parallel sequencing has also been done in some plant species without complete genome sequences. These studies have greatly promoted the development of population genomics and deepened our understanding of the genetic diversity, level of linkage disequilibrium, selection effects, demographic history and molecular mechanisms of complex traits of the relevant plant populations at a genomic level. In this review, I briefly introduce the concepts and research methods of population genomics and summarize the research progress of plant population genomics based on high-throughput sequencing. I also discuss the prospects as well as existing problems of plant population genomics in order to provide references for related studies.

  12. Projection Exposure with Variable Axis Immersion Lenses: A High-Throughput Electron Beam Approach to “Suboptical” Lithography

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Hans

    1995-12-01

    IBM's high-throughput e-beam stepper approach, PRojection Exposure with Variable Axis Immersion Lenses (PREVAIL), is reviewed. The PREVAIL concept combines technology building blocks of our probe-forming EL-3 and EL-4 systems with the exposure efficiency of pattern projection. The technology represents an extension of the shaped-beam approach toward massively parallel pixel projection. As demonstrated, the use of variable-axis lenses can provide large field coverage through reduction of the off-axis aberrations which limit the performance of conventional projection systems. Subfield pattern sections containing 10^7 or more pixels can be electronically selected (mask plane), projected and positioned (wafer plane) at high speed. To generate the entire chip pattern, subfields must be stitched together sequentially in a combination of electronic and mechanical positioning of mask and wafer. The PREVAIL technology promises throughput levels competitive with those of optical steppers at superior resolution. The PREVAIL project is being pursued to demonstrate the viability of the technology and to develop an e-beam alternative to “suboptical” lithography.

  13. High-Throughput, Adaptive FFT Architecture for FPGA-Based Spaceborne Data Processors

    NASA Technical Reports Server (NTRS)

    NguyenKobayashi, Kayla; Zheng, Jason X.; He, Yutao; Shah, Biren N.

    2011-01-01

    Exponential growth in microelectronics technology such as field-programmable gate arrays (FPGAs) has enabled high-performance spaceborne instruments with increasing onboard data processing capabilities. As a commonly used digital signal processing (DSP) building block, the fast Fourier transform (FFT) has been of great interest in onboard data processing applications, which need to strike a reasonable balance between high performance (throughput, block size, etc.) and low resource usage (power, silicon footprint, etc.). It is also desirable that a single design can be reused and adapted for instruments with different requirements. The Multi-Pass Wide Kernel FFT (MPWK-FFT) architecture was developed, in which the high-throughput benefits of the parallel FFT structure and the low resource usage of Singleton's single-butterfly method are exploited. The result is a wide-kernel, multipass, adaptive FFT architecture. The 32K-point MPWK-FFT architecture includes 32 radix-2 butterflies, 64 FIFOs to store the real inputs, 64 FIFOs to store the imaginary inputs, complex twiddle factor storage, and FIFO logic to route the outputs to the correct FIFO. The inputs are stored in sequential fashion in the FIFOs, and the outputs of each butterfly are written sequentially first into the even FIFO, then the odd FIFO. Because of the order in which the outputs are written into the FIFOs, the depth of the even FIFOs (768 each) is 1.5 times that of the odd FIFOs (512 each). The total memory needed for data storage, assuming that each sample is 36 bits, is 2.95 Mbits. The twiddle factors are stored in internal ROM inside the FPGA for fast access; the total memory size to store the twiddle factors is 589.9 Kbits. This FFT structure combines the benefits of high throughput from parallel FFT kernels and low resource usage from multi-pass FFT kernels with the desired adaptability. Space instrument missions that need onboard FFT capabilities, such as the proposed DESDynI, SWOT (Surface Water Ocean Topography), and Europa sounding radar missions, would greatly benefit from this technology, with significant reductions in non-recurring cost and risk.
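
    The quoted storage figures can be reproduced, assuming half of the 64 FIFOs per component are even (depth 768) and half odd (depth 512); that even/odd split is inferred, not stated in the record:

        # Storage check against the quoted 2.95 Mbits (data) and 589.9 Kbits
        # (twiddle factors), with 36-bit samples and a 32K-point FFT.
        bits_per_sample = 36
        # real + imaginary; assumed 32 even and 32 odd FIFOs per component
        data_bits = 2 * (32 * 768 + 32 * 512) * bits_per_sample
        twiddle_bits = (32768 // 2) * bits_per_sample   # 16,384 factors
        print(data_bits / 1e6, "Mbit")     # 2.94912  -> quoted 2.95 Mbits
        print(twiddle_bits / 1e3, "Kbit")  # 589.824  -> quoted 589.9 Kbits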

  14. Condor-COPASI: high-throughput computing for biochemical networks

    PubMed Central

    2012-01-01

    Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage. PMID:22834945

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  16. High Throughput Assays for Exposure Science (NIEHS OHAT ...

    EPA Pesticide Factsheets

    High throughput screening (HTS) data that characterize chemically induced biological activity have been generated for thousands of chemicals by the US interagency Tox21 and the US EPA ToxCast programs. In many cases there are no data available for comparing bioactivity from HTS with relevant human exposures. The EPA’s ExpoCast program is developing high-throughput approaches to generate the needed exposure estimates using existing databases and new, high-throughput measurements. The exposure pathway (i.e., the route of chemical from manufacture to human intake) significantly impacts the level of exposure. The presence, concentration, and formulation of chemicals in consumer products and articles of commerce (e.g., clothing) can therefore provide critical information for estimating risk. We have found that there are only limited data available on the chemical constituents (e.g., flame retardants, plasticizers) within most articles of commerce. Furthermore, the presence of some chemicals in otherwise well characterized products may be due to product packaging. We are analyzing sample consumer products using two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC-TOF/MS), which is suited for forensic investigation of chemicals in complex matrices (including toys, cleaners, and food). In parallel, we are working to create a reference library of retention times and spectral information for the entire Tox21 chemical library. In an examination of five p

  17. High-resolution, high-throughput imaging with a multibeam scanning electron microscope.

    PubMed

    Eberle, A L; Mikula, S; Schalek, R; Lichtman, J; Knothe Tate, M L; Zeidler, D

    2015-08-01

    Electron-electron interactions and detector bandwidth limit the maximal imaging speed of single-beam scanning electron microscopes. We use multiple electron beams in a single column and detect secondary electrons in parallel to increase the imaging speed by close to two orders of magnitude and demonstrate imaging for a variety of samples ranging from biological brain tissue to semiconductor wafers. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  18. Molecular profiling of single circulating tumor cells from lung cancer patients.

    PubMed

    Park, Seung-Min; Wong, Dawson J; Ooi, Chin Chun; Kurtz, David M; Vermesh, Ophir; Aalipour, Amin; Suh, Susie; Pian, Kelsey L; Chabon, Jacob J; Lee, Sang Hun; Jamali, Mehran; Say, Carmen; Carter, Justin N; Lee, Luke P; Kuschner, Ware G; Schwartz, Erich J; Shrager, Joseph B; Neal, Joel W; Wakelee, Heather A; Diehn, Maximilian; Nair, Viswam S; Wang, Shan X; Gambhir, Sanjiv S

    2016-12-27

    Circulating tumor cells (CTCs) are established cancer biomarkers for the "liquid biopsy" of tumors. Molecular analysis of single CTCs, which recapitulate primary and metastatic tumor biology, remains challenging because current platforms have limited throughput, are expensive, and are not easily translatable to the clinic. Here, we report a massively parallel, multigene-profiling nanoplatform to compartmentalize and analyze hundreds of single CTCs. After high-efficiency magnetic collection of CTC from blood, a single-cell nanowell array performs CTC mutation profiling using modular gene panels. Using this approach, we demonstrated multigene expression profiling of individual CTCs from non-small-cell lung cancer (NSCLC) patients with remarkable sensitivity. Thus, we report a high-throughput, multiplexed strategy for single-cell mutation profiling of individual lung cancer CTCs toward minimally invasive cancer therapy prediction and disease monitoring.

  19. Multiplexed high resolution soft x-ray RIXS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Y.-D.; Voronov, D.; Warwick, T.

    2016-07-27

    High-resolution resonant inelastic X-ray scattering (RIXS) is a technique that allows us to probe the electronic excitations of complex materials with unprecedented precision. However, the RIXS process has a low cross section, compounded by the fact that the optical spectrometers used to analyze the scattered photons can only collect a small solid angle and have a small overall efficiency. Here we present a method to significantly increase the throughput of RIXS systems by energy multiplexing, so that a complete RIXS map of scattered intensity versus photon energy in and photon energy out can be recorded simultaneously. This parallel acquisition scheme should provide a gain in throughput of over 100. A system based on this principle, QERLIN, is under construction at the Advanced Light Source (ALS).

  20. A Microfluidic Platform for High-Throughput Multiplexed Protein Quantitation

    PubMed Central

    Volpetti, Francesca; Garcia-Cordero, Jose; Maerkl, Sebastian J.

    2015-01-01

    We present a high-throughput microfluidic platform capable of quantitating up to 384 biomarkers in 4 distinct samples by immunoassay. The microfluidic device contains 384 unit cells, which can be individually programmed with pairs of capture and detection antibody. Samples are quantitated in each unit cell by four independent MITOMI detection areas, allowing four samples to be analyzed in parallel for a total of 1,536 assays per device. We show that the device can be pre-assembled and stored for weeks at elevated temperature and we performed proof-of-concept experiments simultaneously quantitating IL-6, IL-1β, TNF-α, PSA, and GFP. Finally, we show that the platform can be used to identify functional antibody combinations by screening 64 antibody combinations requiring up to 384 unique assays per device. PMID:25680117

  1. New Bandwidth Efficient Parallel Concatenated Coding Schemes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.

    1996-01-01

    We propose a new solution to parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable iterative decoding structure. Examples are given for a throughput of 2 bits/s/Hz with 8PSK and 16QAM signal constellations.

  2. Modelling for Ship Design and Production

    DTIC Science & Technology

    1991-09-01

    the physical production process. The product has to be delivered within the chain of order processing. The process “ship production” is defined by the...environment is of increasing importance. Changing product types, complexity and parallelism of order processing, short throughput times and fixed due...specialized and high-quality products under manufacturing conditions which ensure economic and effective order processing. Mapping these main

  3. Supercomputing with toys: harnessing the power of NVIDIA 8800GTX and playstation 3 for bioinformatics problem.

    PubMed

    Wilson, Justin; Dai, Manhong; Jakupovic, Elvis; Watson, Stanley; Meng, Fan

    2007-01-01

    Modern video cards and game consoles typically have much better performance-to-price ratios than general-purpose CPUs. The parallel processing capabilities of game hardware are well suited for high-throughput biomedical data analysis. Our initial results suggest that game hardware is a cost-effective platform for some computationally demanding bioinformatics problems.

  4. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burden from network processors (NPs). This article presents a high-performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps, with over 2,100 full SSL handshakes per second, at a clock rate of 95 MHz.
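
    A software analogue of the descriptor-based control flow described above may help make the idea concrete: a large packet is fragmented and the fragments are fanned out over parallel "engines", then reassembled in order. This is a hedged sketch only; the fragment size, engine count, and XOR stand-in cipher are illustrative assumptions, not the NSP's actual interface.

    ```python
    # Sketch of descriptor-style fragmentation and parallel dispatch (assumed
    # parameters, software stand-ins for the hardware crypto engine arrays).
    from concurrent.futures import ThreadPoolExecutor

    FRAGMENT_SIZE = 1024    # bytes per descriptor (assumed)
    NUM_ENGINES = 8         # parallel crypto engines (assumed)

    def engine(fragment: bytes) -> bytes:
        # Stand-in for a hardware cipher engine; XOR keeps the sketch self-contained.
        return bytes(b ^ 0x5A for b in fragment)

    def process_packet(payload: bytes) -> bytes:
        fragments = [payload[i:i + FRAGMENT_SIZE]
                     for i in range(0, len(payload), FRAGMENT_SIZE)]
        with ThreadPoolExecutor(max_workers=NUM_ENGINES) as pool:
            return b"".join(pool.map(engine, fragments))   # map preserves order

    print(len(process_packet(b"\x00" * 9000)))   # jumbo-frame-sized payload
    ```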

  5. Multiplex enrichment quantitative PCR (ME-qPCR): a high-throughput, highly sensitive detection method for GMO identification.

    PubMed

    Fu, Wei; Zhu, Pengyu; Wei, Shuang; Du, Zhixin; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang

    2017-04-01

    Among high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex, due to limitations imposed by multiplex primer interactions, and this throughput cannot meet the demands of high-throughput applications such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which combines qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, showing that the sensitivity of ME-qPCR is higher than that of the original qPCR. The absolute limit of detection for ME-qPCR was as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all parameters, a practical evaluation was performed with different foods. The amplification results, more stable than those of qPCR, showed that ME-qPCR is suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crop or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression studies of food or agricultural samples. Graphical abstract: In the first-step amplification, four primers (A, B, C, and D) are added to the reaction volume, generating four kinds of amplicons, each of which serves as a template for the second-step PCR. In the second-step amplification, three parallel reactions are run for the final evaluation, from which the final amplification curves and melting curves are obtained.

  6. Acoustic impedance matched buffers enable separation of bacteria from blood cells at high cell concentrations.

    PubMed

    Ohlsson, Pelle; Petersson, Klara; Augustsson, Per; Laurell, Thomas

    2018-06-14

    Sepsis is a common and often deadly systemic response to an infection, usually caused by bacteria. The gold standard for finding the causative pathogen in a blood sample is blood culture, which may take hours to days. Shortening the time to diagnosis would significantly reduce mortality. To replace the time-consuming blood culture, we are developing a method to directly separate bacteria from red and white blood cells to enable faster bacterial identification. The blood cells are moved from the sample flow into a parallel stream using acoustophoresis. Due to their smaller size, the bacteria are not affected by the acoustic field and therefore remain in the blood plasma flow and can be directed to a separate outlet. When optimizing for sample throughput, 1 ml of undiluted whole blood equivalent can be processed within 12.5 min, while maintaining bacteria recovery at 90% and blood cell removal above 99%. That makes this the fastest label-free microfluidic continuous-flow method per channel to separate bacteria from blood with high bacteria recovery (>80%). The high throughput was achieved by matching the acoustic impedance of the parallel stream to that of the blood sample, to prevent acoustic forces from relocating the fluid streams.

  7. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
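
    The scheduling idea behind DOVIS-style screens can be reduced to its core in a few lines: split a ligand library into chunks, one cluster job per chunk. The sketch below is a hedged illustration with hypothetical names and job counts; DOVIS itself wraps AutoDock and integrates with a Linux cluster queuing system.

    ```python
    # Illustrative chunking step for an embarrassingly parallel docking screen.
    import math

    def chunk(items, n_jobs):
        size = math.ceil(len(items) / n_jobs)
        return [items[i:i + size] for i in range(0, len(items), size)]

    ligands = [f"ligand_{i:07d}" for i in range(2_000_000)]   # hypothetical library
    jobs = chunk(ligands, n_jobs=256)
    # At the paper's 500-1,000 dockings per processor per day, 256 processors
    # work through these 2M compounds in roughly 8-16 days.
    print(len(jobs), "jobs of", len(jobs[0]), "ligands")
    ```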

  8. Crystal MD: The massively parallel molecular dynamics software for metal with BCC structure

    NASA Astrophysics Data System (ADS)

    Hu, Changjun; Bai, He; He, Xinfu; Zhang, Boyao; Nie, Ningming; Wang, Xianmeng; Ren, Yingwen

    2017-02-01

    Materials irradiation effects are among the most important issues for the use of nuclear power. However, the lack of high-throughput irradiation facilities and of knowledge about the evolution process leads to a limited understanding of the underlying issues. With the help of high-performance computing, we can gain a deeper understanding of materials at the micro level. In this paper, a new data structure is proposed for the massively parallel simulation of the evolution of metal materials in an irradiation environment. Based on the proposed data structure, we developed new molecular dynamics software named Crystal MD. Simulations with Crystal MD achieved over 90% parallel efficiency in test cases, and it takes more than 25% less memory on multi-core clusters than LAMMPS and IMD, two popular molecular dynamics simulation packages. Using Crystal MD, a two-trillion-particle simulation has been performed on the Tianhe-2 cluster.

  9. "First generation" automated DNA sequencing technology.

    PubMed

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  10. Parallel processing of embossing dies with ultrafast lasers

    NASA Astrophysics Data System (ADS)

    Jarczynski, Manfred; Mitra, Thomas; Brüning, Stephan; Du, Keming; Jenke, Gerald

    2018-02-01

    Functionalization of surfaces equips products and components with new features such as hydrophilic behavior, adjustable gloss level, light-management properties, etc. Small feature sizes demand diffraction-limited spots and a fluence adapted to each material. With the availability of high-power, fast-repeating ultrashort-pulsed lasers and efficient optical processing heads delivering diffraction-limited spot sizes of around 10 μm, it is feasible to achieve fluences higher than adequate patterning requires. Hence, parallel processing is becoming of interest to increase throughput and allow mass production of micro-machined surfaces. The first step on the roadmap of parallel processing for cylinder embossing dies was realized with an eight-spot processing head based on an ns fiber laser, with passive optical beam splitting, individual spot switching by acousto-optic modulation, and advanced imaging. Patterning of cylindrical embossing dies shows a high efficiency of nearly 80%, with diffraction-limited and equally spaced spots at pitches down to 25 μm, achieved by compression using cascaded prism arrays. Due to the nanosecond laser pulses, the ablation shows the surrounding material deposition typical of a hot process. In the next step, the processing head was adapted to a picosecond laser source: the 500 W fiber laser was replaced by an ultrashort-pulsed laser with 300 W, 12 ps pulses, and a repetition frequency of up to 6 MHz. This paper presents details of the processing head design and an analysis of ablation rates and patterns on steel, copper, and brass dies. Furthermore, it gives an outlook on scaling the parallel processing head from eight to 16 individually switched beamlets to increase processing throughput and optimize utilization of the available ultrashort-pulsed laser energy.

  11. Efficient high-throughput biological process characterization: Definitive screening design with the ambr250 bioreactor system.

    PubMed

    Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam

    2015-01-01

    The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.

  12. ClusCo: clustering and comparison of protein models.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej

    2013-02-22

    The development, optimization and validation of protein modeling methods require efficient tools for structural comparison. Frequently, a large number of models need to be compared with the target native structure. The main reason for the development of the ClusCo software was to create a high-throughput tool for all-versus-all comparison, because calculating the similarity matrix is one of the bottlenecks in the protein modeling pipeline. ClusCo is fast and easy-to-use software for high-throughput comparison of protein models with different similarity measures (cRMSD, dRMSD, GDT_TS, TM-Score, MaxSub, Contact Map Overlap) and clustering of the comparison results with standard methods: K-means clustering or hierarchical agglomerative clustering. The application is highly optimized and written in C/C++, including code for parallel execution on CPU and GPU, which results in a significant speedup over similar clustering and scoring programs.
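
    For readers unfamiliar with the all-versus-all step, the sketch below reproduces the idea with NumPy/SciPy on synthetic coordinates: compute a pairwise dRMSD matrix, then cluster it hierarchically. This is only the reference logic under assumed inputs; ClusCo itself is optimized C/C++/GPU code.

    ```python
    # All-versus-all dRMSD matrix plus hierarchical clustering (synthetic models).
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    models = rng.normal(size=(50, 100, 3))   # 50 models x 100 CA atoms x xyz

    def drmsd(a, b):
        # Distance-matrix RMSD: compare intramolecular distance matrices.
        da, db = pdist(a), pdist(b)
        return np.sqrt(np.mean((da - db) ** 2))

    n = len(models)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = drmsd(models[i], models[j])

    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=5, criterion="maxclust")
    print(np.bincount(labels))   # cluster sizes
    ```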

  13. Massively Parallel Rogue Cell Detection Using Serial Time-Encoded Amplified Microscopy of Inertially Ordered Cells in High-Throughput Flow

    DTIC Science & Technology

    2012-08-01

    techniques and STEAM imager. It couples the high-speed capability of the STEAM imager and differential phase contrast imaging of DIC/Nomarski microscopy...On 10 TPE chips, we obtained 9 homogeneous and strong bonds, the failed bond being due to operator error and the presence of air bubbles in the TPE...instruments, structural dynamics, and microelectromechanical systems (MEMS) via laser-scanning surface vibrometry, and observation of biomechanical motility

  14. High throughput parallel backside contacting and periodic texturing for high-efficiency solar cells

    DOEpatents

    Daniel, Claus; Blue, Craig A.; Ott, Ronald D.

    2014-08-19

    Disclosed are configurations of long-range ordered features of solar cell materials, and methods for forming same. Some features include electrical access openings through a backing layer to a photovoltaic material in the solar cell. Some features include textured features disposed adjacent a surface of a solar cell material. Typically the long-range ordered features are formed by ablating the solar cell material with a laser interference pattern from at least two laser beams.

  15. High-throughput NGL electron-beam direct-write lithography system

    NASA Astrophysics Data System (ADS)

    Parker, N. William; Brodie, Alan D.; McCoy, John H.

    2000-07-01

    Electron beam lithography systems have historically had low throughput. The only practical solution to this limitation is an approach using many beams writing simultaneously. For single-column multi-beam systems, including projection optics (SCALPEL® and PREVAIL) and blanked aperture arrays, throughput and resolution are limited by space-charge effects. Multi-beam micro-column (one beam per column) systems are limited by the need for low-voltage operation, electrical connection density, and fabrication complexities. In this paper, we discuss a new multi-beam concept employing multiple columns, each with multiple beams, to generate a very large total number of parallel writing beams. This overcomes the limitations of space-charge interactions and low-voltage operation. We also discuss a rationale leading to the optimum number of columns and beams per column. Using this approach we show how production throughputs >= 60 wafers per hour can be achieved at CDs
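
    The case for massive beam parallelism follows from a one-line charge budget: the aggregate beam current must deliver the resist dose over the patterned wafer area within the per-wafer time. The sketch below works this out with illustrative assumptions (dose, per-beam current, and pattern coverage are not values from the paper).

    ```python
    # Back-of-envelope beam count for 60 wafers/hour (all numbers assumed).
    import math

    dose = 30e-6                   # C/cm^2, resist sensitivity (assumed)
    beam_current = 10e-9           # A per beam (assumed)
    coverage = 0.5                 # fraction of wafer actually exposed (assumed)
    area = coverage * math.pi * 15.0 ** 2    # cm^2 on a 300 mm wafer
    wafers_per_hour = 60

    total_current = dose * area * wafers_per_hour / 3600.0   # aggregate amps
    print(math.ceil(total_current / beam_current), "beams")  # ~18,000 beams
    ```

    Tens of thousands of beams under these assumptions, which is why a many-column, many-beams-per-column architecture is attractive.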

  16. Simulating electron wave dynamics in graphene superlattices exploiting parallel processing advantages

    NASA Astrophysics Data System (ADS)

    Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel

    2018-01-01

    This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework which provides a portable, vendor agnostic and high throughput-performance solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios up to 75x when compared to multi-thread and multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.

  17. Classified one-step high-radix signed-digit arithmetic units

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.

    1998-08-01

    High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing, based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying neighboring input digit pairs into groups to reduce the number of computation rules. A new joint spatial encoding technique is developed to represent both the operands and the computation rules; this technique increases the spatial bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
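
    The property that makes one-step signed-digit units attractive is that addition is carry-free: each digit position emits a bounded transfer to its neighbor, so all positions can be computed in parallel. The following radix-4 sketch illustrates the principle in software; the digit set and decomposition rule are one common choice, not the paper's specific optical encoding.

    ```python
    # Carry-free radix-4 signed-digit addition: each position computes a
    # transfer t in {-1,0,1} and interim sum w in {-2..2} with a_i+b_i = 4t+w,
    # so the final digit w + t_in never leaves {-3..3} and no carry propagates.
    RADIX = 4

    def sd_add(a, b):
        # a, b: little-endian signed-digit operands, digits in {-3..3}
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        transfers, interims = [0], []
        for ai, bi in zip(a, b):
            z = ai + bi                            # in [-6, 6]
            t = 1 if z > 2 else (-1 if z < -2 else 0)
            interims.append(z - RADIX * t)         # in [-2, 2]
            transfers.append(t)
        return [w + t for w, t in zip(interims, transfers[:-1])] + [transfers[-1]]

    def sd_value(digits):
        return sum(d * RADIX ** i for i, d in enumerate(digits))

    a, b = [3, -2, 1], [2, 3, -3]
    assert sd_value(sd_add(a, b)) == sd_value(a) + sd_value(b)
    print(sd_add(a, b))   # every output digit stays within {-3..3}
    ```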

  18. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  19. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
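
    As a point of reference for the computation the controller must repeat at high sampling rates, a discrete Poisson solve on a regular periodic grid can be written in a few lines with a standard FFT-based spectral solver. This generic solver is a stand-in for illustration only, not the paper's Fast Invariant Imbedding algorithm, which is designed for better parallel scaling.

    ```python
    # Generic FFT-based Poisson solve, laplacian(u) = f, periodic boundaries.
    import numpy as np

    def poisson_solve(f):
        n = f.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n)
        kx, ky = np.meshgrid(k, k)
        denom = -(kx ** 2 + ky ** 2)
        denom[0, 0] = 1.0                 # avoid division by zero at the DC mode
        u_hat = np.fft.fft2(f) / denom
        u_hat[0, 0] = 0.0                 # fix the free constant: zero-mean solution
        return np.real(np.fft.ifft2(u_hat))

    rng = np.random.default_rng(1)
    f = rng.normal(size=(128, 128))
    f -= f.mean()                          # solvability: zero-mean source
    print(poisson_solve(f).shape)
    ```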

  20. High-throughput electrophysiological assays for voltage gated ion channels using SyncroPatch 768PE.

    PubMed

    Li, Tianbo; Lu, Gang; Chiang, Eugene Y; Chernov-Rogan, Tania; Grogan, Jane L; Chen, Jun

    2017-01-01

    Ion channels regulate a variety of physiological processes and represent an important class of drug target. Among the many methods of studying ion channel function, patch clamp electrophysiology is considered the gold standard, providing the ultimate precision and flexibility. However, its utility in ion channel drug discovery is impeded by low throughput. Additionally, characterization of endogenous ion channels in primary cells remains technically challenging. In recent years, many automated patch clamp (APC) platforms have been developed to overcome these challenges, albeit with varying throughput, data quality, and success rates. In this study, we utilized the SyncroPatch 768PE, one of the latest generation of APC platforms, which conducts parallel recording from two 384-channel modules with giga-seal data quality, to push these two boundaries. By optimizing various cell patching parameters and a two-step voltage protocol, we developed a high-throughput APC assay for the voltage-gated sodium channel Nav1.7. By testing the IC50 values of a group of Nav1.7 reference compounds, this assay proved highly consistent with manual patch clamp (R > 0.9). In a pilot screening of 10,000 compounds, the success rate, defined by >500 MΩ seal resistance and >500 pA peak current, was 79%. The assay was robust, with a daily throughput of ~6,000 data points and a Z' factor of 0.72. Using the same platform, we also successfully recorded the endogenous voltage-gated potassium channel Kv1.3 in primary T cells. Together, our data suggest that the SyncroPatch 768PE provides a powerful platform for ion channel research and drug discovery.
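
    The manual-versus-automated comparison above rests on fitting concentration-response curves. A hedged sketch of that fit on synthetic data follows, using a standard Hill equation; the compound potency and noise level are made-up values, not data from the paper.

    ```python
    # Hill-equation IC50 fit on synthetic concentration-response data.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, ic50, h):
        # Fraction of current remaining at concentration c.
        return 1.0 / (1.0 + (c / ic50) ** h)

    conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])          # molar
    rng = np.random.default_rng(2)
    response = hill(conc, 3e-7, 1.2) + rng.normal(0, 0.02, 5)  # noisy "data"

    (ic50, h), _ = curve_fit(hill, conc, response, p0=(1e-7, 1.0))
    print(f"IC50 = {ic50:.2e} M, Hill slope = {h:.2f}")
    ```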

  1. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  2. A Protocol for Functional Assessment of Whole-Protein Saturation Mutagenesis Libraries Utilizing High-Throughput Sequencing.

    PubMed

    Stiffler, Michael A; Subramanian, Subu K; Salinas, Victor H; Ranganathan, Rama

    2016-07-03

    Site-directed mutagenesis has long been used as a method to interrogate protein structure, function and evolution. Recent advances in massively parallel sequencing technology have opened up the possibility of assessing the functional or fitness effects of large numbers of mutations simultaneously. Here, we present a protocol for experimentally determining the effects of all possible single amino acid mutations in a protein of interest utilizing high-throughput sequencing technology, using the 263-amino-acid antibiotic resistance enzyme TEM-1 β-lactamase as an example. In this approach, a whole-protein saturation mutagenesis library is constructed by site-directed mutagenic PCR, randomizing each position individually to all possible amino acids. The library is then transformed into bacteria and selected for the ability to confer resistance to β-lactam antibiotics. The fitness effect of each mutation is then determined by deep sequencing of the library before and after selection. Importantly, this protocol introduces methods that maximize sequencing read depth and permit the simultaneous selection of the entire mutation library, by mixing adjacent positions into groups whose length is accommodated by the high-throughput sequencing read length and by utilizing orthogonal primers to barcode each group. Representative results using this protocol are provided by assessing the fitness effects of all single amino acid mutations in TEM-1 at a clinically relevant dosage of ampicillin. The method should be easily extendable to other proteins for which a high-throughput selection assay is in place.
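
    The fitness readout at the core of such protocols is a normalized log ratio of read counts before and after selection. A minimal sketch of that arithmetic, with illustrative counts and an assumed pseudocount (the protocol's exact normalization may differ):

    ```python
    # Mutant fitness as a wild-type-normalized log ratio of sequencing counts.
    import math

    def fitness(count_sel, count_in, wt_sel, wt_in, pseudo=0.5):
        # Pseudocounts guard against zeros in the deep-sequencing counts.
        return math.log2(((count_sel + pseudo) / (wt_sel + pseudo)) /
                         ((count_in + pseudo) / (wt_in + pseudo)))

    print(fitness(count_sel=20, count_in=500, wt_sel=4000, wt_in=5000))   # ~-4.3, deleterious
    print(fitness(count_sel=900, count_in=1000, wt_sel=4000, wt_in=5000))  # ~0.2, near-neutral
    ```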

  3. eRNA: a graphic user interface-based tool optimized for large data analysis from high-throughput RNA sequencing

    PubMed Central

    2014-01-01

    Background RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. Results We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module “miRNA identification” includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module “mRNA identification” includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module “Target screening” provides expression profiling analyses and graphic visualization. The module “Self-testing” offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs, including Bowtie, miRDeep2, and miRspring, extends the program’s functionality. Conclusions eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory. PMID:24593312

  4. eRNA: a graphic user interface-based tool optimized for large data analysis from high-throughput RNA sequencing.

    PubMed

    Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang

    2014-03-05

    RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity of performing parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers the directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs, including Bowtie, miRDeep2, and miRspring, extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.

  5. Accelerating research into bio-based FDCA-polyesters by using small scale parallel film reactors.

    PubMed

    Gruter, Gert-Jan M; Sipos, Laszlo; Adrianus Dam, Matheus

    2012-02-01

    High-throughput experimentation is well established today as a tool in early-stage catalyst development and in catalyst and process scale-up. One of the more challenging areas of catalytic research is polymer catalysis. The main difference from most non-polymer catalytic conversions is that the product is not a well-defined molecule, and catalytic performance cannot easily be expressed in terms of catalyst activity and selectivity alone. In polymerization reactions, the polymer chains formed can have various lengths (resulting in a molecular weight distribution rather than a defined molecular weight), different compositions (when random or block co-polymers are produced), cross-linking (often significantly affecting physical properties), different endgroups (often affecting subsequent processing steps), and several other variations. In addition, for polyolefins, mass and heat transfer, oxygen and moisture sensitivity, stereoregularity, and many other intrinsic features make relevant high-throughput screening in this field an incredible challenge. For polycondensation reactions performed in the melt, the viscosity often becomes high already at modest molecular weights, which greatly influences mass transfer of the condensation product (often water or methanol). When reactions become mass-transfer limited, catalyst performance comparison is often no longer relevant. This, however, does not mean that relevant experiments for these application areas cannot be performed on a small scale. Relevant catalyst screening experiments for polycondensation reactions can be performed in very efficient small-scale parallel equipment. Both transesterification and polycondensation, as well as post-condensation through solid-stating in parallel equipment, have been developed. Next to polymer synthesis, polymer characterization also needs to be accelerated, without making concessions to quality, in order to draw relevant conclusions.

  6. Massively Parallel Nanostructure Assembly Strategies for Sensing and Information Technology. Phase 2

    DTIC Science & Technology

    2013-05-25

    field. This work has focused on the synthesis of new functional materials and the development of high-throughput, facile methods to assemble...Hong (Seoul National University, Korea). Specifically, gapped nanowires (GNW) were identified as candidate materials for synthesis and assembly as...Throughout the course of this grant, we reported major accomplishments both in the synthesis and assembly of such structures. Synthetically, we report three

  7. Noninvasive prenatal screening for fetal common sex chromosome aneuploidies from maternal blood.

    PubMed

    Zhang, Bin; Lu, Bei-Yi; Yu, Bin; Zheng, Fang-Xiu; Zhou, Qin; Chen, Ying-Ping; Zhang, Xiao-Qing

    2017-04-01

    Objective To explore the feasibility of high-throughput massively parallel genomic DNA sequencing technology for the noninvasive prenatal detection of fetal sex chromosome aneuploidies (SCAs). Methods The study enrolled pregnant women who were prepared to undergo noninvasive prenatal testing (NIPT) in the second trimester. Cell-free fetal DNA (cffDNA) was extracted from the mother's peripheral venous blood and a high-throughput sequencing procedure was undertaken. Patients identified as having pregnancies associated with SCAs were offered prenatal fetal chromosomal karyotyping. Results The study enrolled 10 275 pregnant women who were prepared to undergo NIPT. Of these, 57 pregnant women (0.55%) showed fetal SCA, including 27 with Turner syndrome (45,X), eight with Triple X syndrome (47,XXX), 12 with Klinefelter syndrome (47,XXY) and three with 47,XYY. Thirty-three pregnant women agreed to undergo fetal karyotyping and 18 had results consistent with NIPT, while 15 patients received a normal karyotype result. The overall positive predictive value of NIPT for detecting SCAs was 54.54% (18/33) and for detecting Turner syndrome (45,X) was 29.41% (5/17). Conclusion NIPT can be used to identify fetal SCAs by analysing cffDNA using massively parallel genomic sequencing, although the accuracy needs to be improved particularly for Turner syndrome (45,X).
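
    The paper does not publish its scoring details, but chromosome-level calls in massively parallel sequencing NIPT pipelines are commonly made by comparing a sample's per-chromosome read fraction against a euploid reference panel via a z-score. The sketch below illustrates that logic with synthetic numbers; the fractions, panel size, and implied threshold are assumptions.

    ```python
    # Z-score sketch of chromosome-level NIPT calling (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    ref_chrX_frac = rng.normal(0.0475, 0.0004, 200)   # euploid reference panel

    def z_score(sample_frac, ref):
        return (sample_frac - ref.mean()) / ref.std()

    z = z_score(0.0450, ref_chrX_frac)
    print(round(z, 1))   # strongly negative -> reduced chrX, 45,X suspected
    ```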

  8. Streamlined approach to high-quality purification and identification of compound series using high-resolution MS and NMR.

    PubMed

    Mühlebach, Anneke; Adam, Joachim; Schön, Uwe

    2011-11-01

    Automated medicinal chemistry (parallel chemistry) has become an integral part of the drug-discovery process in almost every large pharmaceutical company. Parallel array synthesis of individual organic compounds has been used extensively to generate diverse structural libraries to support different phases of the drug-discovery process, such as hit-to-lead, lead finding, or lead optimization. In order to guarantee effective project support, efficiency in the production of compound libraries has been maximized. As a consequence, throughput in chromatographic purification and analysis has been adapted as well. As a recent trend, more laboratories are preparing smaller, yet more focused libraries with ever-increasing demands on quality, i.e. optimal purity and unambiguous confirmation of identity. This paper presents an automated approach to combining effective purification and structural confirmation of a lead optimization library created by microwave-assisted organic synthesis. The results of complementary analytical techniques such as UHPLC-HRMS and NMR are not only regarded but merged for fast and easy decision making, providing optimal quality of the compound stock. In comparison with previous procedures, throughput times are at least four times faster, while compound consumption could be decreased more than threefold. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Massively parallel haplotyping on microscopic beads for the high-throughput phase analysis of single molecules.

    PubMed

    Boulanger, Jérôme; Muresan, Leila; Tiemann-Boege, Irene

    2012-01-01

    In spite of the many advances in haplotyping methods, it is still very difficult to characterize rare haplotypes in tissues and different environmental samples or to accurately assess haplotype diversity in large mixtures. This would require a haplotyping method capable of analyzing the phase of single molecules with an unprecedented throughput. Here we describe such a haplotyping method, capable of analyzing hundreds of thousands of single molecules in parallel in one experiment. In this method, multiple PCR reactions amplify different polymorphic regions of a single DNA molecule on a magnetic bead compartmentalized in an emulsion drop. The allelic states of the amplified polymorphisms are identified with fluorescently labeled probes that are then decoded from images taken of the arrayed beads by a microscope. This method can evaluate the phase of up to 3 polymorphisms separated by up to 5 kilobases in hundreds of thousands of single molecules. We tested the sensitivity of the method by measuring the number of mutant haplotypes synthesized by four different commercially available enzymes: Phusion, Platinum Taq, Titanium Taq, and Phire. The digital nature of the method makes it highly sensitive, detecting haplotype ratios of less than 1:10,000. We also accurately quantified chimera formation during the exponential phase of PCR by different DNA polymerases.
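
    The claimed sensitivity follows from the digital, bead-counting nature of the readout: with hundreds of thousands of beads, a 1:10,000 haplotype is still expected on dozens of beads. A quick binomial check, with the bead count and detection threshold assumed for illustration:

    ```python
    # Detection power of a digital bead count at a 1:10,000 haplotype frequency.
    from scipy.stats import binom

    n_beads, freq, min_hits = 300_000, 1e-4, 5   # assumed experiment parameters
    # Expected mutant beads: n_beads * freq = 30, so requiring >= 5 is easy.
    p_detect = 1.0 - binom.cdf(min_hits - 1, n_beads, freq)
    print(f"P(>= {min_hits} mutant beads) = {p_detect:.6f}")   # essentially 1
    ```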

  10. A 48Cycles/MB H.264/AVC Deblocking Filter Architecture for Ultra High Definition Applications

    NASA Astrophysics Data System (ADS)

    Zhou, Dajiang; Zhou, Jinjia; Zhu, Jiayi; Goto, Satoshi

    In this paper, a highly parallel deblocking filter architecture for H.264/AVC is proposed that processes one macroblock in 48 clock cycles and gives real-time support to QFHD@60fps sequences at less than 100 MHz. Four edge filters organized in two groups, simultaneously processing vertical and horizontal edges, are applied in this architecture to enhance its throughput. As parallelism increases, pipeline hazards arise owing to the latency of the edge filters and the data dependencies of the deblocking algorithm. To solve this problem, a zig-zag processing schedule is proposed to eliminate the pipeline bubbles. The data path of the architecture is then derived according to the processing schedule and optimized through data-flow merging, so as to minimize the cost of logic and internal buffers. Meanwhile, the architecture's data input rate is designed to be identical to its throughput, and the transmission order of input data matches the zig-zag processing schedule. Therefore, no intercommunication buffer is required between the deblocking filter and the preceding component for speed matching or data reordering. As a result, only one 24×64 two-port SRAM is required as internal buffer in this design. When synthesized with the SMIC 130 nm process, the architecture has a gate count of 30.2k, which is competitive considering its high performance.
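
    The real-time claim can be verified with a two-line cycle budget using only the figures stated above:

    ```python
    # Cycle budget for 48 cycles/MB at QFHD (3840x2160), 60 fps.
    mb_per_frame = (3840 // 16) * (2160 // 16)   # 32,400 macroblocks per frame
    required_hz = mb_per_frame * 60 * 48         # cycles per second
    print(required_hz / 1e6, "MHz")              # 93.312 MHz, under the 100 MHz target
    ```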

  11. Thin-film-transistor array: an exploratory attempt for high throughput cell manipulation using electrowetting principle

    NASA Astrophysics Data System (ADS)

    Shaik, F. Azam; Cathcart, G.; Ihida, S.; Lereau-Bernier, M.; Leclerc, E.; Sakai, Y.; Toshiyoshi, H.; Tixier-Mita, A.

    2017-05-01

    In lab-on-a-chip (LoC) devices, microfluidic displacement of liquids is a key component. Electrowetting on dielectric (EWOD) is a technique to move fluids, with the advantage of not requiring channels, pumps, or valves. Fluids are discretized into droplets on microelectrodes and moved by applying an electric field via the electrodes to manipulate the contact angle. Micro-objects, such as biological cells, can be transported inside these droplets. However, the design of conventional microelectrodes, made by standard microfabrication techniques, fixes the path of the droplets, limiting the reconfigurability of paths and thus the parallel processing of droplets. In that respect, thin-film transistor (TFT) technology presents a great opportunity, as it allows infinitely reconfigurable paths with high parallelizability. We propose here to investigate the possibility of using TFT array devices for high-throughput cell manipulation using EWOD. A COMSOL-based 2D simulation coupled with a MATLAB algorithm was used to simulate the contact angle modulation, displacement, and mixing of droplets. These simulations were confirmed by experimental results. The EWOD technique was applied to a droplet of culture medium containing HepG2 carcinoma cells and demonstrated no negative effects on the viability of the cells. This confirms the possibility of applying EWOD techniques to cellular applications, such as parallel cell analysis.

  12. Chemiluminescence analyzer of NOx as a high-throughput screening tool in selective catalytic reduction of NO

    PubMed Central

    Oh, Kwang Seok; Woo, Seong Ihl

    2011-01-01

    A chemiluminescence-based analyzer of NOx gas species has been applied for high-throughput screening of a library of catalytic materials. The applicability of the commercial NOx analyzer as a rapid screening tool was evaluated using selective catalytic reduction of NO gas. A library of 60 binary alloys composed of Pt and Co, Zr, La, Ce, Fe or W on an Al2O3 substrate was tested for NOx removal efficiency using a home-built 64-channel parallel and sequential tubular reactor. The NOx concentrations measured by the NOx analyzer agreed well with the results obtained using micro gas chromatography for a reference catalyst consisting of 1 wt% Pt on γ-Al2O3. Most alloys showed high efficiency at 275 °C, which is typical of Pt-based catalysts for selective catalytic reduction of NO. The screening with the NOx analyzer allowed the selection of Pt-Ce(X) (X=1–3) and Pt–Fe(2) as the optimal catalysts for NOx removal: 73% NOx conversion was achieved with the Pt–Fe(2) alloy, which was much better than the results for the reference catalyst and the other library alloys. This study demonstrates a sequential high-throughput method for the practical evaluation of catalysts for the selective reduction of NO. PMID:27877438

  13. web cellHTS2: a web-application for the analysis of high-throughput screening data.

    PubMed

    Pelz, Oliver; Gilsdorf, Moritz; Boutros, Michael

    2010-04-12

    The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. The software guides the user through the configuration steps that are required for the analysis of single or multi-channel experiments. The web-application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. The implemented web-application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.
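
    cellHTS2 itself is an R/Bioconductor package offering several normalization options; for readers who want the gist of the per-plate scoring and ranking step, the Python sketch below mimics one common choice (a robust z-score) on synthetic data. It is an illustration of the idea, not the package's actual methods.

    ```python
    # Robust per-plate z-scores and a ranked hit list on a synthetic 384-well plate.
    import numpy as np

    def robust_z(plate):
        med = np.median(plate)
        mad = 1.4826 * np.median(np.abs(plate - med))   # MAD scaled to sigma
        return (plate - med) / mad

    rng = np.random.default_rng(4)
    plate = rng.normal(100.0, 10.0, 384)   # one plate of readout values
    plate[7] = 35.0                        # a strong knockdown phenotype
    scores = robust_z(plate)
    ranked = np.argsort(scores)            # most negative wells first
    print(ranked[:5], np.round(scores[ranked[:5]], 2))
    ```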

  14. A high-throughput approach to profile RNA structure.

    PubMed

    Delli Ponti, Riccardo; Marti, Stefanie; Armaos, Alexandros; Tartaglia, Gian Gaetano

    2017-03-17

    Here we introduce the Computational Recognition of Secondary Structure (CROSS) method to calculate the structural profile of an RNA sequence (single- or double-stranded state) at single-nucleotide resolution and without sequence length restrictions. We trained CROSS using data from high-throughput experiments such as Selective 2΄-Hydroxyl Acylation analyzed by Primer Extension (SHAPE; Mouse and HIV transcriptomes) and Parallel Analysis of RNA Structure (PARS; Human and Yeast transcriptomes), as well as high-quality NMR/X-ray structures (PDB database). The algorithm uses primary structure information alone to predict experimental structural profiles with >80% accuracy, showing high performance on large RNAs such as Xist (17,900 nucleotides; Area Under the ROC Curve (AUC) of 0.75 on dimethyl sulfate (DMS) experiments). We integrated CROSS into thermodynamics-based methods to predict secondary structure and observed an increase in their predictive power by up to 30%. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. The use of coded PCR primers enables high-throughput sequencing of multiple homolog amplification products by 454 parallel sequencing.

    PubMed

    Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P; Panitz, Frank; Bendixen, Christian; Nielsen, Rasmus; Willerslev, Eske

    2007-02-14

    The invention of the Genome Sequence 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot be subsequently assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing through the high-throughput Genome Sequence 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5' tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (mis-assignment rate < 0.4%). Therefore, the method enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5' primer tagging is a useful method by which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products. The new approach will be of value to a broad range of research areas, such as comparative genomics, complete mitochondrial analyses, population genetics, and phylogenetics.
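
    The downstream assignment step this tagging strategy enables is a simple prefix lookup. A toy sketch, with invented tag sequences and specimen names:

    ```python
    # Demultiplexing reads by their 5' tag (tags and reads are illustrative).
    TAGS = {"ACGT": "specimen_1", "TGCA": "specimen_2", "CAGT": "specimen_3"}
    TAG_LEN = 4

    def assign(read: str) -> str:
        return TAGS.get(read[:TAG_LEN], "unassigned")

    for read in ["ACGTGGATTACA", "TGCAGGATTACA", "NNNNGGATTACA"]:
        print(assign(read), read)
    ```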

  16. Controlling high-throughput manufacturing at the nano-scale

    NASA Astrophysics Data System (ADS)

    Cooper, Khershed P.

    2013-09-01

    Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, the economy, and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications—materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high-quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication, and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes, guided by physics-based models and driven by appropriate instrumentation, sensing, and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.

  17. Optima MDxt: A high throughput 335 keV mid-dose implanter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisner, Edward; David, Jonathan; Justesen, Perry

    2012-11-06

    The continuing demand for both energy purity and implant angle control along with high wafer throughput drove the development of the Axcelis Optima MDxt mid-dose ion implanter. The system utilizes electrostatic scanning, an electrostatic parallelizing lens, and an electrostatic energy filter to produce energetically pure beams with high angular integrity. Based on field-proven components, the Optima MDxt beamline architecture offers the high beam currents possible with singly charged species, including arsenic at energies up to 335 keV, as well as large currents from multiply charged species at energies extending over 1 MeV. Conversely, the excellent energy filtering capability allows high currents at low beam energies, since it is safe to utilize large deceleration ratios. This beamline is coupled with the >500 WPH capable endstation technology used on the Axcelis Optima XEx high-energy ion implanter. The endstation includes in-situ angle measurements of the beam in order to maintain excellent beam-to-wafer implant angle control in both the horizontal and vertical directions. The Optima platform control system provides a new-generation dose control system that assures excellent dosimetry and charge control. This paper will describe the features and technologies that allow the Optima MDxt to provide superior process performance at the highest wafer throughput, and will provide examples of the process performance achievable.

  18. Optimisation of insect cell growth in deep-well blocks: development of a high-throughput insect cell expression screen.

    PubMed

    Bahia, Daljit; Cheung, Robert; Buchs, Mirjam; Geisse, Sabine; Hunt, Ian

    2005-01-01

    This report describes a method to culture insect cells in 24-well deep-well blocks for the routine small-scale optimisation of baculovirus-mediated protein expression experiments. Miniaturisation of this process provides the necessary reduction in resource allocation, reagents, and labour to allow extensive and rapid optimisation of expression conditions, with a concomitant reduction in lead time before commencement of large-scale bioreactor experiments. This greatly simplifies the optimisation process and allows the use of liquid-handling robotics in much of the initial optimisation stages, thereby greatly increasing the throughput of the laboratory. We present several examples of the use of deep-well block expression studies in the optimisation of therapeutically relevant protein targets. We also discuss how the enhanced throughput offered by this approach can be adapted to robotic handling systems and the implications this has for the capacity to conduct multi-parallel protein expression studies.

  19. MS-REDUCE: an ultrafast technique for reduction of big mass spectrometry data for high-throughput processing.

    PubMed

    Awan, Muaaz Gul; Saeed, Fahad

    2016-05-15

    Modern proteomics studies utilize high-throughput mass spectrometers which can produce data at an astonishing rate. These big mass spectrometry (MS) datasets can easily reach peta-scale level, creating storage and analytic problems for large-scale systems biology studies. Each spectrum consists of thousands of peaks which have to be processed to deduce the peptide. However, only a small percentage of peaks in a spectrum are useful for peptide deduction, as most of the peaks are either noise or not useful for a given spectrum. This redundant processing of non-useful peaks is a bottleneck for streaming high-throughput processing of big MS data. One way to reduce the amount of computation required in a high-throughput environment is to eliminate non-useful peaks. Existing noise-removal algorithms are limited in their data-reduction capability and are compute intensive, making them unsuitable for big data and high-throughput environments. In this paper we introduce a novel low-complexity technique based on classification, quantization and sampling of MS peaks. We present a novel data-reductive strategy for analysis of big MS data. Our algorithm, called MS-REDUCE, is capable of eliminating noisy peaks as well as peaks that do not contribute to peptide deduction before any peptide deduction is attempted. Our experiments have shown up to 100× speedup over existing state-of-the-art noise elimination algorithms while maintaining comparably high-quality matches. Using our approach we were able to process a million spectra in just under an hour on a moderate server. The developed tool and strategy have been made available to the wider proteomics and parallel computing community; the code can be found at https://github.com/pcdslab/MSREDUCE. Contact: fahad.saeed@wmich.edu. Supplementary data are available at Bioinformatics online.
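
    The classification, quantization and sampling strategy lends itself to a compact illustration. The sketch below (plain Python, with invented thresholds and function names, not the authors' implementation) quantizes a peak list into intensity classes and samples more aggressively from the low-intensity classes, which are the most noise-dominated.

        # Illustrative peak reduction: quantize peaks by intensity class, then
        # subsample each class, keeping proportionally more intense peaks.
        import random

        def reduce_spectrum(peaks, keep_fraction=0.2, n_classes=4):
            """peaks: list of (mz, intensity); returns a reduced peak list."""
            if not peaks:
                return []
            lo = min(i for _, i in peaks)
            hi = max(i for _, i in peaks)
            width = (hi - lo) / n_classes or 1.0
            classes = [[] for _ in range(n_classes)]
            for mz, inten in peaks:                    # quantization step
                idx = min(int((inten - lo) / width), n_classes - 1)
                classes[idx].append((mz, inten))
            reduced = []
            for idx, cls in enumerate(classes):        # sampling step
                if not cls:
                    continue
                frac = keep_fraction * (idx + 1) / n_classes
                k = max(1, int(len(cls) * frac))
                reduced.extend(random.sample(cls, k))
            return sorted(reduced)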

  20. Control structures for high speed processors

    NASA Technical Reports Server (NTRS)

    Maki, G. K.; Mankin, R.; Owsley, P. A.; Kim, G. M.

    1982-01-01

    A special processor was designed to function as a Reed-Solomon decoder with a throughput data rate in the MHz range. This data rate is significantly greater than is possible with conventional digital architectures. To achieve this rate, the processor design includes sequential, pipelined, distributed, and parallel processing. The processor was designed using a high-level register transfer language (RTL). The RTL can be used to describe how the different processes are implemented by the hardware. One problem of special interest was the development of dependent processes, which are analogous to software subroutines. For greater flexibility, the RTL control structure was implemented in ROM. The special-purpose hardware required approximately 1000 SSI and MSI components. The data rate throughput is 2.5 megabits/second, achieved through the use of pipelined and distributed processing. This can be compared with 800 kilobits/second in a recently proposed very large scale integration design of a Reed-Solomon encoder.

  1. High-throughput detection of ethanol-producing cyanobacteria in a microdroplet platform.

    PubMed

    Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G; Abell, Chris

    2015-05-06

    Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations, including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, was used to create this new platform to screen genetically engineered strains of cyanobacteria with different levels of ethanol production.

  2. Observing with HST V: Improvements to the Scheduling of HST Parallel Observations

    NASA Astrophysics Data System (ADS)

    Taylor, D. K.; Vanorsow, D.; Lucks, M.; Henry, R.; Ratnatunga, K.; Patterson, A.

    1994-12-01

    Recent improvements to the Hubble Space Telescope (HST) ground system have significantly increased the frequency of pure parallel observations, i.e. the simultaneous use of multiple HST instruments by different observers. Opportunities for parallel observations are limited by a variety of timing, hardware, and scientific constraints. Formerly, such opportunities were heuristically predicted prior to the construction of the primary schedule (or calendar), and lack of complete information resulted in high rates of scheduling failures and missed opportunities. In the current process the search for parallel opportunities is delayed until the primary schedule is complete, at which point new software tools are employed to identify places where parallel observations are supported. The result has been a considerable increase in parallel throughput. A new technique, known as "parallel crafting," is currently under development to further streamline the parallel scheduling process. This radically new method will replace the standard exposure logsheet with a set of abstract rules from which observation parameters will be constructed "on the fly" to best match the constraints of the parallel opportunity. Currently, parallel observers must specify a huge (and highly redundant) set of exposure types in order to cover all possible types of parallel opportunities. Crafting rules permit the observer to express timing, filter, and splitting preferences in a far more succinct manner. The issue of coordinated parallel observations (same PI using different instruments simultaneously), long a troublesome aspect of the ground system, is also being addressed. For Cycle 5, the Phase II Proposal Instructions now have an exposure-level PAR WITH special requirement. While only the primary's alignment will be scheduled on the calendar, new commanding will provide for parallel exposures with both instruments.

  3. Optimizing Crawler4j using MapReduce Programming Model

    NASA Astrophysics Data System (ADS)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2017-06-01

    The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information and users can query over the web pages. Secondly, they are used for web archiving, where web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be improved by using the capabilities of modern parallel processing technologies. In order to address parallelism and the throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements with respect to performance and throughput. Hence the proposed approach intends to carve out a new methodology towards optimizing web crawling by achieving significant performance gain.
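
    The map/reduce decomposition at the heart of this approach is easy to show in miniature. The fragment below is a toy pure-Python stand-in (the paper's implementation runs Crawler4j tasks under Hadoop MapReduce): mappers turn URL partitions into (domain, count) pairs, and a reducer aggregates by key.

        # Toy MapReduce-style crawl accounting: map URL partitions to
        # (domain, 1) pairs in parallel, then reduce by key.
        from collections import defaultdict
        from multiprocessing import Pool
        from urllib.parse import urlparse

        def map_partition(urls):
            # A real mapper would fetch and parse each page here.
            return [(urlparse(u).netloc, 1) for u in urls]

        def reduce_pairs(mapped):
            counts = defaultdict(int)
            for pairs in mapped:
                for key, value in pairs:
                    counts[key] += value
            return dict(counts)

        if __name__ == "__main__":
            seeds = ["http://example.com/a", "http://example.com/b",
                     "http://example.org/x", "http://example.net/y"]
            partitions = [seeds[i::2] for i in range(2)]
            with Pool(2) as pool:
                print(reduce_pairs(pool.map(map_partition, partitions)))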

  4. Measurements of file transfer rates over dedicated long-haul connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Settlemyer, Bradley W; Imam, Neena

    2016-01-01

    Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rate and low competing traffic are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insights into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round-trip times (RTTs) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file IO throughput on host systems is measured using xddprof. We consider two file system configurations: Lustre over an IB network, and xfs over SSD connected to the PCI bus. Files are transferred using xdd across these connections, and the transfer rates are measured, which indicate the need to jointly optimize the connection and host file IO parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) peak file transfer rate is lower than peak connection and host IO throughput, in some cases reaching only 50% of it or less, (ii) xdd request sizes that achieve peak throughput for host file IO do not necessarily lead to peak file transfer rates, and (iii) parallelism in host IO and TCP transport does not always improve the file transfer rates.
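
    A measurement loop in the spirit of the memory-to-memory throughput tests is sketched below, assuming classic iperf2-style command-line flags (-c client mode, -P parallel streams, -t duration) and a hypothetical remote host name; an iperf server must already be running on the far end.

        # Probe how the number of parallel TCP streams affects throughput.
        import subprocess

        def run_iperf(host, streams, seconds=10):
            cmd = ["iperf", "-c", host, "-P", str(streams), "-t", str(seconds)]
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return out.stdout  # parse the bandwidth summary lines as needed

        if __name__ == "__main__":
            for p in (1, 2, 4, 8):
                print(f"--- {p} parallel stream(s) ---")
                print(run_iperf("remote-dtn.example.org", p))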

  5. Genecentric: a package to uncover graph-theoretic structure in high-throughput epistasis data.

    PubMed

    Gallant, Andrew; Leiserson, Mark D M; Kachalov, Maxim; Cowen, Lenore J; Hescott, Benjamin J

    2013-01-18

    New technology has resulted in high-throughput screens for pairwise genetic interactions in yeast and other model organisms. For each pair in a collection of non-essential genes, an epistasis score is obtained, representing how much sicker (or healthier) the double-knockout organism will be compared to what would be expected from the sickness of the component single knockouts. Recent algorithmic work has identified graph-theoretic patterns in this data that can indicate functional modules, and even sets of genes that may occur in compensatory pathways, such as a BPM-type schema first introduced by Kelley and Ideker. However, to date, any algorithms for finding such patterns in the data were implemented internally, with no software being made publicly available. Genecentric is a new package that implements a parallelized version of the Leiserson et al. algorithm (J Comput Biol 18:1399-1409, 2011) for generating generalized BPMs from high-throughput genetic interaction data. Given a matrix of weighted epistasis values for a set of double knock-outs, Genecentric returns a list of generalized BPMs that may represent compensatory pathways. Genecentric also has an extension, GenecentricGO, to query FuncAssociate (Bioinformatics 25:3043-3044, 2009) to retrieve GO enrichment statistics on generated BPMs. Python is the only dependency, and our web site provides working examples and documentation. We find that Genecentric can be used to find coherent functional and perhaps compensatory gene sets from high throughput genetic interaction data. Genecentric is made freely available for download under the GPLv2 from http://bcb.cs.tufts.edu/genecentric.

  6. Genecentric: a package to uncover graph-theoretic structure in high-throughput epistasis data

    PubMed Central

    2013-01-01

    Background New technology has resulted in high-throughput screens for pairwise genetic interactions in yeast and other model organisms. For each pair in a collection of non-essential genes, an epistasis score is obtained, representing how much sicker (or healthier) the double-knockout organism will be compared to what would be expected from the sickness of the component single knockouts. Recent algorithmic work has identified graph-theoretic patterns in this data that can indicate functional modules, and even sets of genes that may occur in compensatory pathways, such as a BPM-type schema first introduced by Kelley and Ideker. However, to date, any algorithms for finding such patterns in the data were implemented internally, with no software being made publicly available. Results Genecentric is a new package that implements a parallelized version of the Leiserson et al. algorithm (J Comput Biol 18:1399-1409, 2011) for generating generalized BPMs from high-throughput genetic interaction data. Given a matrix of weighted epistasis values for a set of double knock-outs, Genecentric returns a list of generalized BPMs that may represent compensatory pathways. Genecentric also has an extension, GenecentricGO, to query FuncAssociate (Bioinformatics 25:3043-3044, 2009) to retrieve GO enrichment statistics on generated BPMs. Python is the only dependency, and our web site provides working examples and documentation. Conclusion We find that Genecentric can be used to find coherent functional and perhaps compensatory gene sets from high throughput genetic interaction data. Genecentric is made freely available for download under the GPLv2 from http://bcb.cs.tufts.edu/genecentric. PMID:23331614

  7. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput can be found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
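
    For the simplest case of a linear pipeline, the throughput objective (minimize the bottleneck stage time subject to a processor budget) admits a short dynamic program of the O(np²) flavor described above. The sketch below is an illustration under assumed inputs, not the paper's algorithm: times[i][k-1] is stage i's measured time on k processors, here faked with a 1/k speedup model.

        # DP: allocate p processors across pipeline stages to minimize the
        # bottleneck stage time (equivalently, maximize pipeline throughput).
        import math

        def best_bottleneck(times, p):
            """times[i][k-1] = stage i's time with k processors (k = 1..p)."""
            INF = math.inf
            best = [0.0] + [INF] * p   # best[q]: min bottleneck using q procs
            for stage in times:
                new = [INF] * (p + 1)
                for q in range(1, p + 1):
                    for k in range(1, q + 1):
                        if best[q - k] < INF:
                            new[q] = min(new[q], max(stage[k - 1], best[q - k]))
                best = new
            return best[p]

        if __name__ == "__main__":
            p = 6
            times = [[w / k for k in range(1, p + 1)] for w in (4.0, 9.0, 2.0)]
            print(best_bottleneck(times, p))   # 3.0 for this toy instance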

  8. Dissecting Cell-Type Composition and Activity-Dependent Transcriptional State in Mammalian Brains by Massively Parallel Single-Nucleus RNA-Seq.

    PubMed

    Hu, Peng; Fabyanic, Emily; Kwon, Deborah Y; Tang, Sheng; Zhou, Zhaolan; Wu, Hao

    2017-12-07

    Massively parallel single-cell RNA sequencing can precisely resolve cellular diversity in a high-throughput manner at low cost, but unbiased isolation of intact single cells from complex tissues such as adult mammalian brains is challenging. Here, we integrate sucrose-gradient-assisted purification of nuclei with droplet microfluidics to develop a highly scalable single-nucleus RNA-seq approach (sNucDrop-seq), which is free of enzymatic dissociation and nucleus sorting. By profiling ∼18,000 nuclei isolated from cortical tissues of adult mice, we demonstrate that sNucDrop-seq not only accurately reveals neuronal and non-neuronal subtype composition with high sensitivity but also enables in-depth analysis of transient transcriptional states driven by neuronal activity, at single-cell resolution, in vivo.

  9. Dynamic Environmental Photosynthetic Imaging Reveals Emergent Phenotypes

    DOE PAGES

    Cruz, Jeffrey A.; Savage, Linda J.; Zegarac, Robert; ...

    2016-06-22

    Understanding and improving the productivity and robustness of plant photosynthesis requires high-throughput phenotyping under environmental conditions that are relevant to the field. Here we demonstrate the dynamic environmental photosynthesis imager (DEPI), an experimental platform for integrated, continuous, and high-throughput measurements of photosynthetic parameters during plant growth under reproducible yet dynamic environmental conditions. Using parallel imagers obviates the need to move plants or sensors, reducing artifacts and allowing simultaneous measurement on large numbers of plants. As a result, DEPI can reveal phenotypes that are not evident under standard laboratory conditions but emerge under progressively more dynamic illumination. We show examples in Arabidopsis mutants of such "emergent phenotypes" that are highly transient and heterogeneous, appearing in different leaves under different conditions and depending in complex ways on both environmental conditions and plant developmental age. Finally, these emergent phenotypes appear to be caused by a range of phenomena, suggesting that such previously unseen processes are critical for plant responses to dynamic environments.

  10. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics.

    PubMed

    Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-09-01

    Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error by more than threefold at constant recall, compared to a state-of-the-art SWATH-MS analysis without alignment, while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.

  11. Information management systems for pharmacogenomics.

    PubMed

    Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko

    2002-09-01

    The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make that data accessible to the laboratory scientist or to the clinician. In this review these challenges and current information technology solutions associated with the management, storage and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.

  12. Database-Centric Method for Automated High-Throughput Deconvolution and Analysis of Kinetic Antibody Screening Data.

    PubMed

    Nobrega, R Paul; Brown, Michael; Williams, Cody; Sumner, Chris; Estep, Patricia; Caffry, Isabelle; Yu, Yao; Lynaugh, Heather; Burnina, Irina; Lilov, Asparouh; Desroches, Jordan; Bukowski, John; Sun, Tingwan; Belk, Jonathan P; Johnson, Kirt; Xu, Yingda

    2017-10-01

    The state-of-the-art industrial drug discovery approach is the empirical interrogation of a library of drug candidates against a target molecule. The advantage of high-throughput kinetic measurements over equilibrium assessments is the ability to measure each of the kinetic components of binding affinity. Although high-throughput capabilities have improved with advances in instrument hardware, three bottlenecks in data processing remain: (1) intrinsic molecular properties that lead to poor biophysical quality in vitro are not accounted for in commercially available analysis models, (2) processing data through a user interface is time-consuming and not amenable to parallelized data collection, and (3) a commercial solution that includes historical kinetic data in the analysis of kinetic competition data does not exist. Herein, we describe a generally applicable method for the automated analysis, storage, and retrieval of kinetic binding data. This analysis can deconvolve poor quality data on-the-fly and store and organize historical data in a queryable format for use in future analyses. Such database-centric strategies afford greater insight into the molecular mechanisms of kinetic competition, allowing for the rapid identification of allosteric effectors and the presentation of kinetic competition data in absolute terms of percent bound to antigen on the biosensor.
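
    The database-centric pattern described here (persist every kinetic fit, then query historical results during later analyses) can be illustrated with a minimal relational sketch. The schema and column names below are invented for illustration; the paper does not publish its schema.

        # Hypothetical store for fitted kinetic constants, queryable later.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE kinetics (
            clone_id TEXT, antigen TEXT,
            kon REAL, koff REAL, kd REAL, quality_flag TEXT)""")
        rows = [("Ab-001", "agX", 1.2e5, 3.4e-4, 3.4e-4 / 1.2e5, "ok"),
                ("Ab-002", "agX", 8.0e4, 1.0e-3, 1.0e-3 / 8.0e4, "poor_fit")]
        conn.executemany("INSERT INTO kinetics VALUES (?,?,?,?,?,?)", rows)
        # Pull only well-behaved binders below a KD threshold for re-analysis.
        for row in conn.execute(
                "SELECT clone_id, kd FROM kinetics "
                "WHERE quality_flag = 'ok' AND kd < 1e-8 ORDER BY kd"):
            print(row)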

  13. ChemHTPS - A virtual high-throughput screening program suite for the chemical and materials sciences

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Evangelista, William; Hachmann, Johannes

    The discovery of new compounds, materials, and chemical reactions with exceptional properties is key to the grand challenges in innovation, energy and sustainability. This process can be dramatically accelerated by means of the virtual high-throughput screening (HTPS) of large-scale candidate libraries. The resulting data can further be used to study the underlying structure-property relationships and thus facilitate rational design capability. This approach has been used extensively for many years in the drug discovery community. However, the lack of openly available virtual HTPS tools is limiting the use of these techniques in various other applications such as photovoltaics, optoelectronics, and catalysis. Thus, we developed ChemHTPS, a general-purpose, comprehensive and user-friendly suite that allows users to efficiently perform large in silico modeling studies and high-throughput analyses in these applications. ChemHTPS also includes a massively parallel molecular library generator which offers a multitude of options to customize and restrict the scope of the enumerated chemical space and thus tailor it for the demands of specific applications. To streamline the non-combinatorial exploration of chemical space, we incorporate genetic algorithms into the framework. In addition to implementing smarter algorithms, we also focus on ease of use, workflow, and code integration to make this technology more accessible to the community.
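
    The genetic-algorithm loop used to steer such non-combinatorial exploration follows a standard select/crossover/mutate pattern, sketched generically below. Candidates are bit strings standing in for building-block choices, and the fitness function is a placeholder for a computed property score; none of this is ChemHTPS code.

        # Generic GA skeleton: truncation selection, one-point crossover,
        # per-bit mutation. Fitness is a stand-in for a property prediction.
        import random

        def fitness(candidate):
            return sum(candidate)  # placeholder property score

        def evolve(pop_size=20, length=16, generations=30, mut_rate=0.05):
            pop = [[random.randint(0, 1) for _ in range(length)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]
                children = list(parents)
                while len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, length)
                    child = a[:cut] + b[cut:]
                    # Flip each bit with probability mut_rate.
                    child = [bit ^ (random.random() < mut_rate) for bit in child]
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        print(evolve())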

  14. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA. The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
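
    The core idea of predictive lossless compression is simple to demonstrate: predict each sample from already-coded neighbors and store only the residuals, which cluster near zero and compress well. The numpy sketch below uses a fixed previous-band predictor purely for illustration; the actual FL/CCSDS-123 predictor is adaptive and considerably more sophisticated.

        # Previous-band prediction residuals for a hyperspectral cube, with an
        # exact (lossless) reconstruction by cumulative summation.
        import numpy as np

        def band_residuals(cube):
            """cube: int array (bands, rows, cols) -> same-shape residuals."""
            residuals = np.empty_like(cube)
            residuals[0] = cube[0]                 # first band stored verbatim
            residuals[1:] = cube[1:] - cube[:-1]   # inter-band residuals
            return residuals

        def reconstruct(residuals):
            return np.cumsum(residuals, axis=0)    # exact inverse

        rng = np.random.default_rng(0)
        cube = rng.integers(0, 4096, size=(5, 4, 4))
        assert np.array_equal(reconstruct(band_residuals(cube)), cube)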

  15. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
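
    The parallelization target in k-means is the assignment step, which dominates the runtime and is embarrassingly parallel across data rows. The sketch below splits that step across worker processes with Python's multiprocessing; it mirrors the paper's design in spirit only (their implementation is Java with transactional-memory-inspired synchronization).

        # One k-means iteration with a process-parallel assignment step.
        import numpy as np
        from multiprocessing import Pool

        def assign_chunk(args):
            chunk, centers = args
            d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)

        def kmeans_step(data, centers, workers=4):
            chunks = np.array_split(data, workers)
            with Pool(workers) as pool:
                labels = np.concatenate(
                    pool.map(assign_chunk, [(c, centers) for c in chunks]))
            new_centers = np.array([data[labels == k].mean(axis=0)
                                    for k in range(len(centers))])
            return new_centers, labels

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            data = rng.normal(size=(10000, 8))
            centers, labels = kmeans_step(data, data[:3].copy())
            print(centers.shape, np.bincount(labels))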

  16. Developing science gateways for drug discovery in a grid environment.

    PubMed

    Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra

    2016-01-01

    Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, it becomes increasingly difficult to provide the life sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.

  17. Ultra-short pulse laser micro patterning with highest throughput by utilization of a novel multi-beam processing head

    NASA Astrophysics Data System (ADS)

    Homburg, Oliver; Jarczynski, Manfred; Mitra, Thomas; Brüning, Stephan

    2017-02-01

    In the last decade much improvement has been achieved for ultra-short pulse lasers with high repetition rates. This laser technology has matured to the point that it has recently entered a manifold of industrial applications, after mainly scientific use in the past. Compared to ns-pulse ablation, ultra-short pulses in the ps or even fs regime lead to still colder ablation and further reduced heat-affected zones. This is crucial for micro patterning as structure sizes get smaller and requirements get more stringent at the same time. An additional advantage of ultra-fast processing is its applicability to a large variety of materials, e.g. metals and several high-bandgap materials like glass and ceramics. One challenge for ultra-fast micro machining is throughput. The operational capacity of these processes can be maximized by increasing the scan rate or the number of beams: parallel processing. This contribution focuses on process parallelism of ultra-short pulsed lasers with high repetition rate and individually addressable acousto-optical beam modulation. The core of the multi-beam generation is a smooth diffractive beam splitter component with highly uniform spots and negligible loss, and a prismatic array compressor to match beam size and pitch. The optical design and the practical realization of an 8-beam processing head in combination with a high average power single-mode ultra-short pulsed laser source are presented, as well as the currently ongoing and promising laboratory research and micro machining results. Finally, an outlook on scaling the processing head to several tens of beams is given.

  18. Massively Parallel Rogue Cell Detection using Serial Time-Encoded Amplified Microscopy of Inertially Ordered Cells in High Throughput Flow

    DTIC Science & Technology

    2013-06-01

    couples the high-speed capability of the STEAM imager and differential phase... air bubbles in the TPE mix. Moreover, TPE chips were also successfully sealed to other substrates... dynamics, and microelectromechanical systems (MEMS) via laser-scanning surface vibrometry, and observation

  19. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inversion by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles according to the computational resources available: one computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers for the submission of serial or parallel jobs, using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for all 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor thus reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
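
    The single-machine, multi-core profile amounts to farming independent forward-model evaluations out to a process pool, as in the hedged sketch below; forward_model is a stand-in for a HYDRUS-style simulation call, and the parameter tuples are invented.

        # Parallel forward-model evaluations on one multi-core machine.
        from multiprocessing import Pool

        def forward_model(params):
            # Placeholder: a real run would invoke the simulator with these
            # material parameters and return the simulated observable.
            ksat, theta_r = params
            return ksat * (1.0 - theta_r)

        if __name__ == "__main__":
            samples = [(0.1 * i, 0.001 * i) for i in range(1, 101)]
            with Pool(8) as pool:          # eight cores, as in the case study
                results = pool.map(forward_model, samples, chunksize=10)
            print(len(results), max(results))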

  20. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    PubMed

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze

    1998-01-01

    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection-molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a large number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent are stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high-throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure and were utilized directly for high-throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery.

  1. Study on the SPR responses of various DNA probe concentrations by parallel scan spectral SPR imaging

    NASA Astrophysics Data System (ADS)

    Ma, Suihua; Liu, Le; Lu, Weiping; Zhang, Yaou; He, Yonghong; Guo, Jihua

    2008-12-01

    SPR sensors have become a highly sensitive and label-free method for characterizing and quantifying chemical and biochemical interactions. However, the relations between the SPR refractive index response and the properties (such as concentration) of biochemical probes are still poorly characterized. In this paper, an experimental study on the SPR responses of various concentrations of Legionella pneumophila mip DNA probes is presented. We developed a novel two-dimensional SPR sensing technique, parallel scan spectral SPR imaging, to detect an array of mip gene probes. This technique offers quantitative refractive index information with a high sensing throughput. By detecting mip DNA probes at different concentrations, we obtained the relations between the SPR refractive index response and the concentrations of the mip DNA probes. These results are valuable for designing and developing SPR-based mip gene biochips.

  2. A Concept for Airborne Precision Spacing for Dependent Parallel Approaches

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Baxley, Brian T.; Abbott, Terence S.; Capron, William R.; Smith, Colin L.; Shay, Richard F.; Hubbs, Clay

    2012-01-01

    The Airborne Precision Spacing concept of operations has been previously developed to support the precise delivery of aircraft landing successively on the same runway. The high-precision and consistent delivery of inter-aircraft spacing allows for increased runway throughput and the use of energy-efficient arrivals routes such as Continuous Descent Arrivals and Optimized Profile Descents. This paper describes an extension to the Airborne Precision Spacing concept to enable dependent parallel approach operations where the spacing aircraft must manage their in-trail spacing from a leading aircraft on approach to the same runway and spacing from an aircraft on approach to a parallel runway. Functionality for supporting automation is discussed as well as procedures for pilots and controllers. An analysis is performed to identify the required information and a new ADS-B report is proposed to support these information needs. Finally, several scenarios are described in detail.

  3. Prediction of the Passive Intestinal Absorption of Medicinal Plant Extract Constituents with the Parallel Artificial Membrane Permeability Assay (PAMPA).

    PubMed

    Petit, Charlotte; Bujard, Alban; Skalicka-Woźniak, Krystyna; Cretton, Sylvian; Houriet, Joëlle; Christen, Philippe; Carrupt, Pierre-Alain; Wolfender, Jean-Luc

    2016-03-01

    At the early drug discovery stage, the high-throughput parallel artificial membrane permeability assay is one of the most frequently used in vitro models to predict transcellular passive absorption. While thousands of new chemical entities have been screened with the parallel artificial membrane permeability assay, the permeation properties of natural products have in general been scarcely evaluated. In this study, the parallel artificial membrane permeability assay through a hexadecane membrane was used to predict the passive intestinal absorption of a representative set of frequently occurring natural products. Since natural products are usually ingested for medicinal use as components of complex extracts in traditional herbal preparations or as phytopharmaceuticals, the applicability of such an assay to study the constituents directly in medicinal crude plant extracts was further investigated. Three representative crude plant extracts with different natural product compositions were chosen for this study. The first extract was composed of furanocoumarins (Angelica archangelica), the second extract included alkaloids (Waltheria indica), and the third extract contained flavonoid glycosides (Pueraria montana var. lobata). For each medicinal plant, the effective passive permeability values Pe (cm/s) of the main natural products of interest were rapidly calculated thanks to a generic ultrahigh-pressure liquid chromatography-UV detection method and because Pe calculations do not require knowing precisely the concentration of each natural product within the extracts. The original parallel artificial membrane permeability assay through a hexadecane membrane was found to keep its predictive power when applied to constituents directly in crude plant extracts, provided that higher quantities of the extract were initially loaded in the assay in order to ensure suitable detection of the individual constituents of the extracts. Such an approach is thus valuable for the high-throughput, cost-effective, and early evaluation of passive intestinal absorption of active principles in medicinal plants. In phytochemical studies, obtaining effective passive permeability values of pharmacologically active natural products is important to predict whether natural products showing interesting activities in vitro may have a chance to reach their target in vivo.
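
    A commonly used form of the PAMPA effective-permeability equation (for a two-compartment sandwich assay with donor volume V_D, acceptor volume V_A, membrane area A and incubation time t) makes clear why absolute concentrations are not needed: only the ratio of the measured acceptor concentration C_A(t) to the theoretical equilibrium concentration C_eq enters the logarithm, and that ratio can be read off relative UV responses. This is a standard textbook form, not necessarily the exact variant used by the authors:

        P_e = \frac{V_D V_A}{(V_D + V_A)\,A\,t}\,\left[-\ln\!\left(1 - \frac{C_A(t)}{C_{\mathrm{eq}}}\right)\right],
        \qquad
        C_{\mathrm{eq}} = \frac{C_D(0)\,V_D}{V_D + V_A}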

  4. Noncoherent parallel optical processor for discrete two-dimensional linear transformations.

    PubMed

    Glaser, I

    1980-10-01

    We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
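
    A digital analogue of the transformation demonstrated optically (the two-dimensional Walsh-Hadamard transform) can be written in a few lines; this numpy sketch is only a reference computation, not a model of the noncoherent optical hardware.

        # 2-D Walsh-Hadamard transform: y = H X H^T, Sylvester-constructed H.
        import numpy as np

        def hadamard(n):                     # n must be a power of two
            H = np.array([[1]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        def wht2(img):
            H = hadamard(img.shape[0])
            return H @ img @ H.T

        img = np.arange(16.0).reshape(4, 4)
        coeffs = wht2(img)
        # H is symmetric with H H^T = n I, so the transform self-inverts
        # up to a factor n^2 (here 16).
        assert np.allclose(wht2(coeffs) / 16.0, img)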

  5. Parallel fabrication of macroporous scaffolds.

    PubMed

    Dobos, Andrew; Grandhi, Taraka Sai Pavan; Godeshala, Sudhakar; Meldrum, Deirdre R; Rege, Kaushal

    2018-07-01

    Scaffolds generated from naturally occurring and synthetic polymers have been investigated in several applications because of their biocompatibility and tunable chemo-mechanical properties. Existing methods for generation of 3D polymeric scaffolds typically cannot be parallelized, suffer from low throughputs, and do not allow for quick and easy removal of the fragile structures that are formed. Current molds used in hydrogel and scaffold fabrication using solvent casting and porogen leaching are often single-use and do not facilitate 3D scaffold formation in parallel. Here, we describe a simple device and related approaches for the parallel fabrication of macroporous scaffolds. This approach was employed for the generation of macroporous and non-macroporous materials in parallel, in higher throughput and allowed for easy retrieval of these 3D scaffolds once formed. In addition, macroporous scaffolds with interconnected as well as non-interconnected pores were generated, and the versatility of this approach was employed for the generation of 3D scaffolds from diverse materials including an aminoglycoside-derived cationic hydrogel ("Amikagel"), poly(lactic-co-glycolic acid) or PLGA, and collagen. Macroporous scaffolds generated using the device were investigated for plasmid DNA binding and cell loading, indicating the use of this approach for developing materials for different applications in biotechnology. Our results demonstrate that the device-based approach is a simple technology for generating scaffolds in parallel, which can enhance the toolbox of current fabrication techniques.

  6. MrGrid: A Portable Grid Based Molecular Replacement Pipeline

    PubMed Central

    Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.

    2010-01-01

    Background The crystallographic determination of protein structures can be computationally demanding and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings MrGrid is a portable web based application written in Java/JSP and Ruby, and taking advantage of Apple Xgrid technology. Designed to interface with a user defined Xgrid resource the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed up factor of 5.69. Conclusions MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612

  7. Current status and future prospects for enabling chemistry technology in the drug discovery process.

    PubMed

    Djuric, Stevan W; Hutchins, Charles W; Talaty, Nari N

    2016-01-01

    This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of "dangerous" reagents. Also featured are advances in the "computer-assisted drug design" area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities.

  8. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments

    PubMed Central

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    Motivation: With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. Results: We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. Availability and implementation: The MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27605100

  9. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments.

    PubMed

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. The MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online.
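
    Programmatic access through a REST API typically reduces to parameterized GET requests. The fragment below shows the pattern only: the endpoint path and parameter names are hypothetical placeholders, not documented MPRAnator routes, so consult the project pages above for the actual interface.

        # Hypothetical REST call illustrating URL-based programmatic access.
        import requests

        BASE = "http://www.genomegeek.com/api"   # placeholder base path

        params = {"sequence": "ACGTACGTACGT", "motif": "GATA", "n_controls": 5}
        resp = requests.get(f"{BASE}/mpra/design", params=params, timeout=30)
        resp.raise_for_status()
        print(resp.json())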

  10. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    PubMed

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high-volume central repository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such it enables plant scientists to spend more time on science rather than on technology. All stored and computed data are easily accessible to the public and the broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.

  11. Holographic Associative Memory Employing Phase Conjugation

    NASA Astrophysics Data System (ADS)

    Soffer, B. H.; Marom, E.; Owechko, Y.; Dunning, G.

    1986-12-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular those based on holographic principles,8 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  12. Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs

    NASA Astrophysics Data System (ADS)

    Edgar, R. G.; Clark, M. A.; Dale, K.; Mitchell, D. A.; Ord, S. M.; Wayth, R. B.; Pfister, H.; Greenhill, L. J.

    2010-10-01

    The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real-time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real-time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exa-scale facilities.
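
    The stated figures pin down the per-batch budget directly (simple arithmetic from the numbers above):

        5\,\mathrm{GiB\,s^{-1}} \times 8\,\mathrm{s} = 40\,\mathrm{GiB\ per\ batch},
        \qquad
        2.5\,\mathrm{TFLOP\,s^{-1}} \times 8\,\mathrm{s} = 20\,\mathrm{TFLOP\ per\ batch},

    all of which must complete before the next cadence arrives.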

  13. ElectroTaxis-on-a-Chip (ETC): an integrated quantitative high-throughput screening platform for electrical field-directed cell migration.

    PubMed

    Zhao, Siwei; Zhu, Kan; Zhang, Yan; Zhu, Zijie; Xu, Zhengping; Zhao, Min; Pan, Tingrui

    2014-11-21

    Both endogenous and externally applied electrical stimulation can affect a wide range of cellular functions, including growth, migration, differentiation and division. Among these effects, electrical field (EF)-directed cell migration, also known as electrotaxis, has received broad attention because it holds great potential for facilitating clinical wound healing. Electrotaxis experiments are conventionally conducted in centimetre-sized flow chambers built in Petri dishes. Despite recent efforts to adapt microfluidics for electrotaxis studies, the current electrotaxis experimental setup remains cumbersome due to the need for an external power supply and EF controlling/monitoring systems. There is also a lack of parallel experimental systems for high-throughput electrotaxis studies. In this paper, we present the first independently operable microfluidic platform for high-throughput electrotaxis studies, integrating all functional components for cell migration under EF stimulation (except microscopy) on a compact footprint (the same as a credit card), referred to as ElectroTaxis-on-a-Chip (ETC). Inspired by the R-2R resistor ladder topology in digital signal processing, we develop a systematic approach to design an infinitely expandable microfluidic generator of EF gradients for high-throughput and quantitative studies of EF-directed cell migration. Furthermore, a vacuum-assisted assembly method is utilized to allow direct and reversible attachment of our device to existing cell culture media on biological surfaces, which separates the cell culture and device preparation/fabrication steps. We have demonstrated that our ETC platform is capable of screening human cornea epithelial cell migration under the stimulation of an EF gradient spanning three orders of magnitude. The screening results lead to the identification of the EF-sensitive range of that cell type, which can provide valuable guidance for the clinical application of EF-facilitated wound healing.
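
    For reference, in the electrical R-2R ladder that inspired the design, each ladder stage divides the signal by two, so an N-bit ladder driven by a reference voltage V_ref produces (a standard textbook relation, not the paper's microfluidic derivation):

        V_{\mathrm{out}} = V_{\mathrm{ref}} \cdot \frac{D}{2^{N}}, \qquad D \in \{0, 1, \dots, 2^{N} - 1\}

    The same successive-halving principle lets a compact network span several orders of magnitude, which is what the microfluidic analogue exploits to generate a wide range of EF gradients.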

  14. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphics Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
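
    The record includes no code, but the classification step it describes can be illustrated with an ordinary CPU random forest. The sketch below uses scikit-learn on synthetic data purely as an analogue; it is not the GPURFSCREEN implementation, and the fingerprint matrix `X` and labels `y` are hypothetical:

```python
# Illustrative random-forest virtual screening on synthetic data
# (CPU analogue only; GPURFSCREEN's GPU code and data formats differ).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10_000, 256))   # hypothetical binary fingerprints
y = rng.integers(0, 2, size=10_000)          # hypothetical active/inactive labels

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)  # trees fit in parallel
clf.fit(X[:8_000], y[:8_000])

# Screening = scoring unseen ligands and keeping the top-ranked candidates.
scores = clf.predict_proba(X[8_000:])[:, 1]
top_hits = np.argsort(scores)[::-1][:100]
```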

  15. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    PubMed

    Procacci, Piero

    2016-06-27

We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with hybrid OpenMP/MPI (open multiprocessing/message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology, aimed at evaluating binding free energies in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm position the code as a potentially effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac .

  16. Evaluating System Parameters on a Dragonfly using Simulation and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Jain, Nikhil; Livnat, Yarden

The dragonfly topology is becoming a popular choice for building high-radix, low-diameter networks with high-bandwidth links. Even with a powerful network, preliminary experiments on Edison at NERSC have shown that for communication-heavy applications, job interference, and thus presumably job placement, remains an important factor. In this paper, we explore the effects of job placement, job sizes, parallel workloads and network configurations on network throughput to better understand inter-job interference. We use a simulation tool called Damselfly to model the network behavior of Edison and study the impact of various system parameters on network throughput. Parallel workloads based on five representative communication patterns are used, and the simulation studies on up to 131,072 cores are aided by a new visualization of the dragonfly network.

  17. Notes on implementation of sparsely distributed memory

    NASA Technical Reports Server (NTRS)

    Keeler, J. D.; Denning, P. J.

    1986-01-01

    The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.
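
    The abstract does not spell out the read/write mechanics, so the following is a minimal software sketch of the standard Kanerva SDM operations, under our own parameter choices; the paper's resistive decoder and systolic array are hardware realisations of these same steps and are not reproduced here:

```python
# Minimal software sketch of Sparse Distributed Memory read/write.
import numpy as np

rng = np.random.default_rng(1)
N, M, RADIUS = 256, 1000, 111            # word length, hard locations, Hamming radius

addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)   # one counter per bit per location

def active(addr):
    # Address decoding: a location fires if within RADIUS Hamming distance.
    return (addresses != addr).sum(axis=1) <= RADIUS

def write(addr, data):
    sel = active(addr)
    counters[sel] += np.where(data == 1, 1, -1)   # +1 for 1-bits, -1 for 0-bits

def read(addr):
    sums = counters[active(addr)].sum(axis=0)
    return (sums > 0).astype(int)                 # per-bit majority vote

word = rng.integers(0, 2, N)
write(word, word)                  # autoassociative store
print((read(word) != word).sum())  # expected: 0 bit errors
```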

  18. Multichannel microscale system for high throughput preparative separation with comprehensive collection and analysis

    DOEpatents

    Karger, Barry L.; Kotler, Lev; Foret, Frantisek; Minarik, Marek; Kleparnik, Karel

    2003-12-09

    A modular multiple lane or capillary electrophoresis (chromatography) system that permits automated parallel separation and comprehensive collection of all fractions from samples in all lanes or columns, with the option of further on-line automated sample fraction analysis, is disclosed. Preferably, fractions are collected in a multi-well fraction collection unit, or plate (40). The multi-well collection plate (40) is preferably made of a solvent permeable gel, most preferably a hydrophilic, polymeric gel such as agarose or cross-linked polyacrylamide.

  19. Current status and future prospects for enabling chemistry technology in the drug discovery process

    PubMed Central

    Djuric, Stevan W.; Hutchins, Charles W.; Talaty, Nari N.

    2016-01-01

    This review covers recent advances in the implementation of enabling chemistry technologies into the drug discovery process. Areas covered include parallel synthesis chemistry, high-throughput experimentation, automated synthesis and purification methods, flow chemistry methodology including photochemistry, electrochemistry, and the handling of “dangerous” reagents. Also featured are advances in the “computer-assisted drug design” area and the expanding application of novel mass spectrometry-based techniques to a wide range of drug discovery activities. PMID:27781094

  20. Seq-Well: portable, low-cost RNA sequencing of single cells at high throughput.

    PubMed

    Gierahn, Todd M; Wadsworth, Marc H; Hughes, Travis K; Bryson, Bryan D; Butler, Andrew; Satija, Rahul; Fortune, Sarah; Love, J Christopher; Shalek, Alex K

    2017-04-01

    Single-cell RNA-seq can precisely resolve cellular states, but applying this method to low-input samples is challenging. Here, we present Seq-Well, a portable, low-cost platform for massively parallel single-cell RNA-seq. Barcoded mRNA capture beads and single cells are sealed in an array of subnanoliter wells using a semipermeable membrane, enabling efficient cell lysis and transcript capture. We use Seq-Well to profile thousands of primary human macrophages exposed to Mycobacterium tuberculosis.

  1. Xi-cam: Flexible High Throughput Data Processing for GISAXS

    NASA Astrophysics Data System (ADS)

    Pandolfi, Ronald; Kumar, Dinesh; Venkatakrishnan, Singanallur; Sarje, Abinav; Krishnan, Hari; Pellouchoud, Lenson; Ren, Fang; Fournier, Amanda; Jiang, Zhang; Tassone, Christopher; Mehta, Apurva; Sethian, James; Hexemer, Alexander

With increasing capabilities and data demand for GISAXS beamlines, supporting software is under development to handle larger data rates, volumes, and processing needs. We aim to provide a flexible and extensible approach to GISAXS data treatment as a solution to these rising needs. Xi-cam is the CAMERA platform for data management, analysis, and visualization. The core of Xi-cam is an extensible plugin-based GUI platform which provides users with an interactive interface to processing algorithms. Plugins are available for SAXS/GISAXS data and data series visualization, as well as forward modeling and simulation through HipGISAXS. With Xi-cam's advanced mode, data processing steps are designed as a graph-based workflow, which can be executed locally or remotely. Remote execution utilizes HPC or de-localized resources, allowing for effective reduction of high-throughput data. Xi-cam is open-source and cross-platform. The processing algorithms in Xi-cam include parallel CPU and GPU processing optimizations, also taking advantage of external processing packages such as pyFAI. Xi-cam is available for download online.

  2. DASH-2: Flexible, Low-Cost, and High-Throughput SNP Genotyping by Dynamic Allele-Specific Hybridization on Membrane Arrays

    PubMed Central

    Jobs, Magnus; Howell, W. Mathias; Strömqvist, Linda; Mayr, Torsten; Brookes, Anthony J.

    2003-01-01

    Genotyping technologies need to be continually improved in terms of their flexibility, cost-efficiency, and throughput, to push forward genome variation analysis. To this end, we have leveraged the inherent simplicity of dynamic allele-specific hybridization (DASH) and coupled it to recent innovations of centrifugal arrays and iFRET. We have thereby created a new genotyping platform we term DASH-2, which we demonstrate and evaluate in this report. The system is highly flexible in many ways (any plate format, PCR multiplexing, serial and parallel array processing, spectral-multiplexing of hybridization probes), thus supporting a wide range of application scales and objectives. Precision is demonstrated to be in the range 99.8–100%, and assay costs are 0.05 USD or less per genotype assignment. DASH-2 thus provides a powerful new alternative for genotyping practice, which can be used without the need for expensive robotics support. PMID:12727908

  3. Building biochips: a protein production pipeline

    NASA Astrophysics Data System (ADS)

    de Carvalho-Kavanagh, Marianne G. S.; Albala, Joanna S.

    2004-06-01

Protein arrays are emerging as a practical format in which to study proteins in high throughput, using many of the same techniques as those of the DNA microarray. The key advantage of array-based methods for protein study is the potential for parallel analysis of thousands of samples in an automated, high-throughput fashion. Building protein arrays capable of this analysis capacity requires a robust expression and purification system capable of generating hundreds to thousands of purified recombinant proteins. We have developed a method to utilize LLNL-I.M.A.G.E. cDNAs to generate recombinant protein libraries using a baculovirus-insect cell expression system. We have used this strategy to produce proteins for analysis of protein/DNA and protein/protein interactions using protein microarrays in order to understand the complex interactions of proteins involved in homologous recombination and DNA repair. Using protein array techniques, a novel interaction between the DNA repair protein Rad51B and histones has been identified.

  4. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver

We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
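
    As a simplified picture of the energy-grid step, here is a CPU/NumPy sketch of a Lennard-Jones grid over a periodic cubic cell (our illustration under simplifying assumptions: a single probe, LJ term only, cubic box; the paper's GPU code also handles Coulomb terms, flood-fill blocking, and Widom insertions):

```python
# Simplified Lennard-Jones energy grid for a probe molecule in a cubic cell.
import numpy as np

def lj_energy_grid(atoms, eps, sigma, box, spacing=0.5):
    """Probe-framework LJ energy at every grid point (periodic cubic box)."""
    n = int(box / spacing)
    axis = np.linspace(0.0, box, n, endpoint=False)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1)            # (n, n, n, 3) points

    energy = np.zeros((n, n, n))
    for atom, e, s in zip(atoms, eps, sigma):
        d = grid - atom
        d -= box * np.round(d / box)                  # minimum-image convention
        r2 = np.maximum((d * d).sum(axis=-1), 1e-6)   # guard against r = 0
        sr6 = (s * s / r2) ** 3                       # (sigma/r)^6
        energy += 4.0 * e * (sr6 * sr6 - sr6)         # 4e[(s/r)^12 - (s/r)^6]
    return energy

# Tiny example: three framework atoms in a 10 A box.
E = lj_energy_grid(np.array([[1., 1., 1.], [5., 5., 5.], [8., 2., 6.]]),
                   eps=[0.1, 0.1, 0.1], sigma=[3.0, 3.0, 3.0], box=10.0)
```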

  5. Advances in the Study of Heart Development and Disease Using Zebrafish

    PubMed Central

    Brown, Daniel R.; Samsa, Leigh Ann; Qian, Li; Liu, Jiandong

    2016-01-01

Animal models of cardiovascular disease are key players in the translational medicine pipeline used to define the conserved genetic and molecular basis of disease. Congenital heart diseases (CHDs) are the most common type of human birth defect and feature structural abnormalities that arise during cardiac development and maturation. The zebrafish, Danio rerio, is a valuable vertebrate model organism, offering advantages over traditional mammalian models. These advantages include the rapid, stereotyped and external development of transparent embryos produced in large numbers from inexpensively housed adults, vast capacity for genetic manipulation, and amenability to high-throughput screening. With the help of modern genetics and a sequenced genome, zebrafish have led to insights into cardiovascular diseases ranging from CHDs to arrhythmia and cardiomyopathy. Here, we discuss the utility of zebrafish as a model system and summarize zebrafish cardiac morphogenesis with emphasis on parallels to human heart diseases. Additionally, we discuss the specific tools and experimental platforms utilized in the zebrafish model, including forward screens, functional characterization of candidate genes, and high-throughput applications. PMID:27335817

  6. High-throughput detection of ethanol-producing cyanobacteria in a microdroplet platform

    PubMed Central

    Abalde-Cela, Sara; Gould, Anna; Liu, Xin; Kazamia, Elena; Smith, Alison G.; Abell, Chris

    2015-01-01

Ethanol production by microorganisms is an important renewable energy source. Most processes involve fermentation of sugars from plant feedstock, but there is increasing interest in direct ethanol production by photosynthetic organisms. To facilitate this, a high-throughput screening technique for the detection of ethanol is required. Here, a method for the quantitative detection of ethanol in a microdroplet-based platform is described that can be used for screening cyanobacterial strains to identify those with the highest ethanol productivity levels. The detection of ethanol by enzymatic assay was optimized both in bulk and in microdroplets. In parallel, the encapsulation of engineered ethanol-producing cyanobacteria in microdroplets and their growth dynamics in microdroplet reservoirs were demonstrated. The combination of modular microdroplet operations, including droplet generation for cyanobacteria encapsulation, droplet re-injection and pico-injection, and laser-induced fluorescence, was used to create this new platform to screen genetically engineered strains of cyanobacteria with different levels of ethanol production. PMID:25878135

  7. Combinatorial Strategies for the Development of Bulk Metallic Glasses

    NASA Astrophysics Data System (ADS)

    Ding, Shiyan

The systematic identification of multi-component alloys out of the vast composition space is still a daunting task, especially in the development of bulk metallic glasses that are typically based on three or more elements. In order to address this challenge, combinatorial approaches have been proposed. However, previous attempts have not successfully coupled the synthesis of combinatorial libraries with high-throughput characterization methods. The goal of my dissertation is to develop efficient high-throughput characterization methods, optimized to identify glass formers systematically. Here, two innovative approaches have been developed. One is to measure the nucleation temperature in parallel for up to 800 compositions. The composition with the lowest nucleation temperature shows reasonable agreement with the best-known glass-forming composition. In addition, the thermoplastic formability of a metallic glass forming system is determined through blow molding a compositional library. Our results reveal that the composition with the largest thermoplastic deformation correlates well with the best-known formability composition. I have demonstrated both methods as powerful tools to develop new bulk metallic glasses.

  8. HybPiper: Extracting coding sequence and introns for phylogenetics from high-throughput sequencing reads using target enrichment

    PubMed Central

    Johnson, Matthew G.; Gardner, Elliot M.; Liu, Yang; Medina, Rafael; Goffinet, Bernard; Shaw, A. Jonathan; Zerega, Nyree J. C.; Wickett, Norman J.

    2016-01-01

    Premise of the study: Using sequence data generated via target enrichment for phylogenetics requires reassembly of high-throughput sequence reads into loci, presenting a number of bioinformatics challenges. We developed HybPiper as a user-friendly platform for assembly of gene regions, extraction of exon and intron sequences, and identification of paralogous gene copies. We test HybPiper using baits designed to target 333 phylogenetic markers and 125 genes of functional significance in Artocarpus (Moraceae). Methods and Results: HybPiper implements parallel execution of sequence assembly in three phases: read mapping, contig assembly, and target sequence extraction. The pipeline was able to recover nearly complete gene sequences for all genes in 22 species of Artocarpus. HybPiper also recovered more than 500 bp of nontargeted intron sequence in over half of the phylogenetic markers and identified paralogous gene copies in Artocarpus. Conclusions: HybPiper was designed for Linux and Mac OS X and is freely available at https://github.com/mossmatters/HybPiper. PMID:27437175

  9. High-Throughput, Data-Rich Cellular RNA Device Engineering

    PubMed Central

    Townshend, Brent; Kennedy, Andrew B.; Xiang, Joy S.; Smolke, Christina D.

    2015-01-01

Methods for rapidly assessing sequence-structure-function landscapes and developing conditional gene-regulatory devices are critical to our ability to manipulate and interface with biology. We describe a framework for engineering RNA devices from preexisting aptamers that exhibit ligand-responsive ribozyme tertiary interactions. Our methodology utilizes cell sorting, high-throughput sequencing, and statistical data analyses to enable parallel measurements of the activities of hundreds of thousands of sequences from RNA device libraries in the absence and presence of ligands. Our tertiary interaction RNA devices exhibit improved performance in terms of gene silencing, activation ratio, and ligand sensitivity as compared to optimized RNA devices that rely on secondary structure changes. We apply our method to building biosensors for diverse ligands and determine consensus sequences that enable ligand-responsive tertiary interactions. These methods advance our ability to develop broadly applicable genetic tools and to elucidate the underlying sequence-structure-function relationships that empower rational design of complex biomolecules. PMID:26258292

  10. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    PubMed

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench-scale and clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model for enabling purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  11. High-throughput Identification of Bacteria Repellent Polymers for Medical Devices

    PubMed Central

    Wu, Mei; Hardman, Ailsa; Lilienkampf, Annamaria; Pernagallo, Salvatore; Blakely, Garry; Swann, David G.; Bradley, Mark; Gallagher, Maurice P.

    2016-01-01

    Medical devices are often associated with hospital-acquired infections, which place enormous strain on patients and the healthcare system as well as contributing to antimicrobial resistance. One possible avenue for the reduction of device-associated infections is the identification of bacteria-repellent polymer coatings for these devices, which would prevent bacterial binding at the initial attachment step. A method for the identification of such repellent polymers, based on the parallel screening of hundreds of polymers using a microarray, is described here. This high-throughput method resulted in the identification of a range of promising polymers that resisted binding of various clinically relevant bacterial species individually and also as multi-species communities. One polymer, PA13 (poly(methylmethacrylate-co-dimethylacrylamide)), demonstrated significant reduction in attachment of a number of hospital isolates when coated onto two commercially available central venous catheters. The method described could be applied to identify polymers for a wide range of applications in which modification of bacterial attachment is important. PMID:27842360

  12. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.
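
    A toy software sketch of the striping idea (our illustration only; the patented gateway implements this in hardware, and the SONET framing, error syndromes, and credit-based flow control are omitted):

```python
# Round-robin byte striping across parallel links, and its inverse.
def stripe(data: bytes, n_links: int):
    """Distribute the byte stream across n_links parallel lanes."""
    return [data[i::n_links] for i in range(n_links)]

def collect(lanes):
    """Re-interleave equal-length lanes back into the original stream."""
    out = bytearray()
    for byte_group in zip(*lanes):   # one byte from each lane per step
        out.extend(byte_group)
    return bytes(out)

lanes = stripe(b"HIPPI-over-SONET", 4)   # 16 bytes over 4 lanes
assert collect(lanes) == b"HIPPI-over-SONET"
```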

  13. Droplet Array-Based 3D Coculture System for High-Throughput Tumor Angiogenesis Assay.

    PubMed

    Du, Xiaohui; Li, Wanming; Du, Guansheng; Cho, Hansang; Yu, Min; Fang, Qun; Lee, Luke P; Fang, Jin

    2018-03-06

    Angiogenesis is critical for tumor progression and metastasis, and it progresses through orchestral multicellular interactions. Thus, there is urgent demand for high-throughput tumor angiogenesis assays for concurrent examination of multiple factors. For investigating tumor angiogenesis, we developed a microfluidic droplet array-based cell-coculture system comprising a two-layer polydimethylsiloxane chip featuring 6 × 9 paired-well arrays and an automated droplet-manipulation device. In each droplet-pair unit, tumor cells were cultured in 3D in one droplet by mixing cell suspensions with Matrigel, and in the other droplet, human umbilical vein endothelial cells (HUVECs) were cultured in 2D. Droplets were fused by a newly developed fusion method, and tumor angiogenesis was assayed by coculturing tumor cells and HUVECs in the fused droplet units. The 3D-cultured tumor cells formed aggregates harboring a hypoxic center-as observed in vivo-and secreted more vascular endothelial growth factor (VEGF) and more strongly induced HUVEC tubule formation than did 2D-cultured tumor cells. Our single array supported 54 assays in parallel. The angiogenic potentials of distinct tumor cells and their differential responses to antiangiogenesis agent, Fingolimod, could be investigated without mutual interference in a single array. Our droplet-based assay is convenient to evaluate multicellular interaction in high throughput in the context of tumor sprouting angiogenesis, and we envision that the assay can be extensively implementable for studying other cell-cell interactions.

  14. Quantitative assessment of RNA-protein interactions with high-throughput sequencing-RNA affinity profiling.

    PubMed

    Ozer, Abdullah; Tome, Jacob M; Friedman, Robin C; Gheba, Dan; Schroth, Gary P; Lis, John T

    2015-08-01

Because RNA-protein interactions have a central role in a wide array of biological processes, methods that enable a quantitative assessment of these interactions in a high-throughput manner are in great demand. Recently, we developed the high-throughput sequencing-RNA affinity profiling (HiTS-RAP) assay that couples sequencing on an Illumina GAIIx genome analyzer with the quantitative assessment of protein-RNA interactions. This assay is able to analyze interactions of one or possibly several proteins with millions of different RNAs in a single experiment. We have successfully used HiTS-RAP to analyze interactions of the EGFP and negative elongation factor subunit E (NELF-E) proteins with their corresponding canonical and mutant RNA aptamers. Here we provide a detailed protocol for HiTS-RAP that can be completed in about a month (8 d hands-on time). This includes the preparation and testing of recombinant proteins and DNA templates, clustering DNA templates on a flowcell, HiTS and protein binding with a GAIIx instrument, and finally data analysis. We also highlight aspects of HiTS-RAP that can be further improved and points of comparison between HiTS-RAP and two other recently developed methods, quantitative analysis of RNA on a massively parallel array (RNA-MaP) and RNA Bind-n-Seq (RBNS), for quantitative analysis of RNA-protein interactions.

  15. A high-throughput solid-phase extraction microchip combined with inductively coupled plasma-mass spectrometry for rapid determination of trace heavy metals in natural water.

    PubMed

    Shih, Tsung-Ting; Hsieh, Cheng-Chuan; Luo, Yu-Ting; Su, Yi-An; Chen, Ping-Hung; Chuang, Yu-Chen; Sun, Yuh-Chang

    2016-04-15

Herein, a hyphenated system combining a high-throughput solid-phase extraction (htSPE) microchip with inductively coupled plasma-mass spectrometry (ICP-MS) for rapid determination of trace heavy metals was developed. Rather than performing multiple analyses in parallel for the enhancement of analytical throughput, we improved the processing speed for individual samples by increasing the operation flow rate during SPE procedures. To this end, an innovative device combining a micromixer and a multi-channeled extraction unit was designed. Furthermore, a programmable valve manifold was used to interface the developed microchip and ICP-MS instrumentation in order to fully automate the system, leading to a dramatic reduction in operation time and human error. Under the optimized operation conditions for the established system, detection limits of 1.64-42.54 ng L⁻¹ for the analyte ions were achieved. Validation procedures demonstrated that the developed method could be satisfactorily applied to the determination of trace heavy metals in natural water. Each analysis could be readily accomplished within just 186 s using the established system. This represents, to the best of our knowledge, an unprecedented speed for the analysis of trace heavy metal ions. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Massively Parallel Rogue Cell Detection Using Serial Time-Encoded Amplified Microscopy of Inertially Ordered Cells in High-Throughput Flow

    DTIC Science & Technology

    2011-08-01

    screening of budding yeast and detection of rare breast cancer cells in blood, our method should also be amenable to other applications in which high...to UV light with a power of 8.0 mW/cm2 through the transparency mask for 90 seconds. The wafer was baked again at 95°C for 4 minutes then developed...separated from the replica and sonicated in isopropanol for 5 minutes, sonicated in deionized H2O for 5 minutes, and baked at 65°C for at least 30

  17. Six-flow operations for catalyst development in Fischer-Tropsch synthesis: Bridging the gap between high-throughput experimentation and extensive product evaluation

    NASA Astrophysics Data System (ADS)

    Sartipi, Sina; Jansma, Harrie; Bosma, Duco; Boshuizen, Bart; Makkee, Michiel; Gascon, Jorge; Kapteijn, Freek

    2013-12-01

Design and operation of a "six-flow fixed-bed microreactor" setup for Fischer-Tropsch synthesis (FTS) is described. The unit consists of feed and mixing, flow division, reaction, separation, and analysis sections. The reactor system is made of five heating blocks with individual temperature controllers, assuring an identical isothermal zone of at least 10 cm along six fixed-bed microreactor inserts (4 mm inner diameter). Such a lab-scale setup allows running six experiments in parallel, under equal feed composition, reaction temperature, and conditions of separation and analysis equipment. It permits separate collection of wax and liquid samples (from each flow line), allowing operation with high productivities of C5+ hydrocarbons. The latter is crucial for a complete understanding of FTS product compositions and represents an advantage over high-throughput setups with more than ten flows, where such instrumental considerations lead to elevated equipment volume, cost, and operation complexity. Identical performance of the six flows under similar reaction conditions was assured by testing the same catalyst batch loaded in all microreactors.

  18. SELMAP - SELEX affinity landscape MAPping of transcription factor binding sites using integrated microfluidics

    PubMed Central

    Chen, Dana; Orenstein, Yaron; Golodnitsky, Rada; Pellach, Michal; Avrahami, Dorit; Wachtel, Chaim; Ovadia-Shochat, Avital; Shir-Shapira, Hila; Kedmi, Adi; Juven-Gershon, Tamar; Shamir, Ron; Gerber, Doron

    2016-01-01

Transcription factors (TFs) alter gene expression in response to changes in the environment through sequence-specific interactions with the DNA. These interactions are best portrayed as a landscape of TF binding affinities. Current methods to study sequence-specific binding preferences suffer from limited dynamic range, sequence bias, lack of specificity and limited throughput. We have developed a microfluidic-based device for SELEX Affinity Landscape MAPping (SELMAP) of TF binding, which allows high-throughput measurement of 16 proteins in parallel. We used it to measure the relative affinities of Pho4, AtERF2 and Btd full-length proteins to millions of different DNA binding sites, and detected both high- and low-affinity interactions in equilibrium conditions, generating a comprehensive landscape of the relative TF affinities to all possible DNA 6-mers, and even DNA 10-mers with increased sequencing depth. Low quantities of both the TFs and DNA oligomers were sufficient for obtaining high-quality results, significantly reducing experimental costs. SELMAP allows in-depth screening of hundreds of TFs, and provides a means for better understanding of the regulatory processes that govern gene expression. PMID:27628341

  19. Performance-scalable volumetric data classification for online industrial inspection

    NASA Astrophysics Data System (ADS)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders of magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection, and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
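
    For concreteness, here is a compact sketch of a two-stage Hough-style ellipse detector in the spirit described above. It follows the well-known point-pair formulation and is entirely our simplification; the paper's exact algorithm, accumulator layout, and parallel mapping are not reproduced, and all names and parameters below are illustrative:

```python
# Two-stage Hough-style ellipse detection (simplified point-pair method).
import numpy as np
from itertools import combinations

def detect_ellipse(points, min_half_major=10.0):
    """points: (n, 2) array of edge pixels. Returns the best
    (centre, half_major, half_minor, angle) hypothesis, or None."""
    best_votes, best = 0, None
    for p1, p2 in combinations(points, 2):
        a = np.linalg.norm(p2 - p1) / 2.0           # stage 1: half-major axis
        if a < min_half_major:
            continue
        centre = (p1 + p2) / 2.0
        angle = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
        votes = {}                                  # stage 2: half-minor axis
        for p in points:
            d = np.linalg.norm(p - centre)
            if d < 1e-9 or d >= a:
                continue
            f2 = np.sum((p - p1) ** 2)
            cos2 = ((a * a + d * d - f2) / (2 * a * d)) ** 2
            denom = a * a - d * d * cos2
            if cos2 > 1.0 or denom <= 0:
                continue
            b = np.sqrt(a * a * d * d * (1 - cos2) / denom)
            key = int(round(b))                     # 1-pixel accumulator bins
            votes[key] = votes.get(key, 0) + 1
        if votes:
            b_bin, n = max(votes.items(), key=lambda kv: kv[1])
            if n > best_votes:
                best_votes, best = n, (centre, a, float(b_bin), angle)
    return best

# Example: points on an ellipse centred at (50, 40) with a = 20, b = 10.
t = np.linspace(0, 2 * np.pi, 40)
pts = np.stack([50 + 20 * np.cos(t), 40 + 10 * np.sin(t)], axis=1)
print(detect_ellipse(pts))
```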

  20. Multipurpose HTS Coagulation Analysis: Assay Development and Assessment of Coagulopathic Snake Venoms

    PubMed Central

    Still, Kristina B. M.; Nandlal, Randjana S. S.; Slagboom, Julien; Somsen, Govert W.; Kool, Jeroen

    2017-01-01

    Coagulation assays currently employed are often low throughput, require specialized equipment and/or require large blood/plasma samples. This study describes the development, optimization and early application of a generic low-volume and high-throughput screening (HTS) assay for coagulation activity. The assay is a time-course spectrophotometric measurement which kinetically measures the clotting profile of bovine or human plasma incubated with Ca2+ and a test compound. The HTS assay can be a valuable new tool for coagulation diagnostics in hospitals, for research in coagulation disorders, for drug discovery and for venom research. A major effect following envenomation by many venomous snakes is perturbation of blood coagulation caused by haemotoxic compounds present in the venom. These compounds, such as anticoagulants, are potential leads in drug discovery for cardiovascular diseases. The assay was implemented in an integrated analytical approach consisting of reversed-phase liquid chromatography (LC) for separation of crude venom components in combination with parallel post-column coagulation screening and mass spectrometry (MS). The approach was applied for the rapid assessment and identification of profiles of haemotoxic compounds in snake venoms. Procoagulant and anticoagulant activities were correlated with accurate masses from the parallel MS measurements, facilitating the detection of peptides showing strong anticoagulant activity. PMID:29186818

  1. FPGA implementation of low complexity LDPC iterative decoder

    NASA Astrophysics Data System (ADS)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.
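
    To make the decoding step concrete, here is a minimal sketch of the normalised min-sum check-node update that this family of simplified message-passing decoders is built around (a textbook version, our illustration; the article's exact simplification and FPGA mapping are not reproduced):

```python
# Normalised min-sum check-node update (core of many LDPC decoders).
import numpy as np

def check_node_update(llrs, scale=0.75):
    """Outgoing message on edge i = scaled sign product and minimum
    magnitude over all *other* incoming LLRs (assumed nonzero)."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    sign_prod = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    others_min = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    # Multiplying by signs[i] cancels edge i's own sign out of sign_prod.
    return scale * sign_prod * signs * others_min

print(check_node_update([1.2, -0.4, 2.0, -3.1]))
# -> [ 0.3 -0.9  0.3 -0.3]: each edge sees the other edges' minimum magnitude
```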

  2. Microbial communities of biomethanization digesters fed with raw and heat pre-treated microalgae biomasses.

    PubMed

    Sanz, Jose Luis; Rojas, Patricia; Morato, Ana; Mendez, Lara; Ballesteros, Mercedes; González-Fernández, Cristina

    2017-02-01

Microalgae biomasses are considered promising feedstocks for biofuel and methane production. Two continuously stirred tank reactors (CSTR), fed with fresh (CSTR-C) and heat pre-treated (CSTR-T) Chlorella biomass, were run in parallel in order to determine methane production. The methane yield was 1.5 times higher in CSTR-T with regard to CSTR-C. Aiming to understand the roles of the microorganisms within the reactors, the sludge used as an inoculum (I), plus raw (CSTR-C) and heat pre-treated (CSTR-T) samples, were analyzed by high-throughput pyrosequencing. The bacterial communities were dominated by Proteobacteria, Bacteroidetes, Chloroflexi and Firmicutes. Spirochaetae and Actinobacteria were only detected in sample I. Proteobacteria, mainly Alphaproteobacteria, were by far the dominant phylum within the CSTR-C bioreactor. Many of the sequences retrieved were related to bacteria present in activated sludge treatment plants, and they were absent after thermal pre-treatment. Most of the sequences affiliated to the Bacteroidetes were related to uncultured groups. Anaerolineaceae was the sole family of the Chloroflexi phylum found. All of the genera identified within the Firmicutes phylum carry out macromolecule hydrolysis and by-product fermentation. Proteolytic bacteria were prevalent over saccharolytic microbes. The percentage of proteolytic genera increased from the inoculum to the CSTR-T sample, in parallel with the increase in available protein owing to the high protein content of Chlorella. Relating the taxa identified by high-throughput sequencing to their functional roles remains a future challenge. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Associative Memory In A Phase Conjugate Resonator Cavity Utilizing A Hologram

    NASA Astrophysics Data System (ADS)

    Owechko, Y.; Marom, E.; Soffer, B. H.; Dunning, G.

    1987-01-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular, those based on holographic principles,3,6,7 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  4. Selective recognition of parallel and anti-parallel thrombin-binding aptamer G-quadruplexes by different fluorescent dyes

    PubMed Central

    Zhao, Dan; Dong, Xiongwei; Jiang, Nan; Zhang, Dan; Liu, Changlin

    2014-01-01

G-quadruplexes (G4) have shown increasing potential in applications such as molecular therapeutics, diagnostics and sensing. Both Thioflavin T (ThT) and N-Methyl mesoporphyrin IX (NMM) become fluorescent in the presence of most G4, but the thrombin-binding aptamer (TBA) has been reported as the only exception among the known G4-forming oligonucleotides when ThT is used as a high-throughput assay to identify G4 formation. Here, we investigate the interactions between ThT/NMM and TBA through fluorescence spectroscopy, circular dichroism and molecular docking simulation experiments in the absence or presence of cations. The results show that a large ThT fluorescence enhancement can be observed only when ThT binds to the parallel TBA quadruplex, which ThT induces to form in the absence of cations. On the other hand, a large enhancement in NMM fluorescence can be obtained only in the presence of the anti-parallel TBA quadruplex, which is induced to fold by K+ or thrombin. The highly selective recognition of TBA quadruplexes with different topologies by the two probes may be useful for investigating the interactions between conformation-specific G4 and the associated proteins, and could also be applied in label-free fluorescent sensing of other biomolecules. PMID:25245945

  5. High-throughput microfluidic single-cell digital polymerase chain reaction.

    PubMed

    White, A K; Heyries, K A; Doolin, C; Vaninsberghe, M; Hansen, C L

    2013-08-06

Here we present an integrated microfluidic device for the high-throughput digital polymerase chain reaction (dPCR) analysis of single cells. This device allows for the parallel processing of single cells and executes all steps of analysis, including cell capture, washing, lysis, reverse transcription, and dPCR analysis. The cDNA from each single cell is distributed into a dedicated dPCR array consisting of 1020 chambers, each having a volume of 25 pL, using surface-tension-based sample partitioning. The high density of this dPCR format (118,900 chambers/cm²) allows the analysis of 200 single cells per run, for a total of 204,000 PCR reactions using a device footprint of 10 cm². Experiments using RNA dilutions show this device achieves shot-noise-limited performance in quantifying single molecules, with a dynamic range of 10⁴. We performed over 1200 single-cell measurements, demonstrating the use of this platform in the absolute quantification of both high- and low-abundance mRNA transcripts, as well as micro-RNAs that are not easily measured using alternative hybridization methods. We further apply the specificity and sensitivity of single-cell dPCR to performing measurements of RNA editing events in single cells. High-throughput dPCR provides a new tool in the arsenal of single-cell analysis methods, with a unique combination of speed, precision, sensitivity, and specificity. We anticipate this approach will enable new studies where high-performance single-cell measurements are essential, including the analysis of transcriptional noise, allelic imbalance, and RNA processing.
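
    The quantitative link between positive chambers and absolute copy number in an array of this geometry is standard digital-PCR Poisson arithmetic (the formula below is textbook dPCR math, not code from the paper; the chamber count and volume are taken from the abstract):

```python
# Standard dPCR quantification for a 1020-chamber, 25 pL array.
import math

def dpcr_quantify(n_positive, n_chambers=1020, chamber_pl=25.0):
    p = n_positive / n_chambers
    lam = -math.log(1.0 - p)                    # mean copies per chamber (Poisson)
    total_copies = lam * n_chambers
    copies_per_ul = lam / (chamber_pl * 1e-6)   # 25 pL = 25e-6 uL
    return total_copies, copies_per_ul

print(dpcr_quantify(300))   # e.g. 300 positive chambers -> ~355 copies loaded
```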

  6. QPatch: the missing link between HTS and ion channel drug discovery.

    PubMed

    Mathes, Chris; Friis, Søren; Finley, Michael; Liu, Yi

    2009-01-01

    The conventional patch clamp has long been considered the best approach for studying ion channel function and pharmacology. However, its low throughput has been a major hurdle to overcome for ion channel drug discovery. The recent emergence of higher throughput, automated patch clamp technology begins to break this bottleneck by providing medicinal chemists with high-quality, information-rich data in a more timely fashion. As such, these technologies have the potential to bridge a critical missing link between high-throughput primary screening and meaningful ion channel drug discovery programs. One of these technologies, the QPatch automated patch clamp system developed by Sophion Bioscience, records whole-cell ion channel currents from 16 or 48 individual cells in a parallel fashion. Here, we review the general applicability of the QPatch to studying a wide variety of ion channel types (voltage-/ligand-gated cationic/anionic channels) in various expression systems. The success rate of gigaseals, formation of the whole-cell configuration and usable cells ranged from 40-80%, depending on a number of factors including the cell line used, ion channel expressed, assay development or optimization time and expression level in these studies. We present detailed analyses of the QPatch features and results in case studies in which secondary screening assays were successfully developed for a voltage-gated calcium channel and a ligand-gated TRP channel. The increase in throughput compared to conventional patch clamp with the same cells was approximately 10-fold. We conclude that the QPatch, combining high data quality and speed with user friendliness and suitability for a wide array of ion channels, resides on the cutting edge of automated patch clamp technology and plays a pivotal role in expediting ion channel drug discovery.

  7. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    PubMed Central

    Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-01-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL−1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253

  8. Microfluidic biolector-microfluidic bioprocess control in microtiter plates.

    PubMed

    Funke, Matthias; Buchenauer, Andreas; Schnakenberg, Uwe; Mokwa, Wilfried; Diederichs, Sylvia; Mertens, Alan; Müller, Carsten; Kensy, Frank; Büchs, Jochen

    2010-10-15

In industrial-scale biotechnological processes, the active control of the pH value combined with the controlled feeding of substrate solutions (fed-batch) is the standard strategy to cultivate both prokaryotic and eukaryotic cells. In contrast, for small-scale cultivations, much simpler batch experiments with no process control are performed. This lack of process control often hinders researchers from scaling fermentation experiments up and down, because the microbial metabolism, and thereby the growth and production kinetics, changes drastically depending on the cultivation strategy applied. While small-scale batches are typically performed highly parallel and in high throughput, large-scale cultivations demand sophisticated equipment for process control, which is in most cases costly and difficult to handle. Currently, there is no technical system on the market that realizes simple process control in high throughput. The novel concept of a microfermentation system described in this work combines a fiber-optic online-monitoring device for microtiter plates (MTPs), the BioLector technology, with microfluidic control of cultivation processes in volumes below 1 mL. In the microfluidic chip, a micropump is integrated to realize distinct substrate flow rates during fed-batch cultivation at microscale. Hence, a cultivation system with several distinct advantages could be established: (1) high information output on a microscale; (2) many experiments can be performed in parallel and be automated using MTPs; (3) the system is user-friendly and can easily be transferred to a disposable single-use system. This article elucidates this new concept and illustrates applications in fermentations of Escherichia coli under pH-controlled and fed-batch conditions in shaken MTPs. Copyright 2010 Wiley Periodicals, Inc.

  9. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion has been allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to process them must grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.

  10. Parallel changes of taxonomic interaction networks in lacustrine bacterial communities induced by a polymetallic perturbation

    PubMed Central

Laplante, Karine; Boutin, Sébastien; Derome, Nicolas

    2013-01-01

Heavy metals released by anthropogenic activities such as mining trigger profound changes in bacterial communities. In this study we used 16S SSU rRNA gene high-throughput sequencing to characterize the impact of a polymetallic perturbation and other environmental parameters on taxonomic networks within five lacustrine bacterial communities from sites located near Rouyn-Noranda, Quebec, Canada. The results showed that community equilibrium was disturbed in terms of both diversity and structure. Moreover, heavy metals, especially cadmium combined with water acidity, induced parallel changes among sites via the selection of resistant OTUs (operational taxonomic units) and perturbations of taxonomic dominance favoring the Alphaproteobacteria. Furthermore, under a similar selective pressure, covariation trends between phyla revealed conservation and parallelism within interphylum interactions. Our study sheds light on the importance of analyzing communities not only from a phylogenetic perspective but also with a quantitative approach, to provide significant insights into the evolutionary forces that shape the dynamics of taxonomic interaction networks in bacterial communities. PMID:23789031

  11. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi-/many-core processors and graphics processors. Several case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API-based programming.
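
    A minimal Python analogue of the kind of throughput experiment described (the paper itself compares OpenMP and CUDA; here a process pool merely illustrates the same data-parallel pattern on a multi-core CPU, and the workload function is hypothetical):

```python
# Toy data-parallel throughput comparison: serial vs. multi-process.
import time
import numpy as np
from multiprocessing import Pool

def work(chunk):
    # Identical computation per element: the data-level parallelism
    # that SIMD/GPU architectures exploit.
    return np.sqrt(chunk) * np.sin(chunk)

if __name__ == "__main__":
    data = [np.random.rand(2_000_000) for _ in range(8)]

    t0 = time.perf_counter()
    serial = [work(c) for c in data]
    t1 = time.perf_counter()

    with Pool(processes=4) as pool:
        parallel = pool.map(work, data)
    t2 = time.perf_counter()

    print(f"serial: {t1 - t0:.2f} s, 4 processes: {t2 - t1:.2f} s")
```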

  12. Use of the melting curve assay as a means for high-throughput quantification of Illumina sequencing libraries.

    PubMed

    Shinozuka, Hiroshi; Forster, John W

    2016-01-01

Background. Multiplexed sequencing is commonly performed on massively parallel short-read sequencing platforms such as Illumina, and the efficiency of library normalisation can affect the quality of the output dataset. Although several library normalisation approaches have been established, none are ideal for highly multiplexed sequencing due to issues of cost and/or processing time. Methods. An inexpensive and high-throughput library quantification method has been developed, based on an adaptation of the melting curve assay. Sequencing libraries were subjected to the assay using the Bio-Rad Laboratories CFX Connect™ Real-Time PCR Detection System. The library quantity was calculated through summation of the reduction in relative fluorescence units between 86 and 95 °C. Results. PCR-enriched sequencing libraries are suitable for this quantification without pre-purification of DNA. Short DNA molecules, which ideally should be eliminated from the library for subsequent processing, were differentiated from the target DNA in a mixture on the basis of differences in melting temperature. Quantification results for long sequences targeted using the melting curve assay correlated with those from existing methods (R² > 0.77) and with those observed from MiSeq sequencing (R² = 0.82). Discussion. The results of multiplexed sequencing suggested that the normalisation performance of the described method is equivalent to that of another recently reported high-throughput bead-based method, BeNUS. However, costs for the melting curve assay are considerably lower and processing times shorter than those of other existing methods, suggesting greater suitability for highly multiplexed sequencing applications.
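
    The quantification rule stated above reduces to a few lines of arithmetic. The sketch below is our reading of "summation of the reduction in relative fluorescence units between 86 and 95 °C"; the function and argument names are hypothetical:

```python
# Library quantity = summed RFU decreases across the 86-95 degC window.
import numpy as np

def library_quantity(temps_c, rfu, lo=86.0, hi=95.0):
    """Sum of the stepwise drops in relative fluorescence in [lo, hi] degC."""
    temps_c, rfu = np.asarray(temps_c), np.asarray(rfu)
    window = (temps_c >= lo) & (temps_c <= hi)
    drops = -np.diff(rfu[window])       # stepwise change, sign-flipped
    return drops[drops > 0].sum()       # count only the decreases in signal
```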

  13. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G′ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G′, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphics Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion for high-performance computing systems.

  14. A high-throughput platform for low-volume high-temperature/pressure sealed vessel solvent extractions.

    PubMed

    Damm, Markus; Kappe, C Oliver

    2011-11-30

    A high-throughput platform for performing parallel solvent extractions in sealed HPLC/GC vials inside a microwave reactor is described. The system consists of a strongly microwave-absorbing silicon carbide plate with 20 cylindrical wells of appropriate dimensions to be fitted with standard HPLC/GC autosampler vials serving as extraction vessels. Because up to four heating platforms can be heated simultaneously (80 vials), efficient parallel analytical-scale solvent extractions can be performed using volumes of 0.5-1.5 mL at a maximum temperature/pressure limit of 200°C/20 bar. Since the extraction and subsequent analysis by either gas chromatography or liquid chromatography coupled with mass detection (GC-MS or LC-MS) is performed directly from the autosampler vial, errors caused by sample transfer can be minimized. The platform was evaluated for the extraction and quantification of caffeine from commercial coffee powders, assessing different solvent types, extraction temperatures and times. For example, 141±11 μg caffeine (from 5 mg coffee powder) were extracted during a single extraction cycle using methanol as extraction solvent, whereas only 90±11 μg were obtained when performing the extraction in methylene chloride under the same conditions (90°C, 10 min). In multiple extraction experiments a total of ~150 μg caffeine was extracted from 5 mg commercial coffee powder. In addition to the quantitative caffeine determination, a comparative qualitative analysis of the liquid-phase coffee extracts and the headspace volatiles was performed, placing special emphasis on headspace analysis using solid-phase microextraction (SPME) techniques. The miniaturized parallel extraction technique introduced herein allows solvent extractions to be performed at significantly expanded temperature/pressure limits and shortened extraction times, using standard HPLC autosampler vials as reaction vessels. Remarkable differences in peak pattern and main peaks were observed when low-temperature extraction (60°C) and high-temperature extraction (160°C) were compared prior to headspace-SPME-GC-MS performed in the same HPLC/GC vials. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. SNP discovery by high-throughput sequencing in soybean

    PubMed Central

    2010-01-01

    Background With the advance of new massively parallel genotyping technologies, quantitative trait loci (QTL) fine mapping and map-based cloning become more achievable in identifying genes for important and complex traits. Development of high-density genetic markers in the QTL regions of specific mapping populations is essential for fine-mapping and map-based cloning of economically important genes. Single nucleotide polymorphisms (SNPs) are the most abundant form of genetic variation existing between the diverse genotypes that are usually used for QTL mapping studies. The massively parallel sequencing technologies (Roche GS/454, Illumina GA/Solexa, and ABI/SOLiD) have been widely applied to identify genome-wide sequence variations. However, it remains unclear whether sequence data at a low sequencing depth are enough to detect the variations existing in any QTL regions of interest in a crop genome, and how to prepare sequencing samples for a complex genome such as soybean. Therefore, with the aims of identifying SNP markers in a cost-effective way for fine-mapping several QTL regions, and testing the validation rate of the putative SNPs predicted with Solexa short sequence reads at a low sequencing depth, we evaluated a pooled DNA fragment reduced representation library and SNP detection methods applied to short read sequences generated by Solexa high-throughput sequencing technology. Results A total of 39,022 putative SNPs were identified by the Illumina/Solexa sequencing system using a reduced representation DNA library of two parental lines of a mapping population. The validation rates of these putative SNPs predicted with low and high stringency were 72% and 85%, respectively. One hundred sixty-four SNP markers resulting from the validation of putative SNPs were selectively chosen to target a known QTL, thereby increasing the marker density of the targeted region to one marker per 42 kbp. Conclusions We have demonstrated how to quickly identify large numbers of SNPs for fine mapping of QTL regions by applying massively parallel sequencing combined with genome complexity reduction techniques. This SNP discovery approach is more efficient for targeting multiple QTL regions in the same genetic population, and can be applied to other crops. PMID:20701770
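    To illustrate the low- versus high-stringency filtering idea, here is a hedged sketch; the PutativeSNP fields and the depth/allele-fraction thresholds are invented for illustration and are not the study's actual criteria.

    ```python
    # Toy two-tier SNP filter: higher stringency keeps fewer, better-supported
    # calls, which is why its validation rate is expected to be higher.
    from dataclasses import dataclass

    @dataclass
    class PutativeSNP:
        position: int
        depth: int            # reads covering the site (assumed field)
        alt_fraction: float   # fraction of reads supporting the variant allele

    def passes(snp, min_depth, min_alt_fraction):
        return snp.depth >= min_depth and snp.alt_fraction >= min_alt_fraction

    candidates = [
        PutativeSNP(1200, depth=3, alt_fraction=1.00),
        PutativeSNP(5340, depth=8, alt_fraction=0.95),
        PutativeSNP(9917, depth=5, alt_fraction=0.60),
    ]

    low  = [s for s in candidates if passes(s, min_depth=2, min_alt_fraction=0.5)]
    high = [s for s in candidates if passes(s, min_depth=4, min_alt_fraction=0.9)]
    print(len(low), len(high))   # high stringency retains the best-supported calls
    ```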

  16. Light fluorous-tagged traceless one-pot synthesis of benzimidazoles facilitated by microwave irradiation.

    PubMed

    Tseng, Chih-Chung; Tasi, Cheng-Hsun; Sun, Chung-Ming

    2012-06-01

    A novel protocol for the rapid assembly of the benzimidazole framework has been demonstrated. The method, combined with a light fluorous tag, provides a convenient route to the diversification of benzimidazoles and to easy purification via fluorous solid-phase extraction (F-SPE) in a parallel manner. The key transformation of this study involves in situ reduction of an aromatic nitro compound, amide formation, cyclization, and aromatization, promoted by microwave irradiation in a one-pot fashion. The strategy is envisaged for establishing drug-like small-molecule libraries for high-throughput screening.

  17. Hybrid optoelectronic neural networks using a mutually pumped phase-conjugate mirror

    NASA Astrophysics Data System (ADS)

    Dunning, G. J.; Owechko, Y.; Soffer, B. H.

    1991-06-01

    A method is described for interconnecting hybrid optoelectronic neural networks by using a mutually pumped phase conjugate mirror (MP-PCM). In this method, cross talk due to Bragg degeneracies is greatly reduced by storing each weight among many spatially and angularly multiplexed gratings. The effective weight throughput is increased by the parallel updating of weights using outer-product learning. Experiments demonstrated a high degree of interconnectivity between adjacent pixels. A diagram is presented showing the architecture for the optoelectronic neural network using an MP-PCM.

  18. Micro-differential scanning calorimeter for liquid biological samples

    DOE PAGES

    Wang, Shuyu; Yu, Shifeng; Siedler, Michael S.; ...

    2016-10-20

    Here, we developed an ultrasensitive micro-DSC (differential scanning calorimeter) for liquid protein sample characterization. Our design integrates vanadium oxide thermistors and flexible polymer substrates with microfluidic chambers to achieve a calorimeter sensor with high sensitivity (6 V/W), low thermal conductivity (0.7 mW/K), high power resolution (40 nW), and a well-defined liquid volume (1 μl) in a compact and cost-effective way. Furthermore, we demonstrated the performance of the sensor with lysozyme unfolding. The measured transition temperature and enthalpy change were consistent with previously reported data. This micro-DSC could raise the prospect of high-throughput biochemical measurement by parallel operation with miniaturized sample consumption.

  19. Construction of human antibody gene libraries and selection of antibodies by phage display.

    PubMed

    Frenzel, André; Kügler, Jonas; Wilke, Sonja; Schirrmann, Thomas; Hust, Michael

    2014-01-01

    Antibody phage display is the most commonly used in vitro selection technology and has yielded thousands of useful antibodies for research, diagnostics, and therapy. The prerequisite for successful generation and development of human recombinant antibodies using phage display is the construction of a high-quality antibody gene library. Here, we describe the methods for the construction of human immune and naive scFv gene libraries. The success also depends on the panning strategy for the selection of binders from these libraries. In this article, we describe a panning strategy that is high-throughput compatible and allows parallel selection in microtiter plates.

  20. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system in which sending and receiving gateways interconnect high-speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high-speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.
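    The striping and credit-based flow control described above can be sketched in a few lines. This is a toy model, not the patented hardware: the round-robin byte striping and the credit counter are illustrative simplifications.

    ```python
    # Toy model of a stripe distributor/collector and credit-based flow control.
    def stripe(data: bytes, n_links: int):
        """Distribute bytes round-robin across n_links (the 'stripe distributor')."""
        return [data[i::n_links] for i in range(n_links)]

    def collect(stripes):
        """Reassemble the original byte order (the 'stripe collector')."""
        out = bytearray()
        for i in range(max(len(s) for s in stripes)):
            for s in stripes:
                if i < len(s):
                    out.append(s[i])
        return bytes(out)

    class CreditReceiver:
        """Receiver grants buffer credits; the sender may not exceed them."""
        def __init__(self, buffer_frames: int):
            self.credits = buffer_frames
        def grant(self):                 # advertised free buffer space
            return self.credits
        def deliver(self, frame):        # consumes one credit per frame
            assert self.credits > 0, "sender violated flow control"
            self.credits -= 1
        def free(self):                  # buffer drained; credit returned
            self.credits += 1

    data = b"HIPPI-over-SONET"
    assert collect(stripe(data, 4)) == data

    rx = CreditReceiver(buffer_frames=2)
    for frame in stripe(data, 4):
        if rx.grant() > 0:
            rx.deliver(frame)   # two frames accepted; the rest await credits
    ```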

  1. PGAS in-memory data processing for the Processing Unit of the Upgraded Electronics of the Tile Calorimeter of the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Ohene-Kwofie, Daniel; Otoo, Ekow

    2015-10-01

    The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high-performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem, then, is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI Express to enhance data processing throughput.
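    The core PGAS idea, aggregating many local memories into one global address space, reduces to an address translation. A minimal sketch follows, with byte arrays standing in for the RDMA-accessible node memories; the node count and memory size are arbitrary assumptions.

    ```python
    # Minimal PGAS sketch: a global offset maps to a (node, local offset) pair.
    NODE_MEM = 1 << 20                       # 1 MiB of local memory per PU (assumed)
    nodes = [bytearray(NODE_MEM) for _ in range(8)]

    def translate(global_addr: int):
        """Map a global address onto its owning node and local offset."""
        return global_addr // NODE_MEM, global_addr % NODE_MEM

    def put(global_addr: int, payload: bytes):
        node, off = translate(global_addr)
        nodes[node][off:off + len(payload)] = payload   # stand-in for an RDMA write

    def get(global_addr: int, size: int) -> bytes:
        node, off = translate(global_addr)
        return bytes(nodes[node][off:off + size])       # stand-in for an RDMA read

    put(3 * NODE_MEM + 100, b"calorimeter fragment")
    print(get(3 * NODE_MEM + 100, 20))
    ```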

  2. Holographic femtosecond laser processing and its application to biological materials (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hayasaki, Yoshio

    2017-02-01

    Femtosecond laser processing is a promising tool for fabricating novel and useful structures on the surfaces of and inside materials. An enormous number of pulse irradiation points will be required for fabricating actual structures at millimeter scale, and therefore the throughput of femtosecond laser processing must be improved for practical adoption of this technique. One promising method to improve throughput is parallel pulse generation based on a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM), a technique called holographic femtosecond laser processing. The holographic method offers advantages such as high throughput, high light-use efficiency, and variable, instantaneous, and 3D patterning. Furthermore, the use of an SLM gives the ability to correct unknown imperfections of the optical system and inhomogeneity in a sample through in-system optimization of the CGH. In addition, the CGH can adaptively compensate for dynamic, unpredictable mechanical movements, air and liquid disturbances, and shape variations and deformation of the target sample, as well as provide adaptive wavefront control for environmental changes. It is therefore a powerful tool for processing biological cells and tissues, because they have free-form, variable, and deformable structures. In this paper, we present the principle and the experimental setup of holographic femtosecond laser processing, and effective strategies for processing biological samples. We demonstrate the femtosecond laser processing of biological materials and the processing properties.
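    One widely used way to compute a CGH for parallel multi-spot generation is iterative Fourier (Gerchberg-Saxton) optimization of the SLM phase. The abstract does not name the authors' algorithm, so the sketch below is a generic illustration with an arbitrary grid size, spot pattern, and iteration count.

    ```python
    # Generic Gerchberg-Saxton sketch: optimize a phase-only SLM pattern so the
    # focal-plane intensity approximates a desired multi-spot target.
    import numpy as np

    N, ITERS = 256, 30
    target = np.zeros((N, N))
    target[96, 96] = target[96, 160] = target[160, 128] = 1.0   # desired spots

    phase = 2 * np.pi * np.random.rand(N, N)     # random initial SLM phase
    for _ in range(ITERS):
        field = np.exp(1j * phase)                              # unit-amplitude pupil
        focal = np.fft.fft2(field)                              # propagate to focus
        focal = np.sqrt(target) * np.exp(1j * np.angle(focal))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(focal))                   # keep only SLM phase

    spots = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    print(spots[96, 96] / spots.mean())   # spot intensity well above background
    ```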

  3. Data Partitioning and Load Balancing in Parallel Disk Systems

    NASA Technical Reports Server (NTRS)

    Scheuermann, Peter; Weikum, Gerhard; Zabback, Peter

    1997-01-01

    Parallel disk systems provide opportunities for exploiting I/O parallelism in two possible ways, namely via inter-request and intra-request parallelism. In this paper we discuss the main issues in performance tuning of such systems, namely striping and load balancing, and show their relationship to response time and throughput. We outline the main components of an intelligent, self-reliant file system that aims to optimize striping by taking into account the requirements of the applications, and performs load balancing by judicious file allocation and dynamic redistribution of the data when access patterns change. Our system uses simple but effective heuristics that incur only a small overhead. We present performance experiments based on synthetic workloads and real-life traces.
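    A minimal sketch of the two tuning mechanisms discussed above: round-robin striping of a file's units across disks, and a greedy least-loaded heuristic for file allocation. Unit sizes, disk counts, and "heat" values are illustrative assumptions, not the paper's parameters.

    ```python
    # Toy striping + greedy load-balancing heuristic.
    def stripe_units(file_size: int, unit: int, n_disks: int):
        """Assign each striping unit of a file to a disk, round-robin."""
        n_units = -(-file_size // unit)          # ceiling division
        return [(u, u % n_disks) for u in range(n_units)]

    class GreedyAllocator:
        """Place each new file on the currently least-loaded disk."""
        def __init__(self, n_disks: int):
            self.load = [0.0] * n_disks          # estimated access 'heat' per disk
        def place(self, file_heat: float) -> int:
            disk = self.load.index(min(self.load))
            self.load[disk] += file_heat
            return disk

    print(stripe_units(10_000, unit=4096, n_disks=4))   # (unit, disk) pairs
    alloc = GreedyAllocator(n_disks=4)
    for heat in [5.0, 1.0, 3.0, 2.0, 4.0]:
        print(alloc.place(heat), alloc.load)
    ```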

  4. Performance of GeantV EM Physics Models

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  5. Transmissive Nanohole Arrays for Massively-Parallel Optical Biosensing

    PubMed Central

    2015-01-01

    A high-throughput optical biosensing technique is proposed and demonstrated. This hybrid technique combines optical transmission of nanoholes with colorimetric silver staining. The size and spacing of the nanoholes are chosen so that individual nanoholes can be independently resolved in massively parallel fashion using an ordinary transmission optical microscope, and, in place of determining a spectral shift, the brightness of each nanohole is recorded to greatly simplify the readout. Each nanohole then acts as an independent sensor, and the blocking of nanohole optical transmission by enzymatic silver staining defines the specific detection of a biological agent. Nearly 10,000 nanoholes can be simultaneously monitored within the field of view of a typical microscope. As an initial proof of concept, biotinylated lysozyme (biotin-HEL) was used as a model analyte, giving a detection limit as low as 0.1 ng/mL. PMID:25530982

  6. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA

    PubMed Central

    Wright, Imogen A.; Travers, Simon A.

    2014-01-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618

  7. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    PubMed

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes.

  8. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    PubMed Central

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096

  9. High-Throughput Incubation and Quantification of Agglutination Assays in a Microfluidic System.

    PubMed

    Castro, David; Conchouso, David; Kodzius, Rimantas; Arevalo, Arpys; Foulds, Ian G

    2018-06-04

    In this paper, we present a two-phase microfluidic system capable of incubating and quantifying microbead-based agglutination assays. The microfluidic system is based on a simple fabrication solution, which requires only laboratory tubing filled with carrier oil, driven by negative pressure using a syringe pump. We provide a user-friendly interface, in which a pipette is used to insert single droplets of a 1.25-µL volume into a system that is continuously running and therefore works entirely on demand without the need for stopping, resetting or washing the system. These assays are incubated by highly efficient passive mixing with a sample-to-answer time of 2.5 min, a 5–10-fold improvement over traditional agglutination assays. We study system parameters such as channel length, incubation time and flow speed to select optimal assay conditions, using the streptavidin-biotin interaction as a model analyte quantified using optical image processing. We then investigate the effect of changing both analyte and microbead concentrations, with a minimum detection limit of 100 ng/mL. The system can be both low- and high-throughput, depending on the rate at which assays are inserted. In our experiments, we were able to easily produce throughputs of 360 assays per hour by simple manual pipetting, which could be increased even further by automation and parallelization. Agglutination assays are a versatile tool, capable of detecting an ever-growing catalog of infectious diseases, proteins and metabolites. A system such as this one is a step towards being able to produce high-throughput microfluidic diagnostic solutions suitable for widespread adoption. The development of analytical techniques in the microfluidic format, such as the one presented in this work, is an important step in being able to continuously monitor the performance and microfluidic outputs of organ-on-chip devices.

  10. High performance computing environment for multidimensional image analysis

    PubMed Central

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-01-01

    Background The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
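    The decomposition strategy can be illustrated in a single process: split the volume into slabs, filter each slab with a ghost (halo) layer wide enough for the kernel, and stitch the interiors back together. The sketch below uses a 3×3×3 median filter on a tiny synthetic volume; on the real machine each slab would reside on its own node, with halos exchanged between nearest neighbours, and all sizes here are illustrative.

    ```python
    # Single-process sketch of halo-based 3D domain decomposition for filtering.
    import numpy as np
    from scipy.ndimage import median_filter

    KERNEL, HALO = 3, 1                     # a 3x3x3 median needs a 1-voxel halo
    volume = np.random.rand(32, 64, 64)     # stand-in for a microscopy volume

    slabs = []
    for idx in np.array_split(np.arange(volume.shape[0]), 4):   # 4 'processors'
        lo, hi = idx[0], idx[-1] + 1
        zlo, zhi = max(lo - HALO, 0), min(hi + HALO, volume.shape[0])
        filtered = median_filter(volume[zlo:zhi], size=KERNEL)  # local slab filter
        slabs.append(filtered[lo - zlo : hi - zlo])             # keep the interior

    stitched = np.concatenate(slabs, axis=0)
    assert np.allclose(stitched, median_filter(volume, size=KERNEL))
    print("decomposed result matches the monolithic filter")
    ```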

  11. High performance computing environment for multidimensional image analysis.

    PubMed

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.

  12. The JCSG high-throughput structural biology pipeline.

    PubMed

    Elsliger, Marc André; Deacon, Ashley M; Godzik, Adam; Lesley, Scott A; Wooley, John; Wüthrich, Kurt; Wilson, Ian A

    2010-10-01

    The Joint Center for Structural Genomics high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step in the process and can adapt to a wide range of protein targets from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances and developments. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications.

  13. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes, through bottleneck performance evaluation and simulation, the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located close to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  14. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments distribute the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximately 96.6% decrease in computing time. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
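    To make the embarrassingly parallel structure concrete, here is a minimal sketch that fans independent Monte Carlo replicates of a toy two-state (sagebrush/juniper) transition model across local cores; the transition probabilities and the model itself are invented for illustration, and on an HTC cluster each replicate would instead be submitted as a separate job.

    ```python
    # Embarrassingly parallel Monte Carlo: replicates share nothing, so they
    # scale across cores (or cluster nodes) with no communication.
    import random
    from multiprocessing import Pool

    def simulate(seed, years=100, p_encroach=0.03, p_fire=0.01):
        """Toy state-and-transition run: sagebrush (0) vs juniper-encroached (1)."""
        rng, state = random.Random(seed), 0
        for _ in range(years):
            if state == 0 and rng.random() < p_encroach:
                state = 1                       # juniper establishes
            elif state == 1 and rng.random() < p_fire:
                state = 0                       # fire resets the community
        return state

    if __name__ == "__main__":
        with Pool() as pool:
            finals = pool.map(simulate, range(500))    # 500 independent replicates
        print("fraction encroached:", sum(finals) / len(finals))
    ```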

  15. Acoustic Microfluidics for Bioanalytical Application

    NASA Astrophysics Data System (ADS)

    Lopez, Gabriel

    2013-03-01

    This talk will present new methods for the use of ultrasonic standing waves in microfluidic systems to manipulate microparticles for the purpose of bioassays and bioseparations. We have recently developed multi-node acoustic focusing flow cells that can position particles into many parallel flow streams and have demonstrated the potential of such flow cells in the development of high throughput, parallel flow cytometers. These experiments show the potential for the creation of high throughput flow cytometers in applications requiring high flow rates and rapid detection of rare cells. This talk will also present the development of elastomeric capture microparticles and their use in acoustophoretic separations. We have developed simple methods to form elastomeric particles that are surface-functionalized with biomolecular recognition reagents. These compressible particles exhibit negative acoustic contrast in ultrasound when suspended in aqueous media, blood serum or diluted blood. These particles can be continuously separated from cells by flowing them through a microfluidic device that uses an ultrasonic standing wave to align the blood cells, which exhibit positive acoustic contrast, at a node in the acoustic pressure distribution while aligning the negative acoustic contrast elastomeric particles at the antinodes. Laminar flow of the separated particles to downstream collection ports allows for collection of the separated negative contrast particles and cells. Separated elastomeric particles were analyzed via flow cytometry to demonstrate nanomolar detection of prostate-specific antigen in aqueous buffer and picomolar detection of IgG in plasma and diluted blood samples. This approach has potential applications in the development of rapid assays that detect the presence of low concentrations of biomarkers (including biomolecules and cells) in a number of biological sample types. We acknowledge support through the NSF Research Triangle MRSEC.

  16. The High-Throughput Protein Sample Production Platform of the Northeast Structural Genomics Consortium

    PubMed Central

    Xiao, Rong; Anderson, Stephen; Aramini, James; Belote, Rachel; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John K.; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Jiang, Mei; Kornhaber, Gregory J.; Lee, Dong Yup; Locke, Jessica Y.; Ma, Li-Chung; Maglaqui, Melissa; Mao, Lei; Mitra, Saheli; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Sharma, Seema; Shastry, Ritu; Swapna, G.V.T.; Tong, Saichu N.; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.; Acton, Thomas B.

    2014-01-01

    We describe the core Protein Production Platform of the Northeast Structural Genomics Consortium (NESG) and outline the strategies used for producing high-quality protein samples. The platform is centered on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems. The 6X-His tag allows for similar purification procedures for most targets and implementation of high-throughput (HTP) parallel methods. In most cases, the 6X-His-tagged proteins are sufficiently purified (> 97% homogeneity) for most structural studies using an HTP two-step purification protocol. Using this platform, the open reading frames of over 16,000 different targeted proteins (or domains) have been cloned as > 26,000 constructs. Over the past nine years, more than 16,000 of these constructs have expressed protein, and more than 4,400 proteins (or domains) have been purified to homogeneity in tens of milligram quantities (see Summary Statistics, http://nesg.org/statistics.html). Using these samples, the NESG has deposited more than 900 new protein structures to the Protein Data Bank (PDB). The methods described here are effective in producing eukaryotic and prokaryotic protein samples in E. coli. This paper summarizes some of the updates made to the protein production pipeline in the last five years, corresponding to phase 2 of the NIGMS Protein Structure Initiative (PSI-2) project. The NESG Protein Production Platform is suitable for implementation in a large individual laboratory or by a small group of collaborating investigators. These advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are of broad value to the structural biology, functional proteomics, and structural genomics communities. PMID:20688167

  17. Genome-wide mapping of mutations at single-nucleotide resolution for protein, metabolic and genome engineering.

    PubMed

    Garst, Andrew D; Bassalo, Marcelo C; Pines, Gur; Lynch, Sean A; Halweg-Edwards, Andrea L; Liu, Rongming; Liang, Liya; Wang, Zhiwen; Zeitoun, Ramsey; Alexander, William G; Gill, Ryan T

    2017-01-01

    Improvements in DNA synthesis and sequencing have underpinned comprehensive assessment of gene function in bacteria and eukaryotes. Genome-wide analyses require high-throughput methods to generate mutations and analyze their phenotypes, but approaches to date have been unable to efficiently link the effects of mutations in coding regions or promoter elements in a highly parallel fashion. We report that CRISPR-Cas9 gene editing in combination with massively parallel oligomer synthesis can enable trackable editing on a genome-wide scale. Our method, CRISPR-enabled trackable genome engineering (CREATE), links each guide RNA to homologous repair cassettes that both edit loci and function as barcodes to track genotype-phenotype relationships. We apply CREATE to site saturation mutagenesis for protein engineering, reconstruction of adaptive laboratory evolution experiments, and identification of stress tolerance and antibiotic resistance genes in bacteria. We provide preliminary evidence that CREATE will work in yeast. We also provide a webtool to design multiplex CREATE libraries.

  18. Quantitative analysis of RNA-protein interactions on a massively parallel array for mapping biophysical and evolutionary landscapes

    PubMed Central

    Buenrostro, Jason D.; Chircus, Lauren M.; Araya, Carlos L.; Layton, Curtis J.; Chang, Howard Y.; Snyder, Michael P.; Greenleaf, William J.

    2015-01-01

    RNA-protein interactions drive fundamental biological processes and are targets for molecular engineering, yet quantitative and comprehensive understanding of the sequence determinants of affinity remains limited. Here we repurpose a high-throughput sequencing instrument to quantitatively measure binding and dissociation of MS2 coat protein to >10^7 RNA targets generated on a flow-cell surface by in situ transcription and inter-molecular tethering of RNA to DNA. We decompose the binding energy contributions from primary and secondary RNA structure, finding that differences in affinity are often driven by sequence-specific changes in association rates. By analyzing the biophysical constraints and modeling mutational paths describing the molecular evolution of MS2 from low- to high-affinity hairpins, we quantify widespread molecular epistasis, and a long-hypothesized structure-dependent preference for G:U base pairs over C:A intermediates in evolutionary trajectories. Our results establish quantitative analysis of RNA on a massively parallel array (RNAMaP) as a tool for mapping biophysical and evolutionary relationships across molecular variants. PMID:24727714

  19. Multi-petascale highly efficient parallel supercomputer

    DOEpatents

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
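    For illustration, neighbour addressing in a five-dimensional torus reduces to coordinate arithmetic with wrap-around; the 4×4×4×4×2 shape below is an arbitrary example rather than the patented machine's geometry.

    ```python
    # Each node in a 5D torus has ten nearest neighbours: one in each direction
    # of each dimension, with coordinates wrapping at the edges.
    def torus_neighbors(coord, shape):
        neighbors = []
        for dim in range(len(shape)):
            for step in (-1, +1):
                n = list(coord)
                n[dim] = (n[dim] + step) % shape[dim]   # wrap-around link
                neighbors.append(tuple(n))
        return neighbors

    shape = (4, 4, 4, 4, 2)
    print(torus_neighbors((0, 3, 2, 0, 1), shape))   # ten neighbours in 5D
    ```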

  20. Photonics for aerospace sensors

    NASA Astrophysics Data System (ADS)

    Pellegrino, John; Adler, Eric D.; Filipov, Andree N.; Harrison, Lorna J.; van der Gracht, Joseph; Smith, Dale J.; Tayag, Tristan J.; Viveiros, Edward A.

    1992-11-01

    The maturation of optical component technology is enabling an increasing range of applications. Most notable is the ever-expanding market for fiber optic data and communications links, familiar in both commercial and military markets. The inherent properties of optics and photonics, however, suggest that components and processors may be designed that offer advantages over more commonly considered digital approaches for a variety of airborne sensor and signal processing applications. Various academic, industrial, and governmental research groups have been actively investigating and exploiting these properties of high bandwidth, a large degree of parallelism in computation (e.g., processing in parallel over a two-dimensional field), and interconnectivity, and have succeeded in advancing the technology to the stage of systems demonstration. Advantages such as computational throughput and low operating power consumption are highly attractive for many computationally intensive problems. This review covers the key devices necessary for optical signal and image processors, some of the system application demonstration programs currently in progress, and active research directions for the implementation of next-generation architectures.

  1. Aircraft Configuration and Flight Crew Compliance with Procedures While Conducting Flight Deck Based Interval Management (FIM) Operations

    NASA Technical Reports Server (NTRS)

    Shay, Rick; Swieringa, Kurt A.; Baxley, Brian T.

    2012-01-01

    Flight deck based Interval Management (FIM) applications using ADS-B are being developed to improve both the safety and capacity of the National Airspace System (NAS). FIM is expected to improve the safety and efficiency of the NAS by giving pilots the technology and procedures to precisely achieve an interval behind the preceding aircraft by a specific point. Concurrently but independently, Optimized Profile Descents (OPD) are being developed to help reduce fuel consumption and noise; however, the range of speeds available when flying an OPD results in a decrease in the delivery precision of aircraft to the runway. This requires the addition of a spacing buffer between aircraft, reducing system throughput. FIM addresses this problem by providing pilots with speed guidance to achieve a precise interval behind another aircraft, even while flying optimized descents. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) human-in-the-loop experiment employed 24 commercial pilots to explore the use of FIM equipment to conduct spacing operations behind two aircraft arriving to parallel runways, while flying an OPD during high-density operations. This paper describes the impact of variations in pilot operations, in particular aircraft configuration, compliance with FIM operating procedures, and response to changes of the FIM speed. An example of the displayed FIM speeds used incorrectly by a pilot is also discussed. Finally, this paper examines the relationship between achieving airline operational goals for individual aircraft and the need for ATC to deliver aircraft to the runway with greater precision. The results show that aircraft can fly an OPD and conduct FIM operations to dependent parallel runways, enabling operational goals to be achieved efficiently while maintaining system throughput.

  2. Nonpreemptive run-time scheduling issues on a multitasked, multiprogrammed multiprocessor with dependencies, bidimensional tasks, folding and dynamic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Allan Ray

    1987-05-01

    Increases in high-speed hardware have mandated studies of software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard; thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
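    The flavor of critical-path-driven run-time scheduling can be sketched as list scheduling on a small task DAG: task priority is the longest remaining path to a sink, and ready tasks are dispatched to free processors. The graph, task times, and two-processor setup below are invented; the thesis's dynamic variants additionally recompute priorities at run time.

    ```python
    # Static critical-path list scheduling on a toy DAG with two processors.
    import heapq
    from functools import lru_cache

    tasks = {"a": 2, "b": 3, "c": 1, "d": 4, "e": 2}        # execution times
    succ = {"a": ["c", "d"], "b": ["d"], "c": ["e"], "d": ["e"], "e": []}

    @lru_cache(maxsize=None)
    def critical_path(t):
        """Longest remaining path (including t) to any sink."""
        return tasks[t] + max((critical_path(s) for s in succ[t]), default=0)

    pred_count = {t: 0 for t in tasks}
    for ss in succ.values():
        for s in ss:
            pred_count[s] += 1

    N_PROCS, clock = 2, 0
    ready = [(-critical_path(t), t) for t in tasks if pred_count[t] == 0]
    heapq.heapify(ready)
    running = []                                  # heap of (finish_time, task)
    while ready or running:
        while ready and len(running) < N_PROCS:   # dispatch highest-priority tasks
            _, t = heapq.heappop(ready)
            heapq.heappush(running, (clock + tasks[t], t))
            print(f"t={clock}: start {t}")
        clock, done = heapq.heappop(running)      # advance to next completion
        for s in succ[done]:
            pred_count[s] -= 1
            if pred_count[s] == 0:
                heapq.heappush(ready, (-critical_path(s), s))
    print(f"makespan = {clock}")
    ```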

  3. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

    A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications, where a stable bit is needed to store information, in this work the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily change the MTJ state into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64×64 MTJ array into CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very low energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of the voltage-controlled MTJ switching.

  4. Loss of heterozygosity assay for molecular detection of cancer using energy-transfer primers and capillary array electrophoresis.

    PubMed

    Medintz, I L; Lee, C C; Wong, W W; Pirkola, K; Sidransky, D; Mathies, R A

    2000-08-01

    Microsatellite DNA loci are useful markers for the detection of loss of heterozygosity (LOH) and microsatellite instability (MI) associated with primary cancers. To carry out large-scale studies of LOH and MI in cancer progression, high-throughput instrumentation and assays with high accuracy and sensitivity need to be validated. DNA was extracted from 26 renal tumor and paired lymphocyte samples and amplified with two-color energy-transfer (ET) fluorescent primers specific for loci associated with cancer-induced chromosomal changes. PCR amplicons were separated on the MegaBACE-1000 96-capillary array electrophoresis (CAE) instrument and analyzed with MegaBACE Genetic Profiler v.1.0 software. Ninety-six separations were achieved in parallel in 75 minutes. Loss of heterozygosity was easily detected in tumor samples, as was the gain/loss of microsatellite core repeats. Allelic ratios were determined with a precision of +/- 10% or better. Prior analysis of these samples with slab gel electrophoresis and radioisotope labeling had not detected these changes with as much sensitivity or precision. This study establishes the validity of this assay and the MegaBACE instrument for large-scale, high-throughput studies of the molecular genetic changes associated with cancer.
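    The allelic-ratio computation at the heart of LOH scoring is compact enough to sketch. The peak heights and the flagging thresholds below are invented for illustration; the abstract reports ratio precision of ±10% or better but does not give the cut-offs used.

    ```python
    # Toy LOH scoring: the tumour's allelic ratio is normalised by the paired
    # normal sample; large deviations from 1.0 suggest allele loss.
    def loh_ratio(tumor_a1, tumor_a2, normal_a1, normal_a2):
        """Normalised allelic ratio; ~1.0 means both alleles retained."""
        return (tumor_a1 / tumor_a2) / (normal_a1 / normal_a2)

    def is_loh(ratio, lo=0.5, hi=2.0):   # illustrative thresholds
        return ratio < lo or ratio > hi

    r = loh_ratio(tumor_a1=480, tumor_a2=1500, normal_a1=1020, normal_a2=980)
    print(round(r, 2), is_loh(r))   # strong allele-1 loss in the tumour
    ```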

  5. High-throughput microsphiltration to assess red blood cell deformability and screen for malaria transmission-blocking drugs.

    PubMed

    Duez, Julien; Carucci, Mario; Garcia-Barbazan, Irene; Corral, Matias; Perez, Oscar; Presa, Jesus Luis; Henry, Benoit; Roussel, Camille; Ndour, Papa Alioune; Rosa, Noemi Bahamontes; Sanz, Laura; Gamo, Francisco-Javier; Buffet, Pierre

    2018-06-01

    The mechanical retention of rigid erythrocytes in the spleen is central in major hematological diseases such as hereditary spherocytosis, sickle-cell disease and malaria. Here, we describe the use of microsphiltration (microsphere filtration) to assess erythrocyte deformability in hundreds to thousands of samples in parallel by filtering them through microsphere layers in 384-well plates, in a format adapted to the discovery of compounds that stiffen Plasmodium falciparum gametocytes, with the aim of interrupting malaria transmission. Compound-exposed gametocytes are loaded into microsphiltration plates, filtered and then transferred to imaging plates for analysis. High-content imaging detects viable gametocytes upstream and downstream from filters and quantifies spleen-like retention. This screening assay takes 3-4 d. Unlike currently available methods used to assess red blood cell (RBC) deformability, microsphiltration enables high-throughput pharmacological screening (tens of thousands of compounds tested in a matter of months) and involves a cell mechanical challenge that induces a physiologically relevant dumbbell-shape deformation. It therefore directly assesses the ability of RBCs to cross inter-endothelial splenic slits in vivo. This protocol has potential applications in quality control for transfusion and in determination of phenotypic markers of erythrocytes in hematological diseases.

  6. Automation of a Nile red staining assay enables high throughput quantification of microalgal lipid production.

    PubMed

    Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco

    2016-02-09

    Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher throughput devices for small scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliably measuring large sets of samples within short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye-based assay was established using a liquid handling robot to provide reproducible high throughput quantification of lipids with minimized hands-on time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96 well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids, improving precision from ±8% to ±2% on average. Implementation into an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on time to a third compared to manual operation. Moreover, it was shown that automation enhances accuracy and precision compared to manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes of the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure for biomass concentration so that errors from morphological changes could be excluded. The newly established assay proved to be applicable for absolute quantification of algal lipids, avoiding limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability as well as experimental throughput, simultaneously minimizing the needed hands-on time to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher throughput phototrophic cultivation and thereby contributes to boosting the time efficiency for setting up algae lipid production processes.
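    The gravimetric calibration step amounts to a regression from fluorescence to extractively measured lipid content, which then converts readings from new samples into absolute values. A minimal sketch follows; all numbers are invented for illustration.

    ```python
    # Toy gravimetric calibration: fit lipid content vs. Nile red fluorescence,
    # then use the fit to convert new fluorescence readings to absolute lipid.
    import numpy as np

    fluorescence = np.array([120.0, 260.0, 410.0, 540.0, 690.0])   # a.u.
    lipid_mg_per_l = np.array([55.0, 118.0, 190.0, 248.0, 322.0])  # extractive assay

    slope, intercept = np.polyfit(fluorescence, lipid_mg_per_l, deg=1)

    def lipid_from_fluorescence(rfu):
        return slope * rfu + intercept

    print(round(lipid_from_fluorescence(350.0), 1), "mg/L")
    ```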

  7. Accelerating semantic graph databases on commodity clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morari, Alessandro; Castellana, Vito G.; Haglin, David J.

    We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL-to-C++ compiler, a library of parallel graph methods and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.

  8. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
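    The translation step can be illustrated with a far simpler model than the paper's machine learning technique: learn a mapping from I/O throughput to execution time from profiling runs, then invert it to turn an execution-time goal into a throughput bound. The data points and the 1/throughput model below are invented for illustration.

    ```python
    # Toy goal translation: fit exec_time ~ a + b/throughput, then invert it to
    # find the smallest throughput bound meeting a user's execution-time goal.
    import numpy as np

    # profiled (throughput MB/s, execution time s) pairs for one application
    throughput = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
    exec_time = np.array([310.0, 165.0, 118.0, 92.0, 78.0])

    b, a = np.polyfit(1.0 / throughput, exec_time, deg=1)

    def required_throughput(time_goal_s):
        """Invert the model: throughput needed to hit the execution-time goal."""
        return b / (time_goal_s - a)

    print(round(required_throughput(120.0), 1), "MB/s")
    ```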

  9. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained considerable interest in this field. However, the memory requirements of current algorithms are high and running times are often slow. In this paper, we propose an adaptive, parallel, and highly efficient referential sequence compression method that allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
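
    As a toy illustration of the referential idea (store only differences against a reference), consider the sketch below; real referential compressors additionally handle indels, compact diff encodings, and k-mer indexing, none of which is shown here.

    ```python
    def referential_compress(sequence, reference):
        """Store only (position, base) pairs where the input differs from
        the reference; a toy stand-in for referential genome compression."""
        assert len(sequence) == len(reference)  # toy: substitutions only
        return [(i, b) for i, (a, b) in enumerate(zip(reference, sequence)) if a != b]

    def referential_decompress(diffs, reference):
        seq = list(reference)
        for i, b in diffs:
            seq[i] = b
        return "".join(seq)

    ref = "ACGTACGTACGT"
    sample = "ACGTTCGTACGA"
    diffs = referential_compress(sample, ref)
    assert referential_decompress(diffs, ref) == sample
    print(diffs)  # [(4, 'T'), (11, 'A')]
    ```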

  10. Engineering 'cell robots' for parallel and highly sensitive screening of biomolecules under in vivo conditions.

    PubMed

    Song, Lifu; Zeng, An-Ping

    2017-11-09

    Cells are capable of rapid replication, perform tasks adaptively and ultra-sensitively, and can be considered cheap "biological robots". Here we propose to engineer cells for screening biomolecules in parallel and with high sensitivity. Specifically, we place the biomolecule variants (library) on the bacteriophage M13. We then design cells to screen the library based on cell-phage interactions mediated by a specific intracellular signal change caused by the biomolecule of interest. As proof of concept, we used the intracellular lysine concentration in E. coli as a signal to successfully screen variants of functional aspartate kinase III (AK-III) under in vivo conditions, a key enzyme in L-lysine biosynthesis that is strictly inhibited by L-lysine. Comparative studies with a flow cytometry method failed to distinguish the wild type from lysine-resistant variants of AK-III, confirming the higher sensitivity of the method. It opens up a new and effective way of in vivo high-throughput screening for functional molecules and can be easily implemented at low cost.

  11. Immobilization of human papillomavirus DNA probe for surface plasmon resonance imaging

    NASA Astrophysics Data System (ADS)

    Chong, Xinyuan; Ji, Yanhong; Ma, Suihua; Liu, Le; Liu, Zhiyi; Li, Yao; He, Yonghong; Guo, Jihua

    2009-08-01

    Human papillomavirus (HPV) is a double-stranded DNA virus with diverse subspecies. Nearly 40 subspecies can invade the reproductive organs and cause high-risk diseases such as cervical carcinoma. In order to identify the subspecies of HPV DNA, we used the parallel-scan spectral surface plasmon resonance (SPR) imaging technique, a novel two-dimensional bio-sensing method based on surface plasmon resonance proposed in our previous work, to study the immobilization of HPV DNA probes on a gold film. In the experiment, probes for four HPV DNA subspecies (HPV16, HPV18, HPV31, HPV58) were immobilized on one gold film and incubated under constant-temperature conditions to obtain an HPV DNA probe microarray. We used the parallel-scan spectral SPR imaging system to detect the refractive indices of the HPV DNA subspecies probes. The benefits of this new approach are high sensitivity, label-free operation, strong specificity, and high throughput.

  12. A real-time spike sorting method based on the embedded GPU.

    PubMed

    Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng

    2017-07-01

    Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. The graphics processing unit (GPU), with its high parallel computing capability, might provide an alternative solution for meeting the real-time computational demands of spike sorting. This study reports a method for real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each processing step, the method was further optimized within the GPU's thread and memory model. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than a MATLAB-based classifier on a PC while achieving the same accuracy. The high-performance computing features of the embedded GPU demonstrated in our studies suggest that it provides a promising platform for real-time neural signal processing.
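
    A CPU-bound sketch of the PCA-plus-K-means stage of this sorting approach is shown below using scikit-learn; the waveforms are synthetic, and the CUDA-level thread/memory optimizations described in the paper are not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Synthetic detected spike waveforms: n_spikes x n_samples_per_spike,
    # built from two template shapes plus noise.
    rng = np.random.default_rng(0)
    templates = np.stack([np.sin(np.linspace(0, np.pi, 32)),
                          -np.sin(np.linspace(0, np.pi, 32))])
    labels_true = rng.integers(0, 2, size=200)
    waveforms = templates[labels_true] + 0.1 * rng.standard_normal((200, 32))

    # Step 1: reduce each waveform to a few principal components.
    features = PCA(n_components=3).fit_transform(waveforms)

    # Step 2: cluster the feature vectors into putative units with K-means.
    units = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print(units[:10])
    ```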

  13. Ultra-High Density Holographic Memory Module with Solid-State Architecture

    NASA Technical Reports Server (NTRS)

    Markov, Vladimir B.

    2000-01-01

    NASA's terrestrial, space, and deep-space missions require technology that allows storing, retrieving, and processing a large volume of information. Holographic memory offers high-density data storage with parallel access and high throughput. Several methods exist for data multiplexing based on the fundamental principles of volume hologram selectivity. We recently demonstrated that spatial (amplitude-phase) encoding of the reference wave (SERW) looks promising as a way to increase the storage density. The SERW hologram offers selectivity mechanisms beyond the traditional ones, such as spatial de-correlation between the recorded and reconstruction fields. In this report we present the experimental results of the SERW-hologram memory module with solid-state architecture, which is of particular interest for space operations.

  14. Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules

    PubMed Central

    2015-01-01

    The development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution is presented. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions. PMID:25363506

  15. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion has been allowing for supercomputer-level results at individual workstations. As data sets grow, the work needed to process them grows at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to achieve throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, the same computation is applied to a large set of data where each result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  16. Accelerating Pathology Image Data Cross-Comparison on CPU-GPU Hybrid Systems

    PubMed Central

    Wang, Kaibo; Huai, Yin; Lee, Rubao; Wang, Fusheng; Zhang, Xiaodong; Saltz, Joel H.

    2012-01-01

    As an important application of spatial databases in pathology imaging analysis, cross-comparing the spatial boundaries of a huge amount of segmented micro-anatomic objects demands extremely data- and compute-intensive operations, requiring high throughput at an affordable cost. However, the performance of spatial database systems has not been satisfactory since their implementations of spatial operations cannot fully utilize the power of modern parallel hardware. In this paper, we provide a customized software solution that exploits GPUs and multi-core CPUs to accelerate spatial cross-comparison in a cost-effective way. Our solution consists of an efficient GPU algorithm and a pipelined system framework with task migration support. Extensive experiments with real-world data sets demonstrate the effectiveness of our solution, which improves the performance of spatial cross-comparison by over 18 times compared with a parallelized spatial database approach. PMID:23355955

  17. Real time software tools and methodologies

    NASA Technical Reports Server (NTRS)

    Christofferson, M. J.

    1981-01-01

    Real-time systems are characterized by high-speed processing and throughput as well as asynchronous event-processing requirements. These requirements give rise to particular implementations of parallel or pipelined multitasking structures, of intertask or interprocess communication mechanisms, and of message (buffer) routing or switching mechanisms. These mechanisms or structures, along with the data structure, describe the essential character of the system. These common structural elements and mechanisms are identified, and their implementations in the form of routines, tasks, or macros - in other words, tools - are formalized. The tools developed support or make available the following: reentrant task creation, generalized message routing techniques, generalized task structures/task families, standardized intertask communication mechanisms, and pipelined and parallel processing architectures in a multitasking environment. Tools development raises some interesting prospects in the areas of software instrumentation and software portability. These issues are discussed following the description of the tools themselves.
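
    A minimal sketch of one of these structures, a pipeline of tasks connected by message queues, is given below in Python; the original tools were of course not written in Python, and the stage functions here are placeholders.

    ```python
    import queue
    import threading

    def stage(inbox, outbox, work):
        """One pipeline task: consume messages, process, forward downstream."""
        while True:
            msg = inbox.get()
            if msg is None:              # shutdown sentinel
                if outbox is not None:
                    outbox.put(None)     # propagate shutdown downstream
                break
            result = work(msg)
            if outbox is not None:
                outbox.put(result)

    # Two-stage pipeline: double each sample, then print it.
    q1, q2 = queue.Queue(), queue.Queue()
    threading.Thread(target=stage, args=(q1, q2, lambda x: x * 2)).start()
    threading.Thread(target=stage, args=(q2, None, print)).start()

    for sample in range(5):
        q1.put(sample)
    q1.put(None)  # flush and shut down the pipeline
    ```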

  18. Hierarchically Ordered Nanopatterns for Spatial Control of Biomolecules

    DOE PAGES

    Tran, Helen; Ronaldson, Kacey; Bailey, Nevette A.; ...

    2014-11-04

    We present the development and study of a benchtop, high-throughput, and inexpensive fabrication strategy to obtain hierarchical patterns of biomolecules with sub-50 nm resolution. A diblock copolymer of polystyrene-b-poly(ethylene oxide), PS-b-PEO, is synthesized with biotin capping the PEO block and 4-bromostyrene copolymerized within the polystyrene block at 5 wt %. These two handles allow thin films of the block copolymer to be postfunctionalized with biotinylated biomolecules of interest and to obtain micropatterns of nanoscale-ordered films via photolithography. The design of this single polymer further allows access to two distinct superficial nanopatterns (lines and dots), where the PEO cylinders are oriented parallel or perpendicular to the substrate. Moreover, we present a strategy to obtain hierarchical mixed morphologies: a thin-film coating of cylinders both parallel and perpendicular to the substrate can be obtained by tuning the solvent annealing and irradiation conditions.

  19. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

    Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through the implementation of optimized gridding in our iterative reconstruction scheme, the improved GPU implementation provides speed-ups of more than a factor of 200 compared to the previous accelerated GPU code. PMID:23682203

  20. High throughput, parallel imaging and biomarker quantification of human spermatozoa by ImageStream flow cytometry.

    PubMed

    Buckman, Clayton; George, Thaddeus C; Friend, Sherree; Sutovsky, Miriam; Miranda-Vizuete, Antonio; Ozanon, Christophe; Morrissey, Phil; Sutovsky, Peter

    2009-12-01

    Spermatid-specific thioredoxin-3 protein (SPTRX-3) accumulates in the superfluous cytoplasm of defective human spermatozoa. Novel ImageStream technology combining flow cytometry with cell imaging was used for parallel quantification and visualization of SPTRX-3 protein in defective spermatozoa of five men from infertile couples. The SPTRX-3 containing cells were overwhelmingly spermatozoa with a variety of morphological defects, detectable in the ImageStream-recorded images. Quantitative parameters of relative SPTRX-3 induced fluorescence measured by ImageStream correlated closely with conventional flow cytometric measurements of the same sample set and reflected the results of clinical semen evaluation. ImageStream quantification of SPTRX-3 combines and surpasses the informative value of both conventional flow cytometry and light microscopic semen evaluation. The observed patterns of SPTRX-3 retention in the sperm samples from infertility patients support the view that SPTRX-3 is a biomarker of male infertility.

  1. A parallel 3-D discrete wavelet transform architecture using pipelined lifting scheme approach for video coding

    NASA Astrophysics Data System (ADS)

    Hegde, Ganapathi; Vaya, Pukhraj

    2013-10-01

    This article presents a parallel architecture for the 3-D discrete wavelet transform (3-D DWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-D DWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency, and high throughput. The computing technique is based on the fact that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture was synthesised using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
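
    For reference, one level of the 1-D lifting-scheme DWT that underlies the pipelined architecture can be sketched as follows, using the standard CDF 9/7 lifting coefficients and, for brevity, periodic boundary handling (the hardware design's exact boundary treatment is not described in the abstract).

    ```python
    import numpy as np

    # Standard CDF 9/7 lifting coefficients.
    ALPHA, BETA = -1.586134342, -0.05298011854
    GAMMA, DELTA, K = 0.8829110762, 0.4435068522, 1.149604398

    def dwt97_lifting(signal):
        """One level of the 1-D CDF 9/7 DWT via lifting (periodic edges)."""
        s, d = signal[0::2].copy(), signal[1::2].copy()   # split even/odd
        d += ALPHA * (s + np.roll(s, -1))                 # predict 1
        s += BETA * (d + np.roll(d, 1))                   # update 1
        d += GAMMA * (s + np.roll(s, -1))                 # predict 2
        s += DELTA * (d + np.roll(d, 1))                  # update 2
        return s * K, d / K                               # scale

    approx, detail = dwt97_lifting(np.sin(np.linspace(0, 4 * np.pi, 64)))
    print(approx[:4], detail[:4])
    ```

    Because each lifting step only touches a small neighborhood of samples, the steps can be chained into a pipeline, which is what makes the scheme attractive for low-memory hardware implementations.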

  2. Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser-induced fluorescence detection.

    PubMed

    Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R; Seliskar, Carl J; Limbach, Patrick A; Heineman, William R

    2010-08-01

    Parallel separations using CE on a multilane microchip with multiplexed LIF detection are demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be determined in parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pK(a) determination of small-molecule analytes is demonstrated with the multilane microchip.

  3. Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser induced fluorescence detection

    PubMed Central

    Nikcevic, Irena; Piruska, Aigars; Wehmeyer, Kenneth R.; Seliskar, Carl J.; Limbach, Patrick A.; Heineman, William R.

    2010-01-01

    Parallel separations using capillary electrophoresis on a multilane microchip with multiplexed laser-induced fluorescence detection are demonstrated. The detection system was developed to simultaneously record data on all channels using an expanded laser beam for excitation, a camera lens to capture emission, and a CCD camera for detection. The detection system enables monitoring of each channel continuously and distinguishing individual lanes without significant crosstalk between adjacent lanes. Multiple analytes can be analyzed in parallel lanes within a single microchip in a single run, leading to increased sample throughput. The pKa determination of small-molecule analytes is demonstrated with the multilane microchip. PMID:20737446

  4. Highly Multiplexed RNA Aptamer Selection using a Microplate-based Microcolumn Device.

    PubMed

    Reinholt, Sarah J; Ozer, Abdullah; Lis, John T; Craighead, Harold G

    2016-07-19

    We describe a multiplexed RNA aptamer selection against 19 different targets simultaneously using a microcolumn-based device, MEDUSA (Microplate-based Enrichment Device Used for the Selection of Aptamers), together with a modified selection process that significantly reduces the time and reagents needed for selections. We exploited MEDUSA's reconfigurable design, which switches between parallel and serially connected microcolumns, to enable the use of just two aliquots of starting library, and its 96-well microplate compatibility to enable the continued use of high-throughput techniques in downstream processes. Our modified selection protocol allowed us to perform the equivalent of a 10-cycle selection in the time it takes for 4 traditional selection cycles. Several aptamers were discovered with nanomolar dissociation constants. Furthermore, aptamers were identified that not only bound with high affinity, but also acted as inhibitors that significantly reduced the activity of their target protein, mouse decapping exoribonuclease (DXO). The aptamers resisted DXO's exoribonuclease activity, and in studies monitoring DXO's degradation of a 30-nucleotide substrate, less than 1 μM of aptamer demonstrated significant inhibition of DXO activity. This aptamer selection method using MEDUSA helps to overcome some of the major challenges of traditional aptamer selections, and provides a platform for high-throughput selections that lends itself to process automation.

  5. A high-throughput cellular assay to quantify the p53-degradation activity of E6 from different human papillomavirus types.

    PubMed

    Gagnon, David; Archambault, Jacques

    2015-01-01

    A subset of human papillomaviruses (HPVs), known as the high-risk types, are the causative agents of cervical cancer and other malignancies of the anogenital region and oral mucosa. The capacity of these viruses to induce cancer and to immortalize cells in culture relies in part on a critical function of their E6 oncoprotein, that of promoting the poly-ubiquitination of the cellular tumor suppressor protein p53 and its subsequent degradation by the proteasome. Here, we describe a cellular assay to measure the p53-degradation activity of E6 from different HPV types. This assay is based on a translational fusion of p53 to Renilla luciferase (RLuc-p53) that remains sensitive to degradation by high-risk E6 and whose steady-state levels can be accurately measured in standard luciferase assays. The p53-degradation activity of any E6 protein can be tested and quantified in transiently transfected cells by determining the amount of E6-expression vector required to reduce the RLuc-p53 luciferase activity by half (50 % effective concentration [EC50]). The high-throughput and quantitative nature of this assay makes it particularly useful for comparing the p53-degradation activities of E6 from several HPV types in parallel.
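
    The EC50 readout described above is a standard dose-response fit; a minimal sketch with a descending Hill curve is shown below. The dose and signal values are hypothetical, not data from the assay.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical dose-response data: amount of transfected E6 vector (ng)
    # versus residual RLuc-p53 luciferase signal (% of no-E6 control).
    dose = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    signal = np.array([97.0, 90.0, 71.0, 45.0, 22.0, 9.0])

    def hill(x, ec50, n):
        """Descending Hill curve between 100 % and 0 % signal."""
        return 100.0 / (1.0 + (x / ec50) ** n)

    (ec50, n), _ = curve_fit(hill, dose, signal, p0=(20.0, 1.0))
    print(f"EC50 ~ {ec50:.1f} ng of E6 vector (Hill slope {n:.2f})")
    ```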

  6. Tiered High-Throughput Screening Approach to Identify Thyroperoxidase Inhibitors Within the ToxCast Phase I and II Chemical Libraries

    EPA Pesticide Factsheets

    High-throughput screening (HTS) for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limited in the US EPA ToxCast screening assay portfolio. To fill one critical screening gap, the Amplex UltraRed-thyroperoxidase (AUR-TPO) assay was developed to identify chemicals that inhibit TPO, as decreased TPO activity reduces TH synthesis. The ToxCast Phase I and II chemical libraries, comprising 1,074 unique chemicals, were initially screened using a single, high concentration to identify potential TPO inhibitors. Chemicals positive in the single-concentration screen were retested in concentration-response. Due to the high false-positive rates typically observed with loss-of-signal assays such as AUR-TPO, we also employed two additional assays in parallel to identify possible sources of nonspecific assay signal loss, enabling stratification of roughly 300 putative TPO inhibitors based upon selective AUR-TPO activity. A cell-free luciferase inhibition assay was used to identify nonspecific enzyme inhibition among the putative TPO inhibitors, and a cytotoxicity assay using a human cell line was used to estimate the cellular tolerance limit. Additionally, the TPO inhibition activities of 150 chemicals were compared between the AUR-TPO and an orthogonal peroxidase oxidation assay using guaiacol as a substrate to confirm the activity profiles of putative TPO inhibitors.

  7. Mapping of MPEG-4 decoding on a flexible architecture platform

    NASA Astrophysics Data System (ADS)

    van der Tol, Erik B.; Jaspers, Egbert G.

    2001-12-01

    In the field of consumer electronics, the advent of new features such as the Internet, games, video conferencing, and mobile communication has triggered the convergence of television and computer technologies. This requires a generic media-processing platform that enables the simultaneous execution of very diverse tasks, such as high-throughput stream-oriented data processing and highly data-dependent irregular processing with complex control flows. As a representative application, this paper presents the mapping of a Main Visual profile MPEG-4 decoder for High-Definition (HD) video onto a flexible architecture platform. A stepwise approach is taken, going from the decoder application toward an implementation proposal. First, the application is decomposed into separate tasks with self-contained functionality, clear interfaces, and distinct characteristics. Next, a hardware-software partitioning is derived by analyzing the characteristics of each task, such as the amount of inherent parallelism, the throughput requirements, the complexity of control processing, and the reuse potential across different applications and systems. Finally, a feasible implementation is proposed that includes, among others, a very-long-instruction-word (VLIW) media processor, one or more RISC processors, and some dedicated processors. The mapping study of the MPEG-4 decoder proves the flexibility and extensibility of the media-processing platform. This platform enables effective HW/SW co-design yielding a high performance density.

  8. Microfluidic devices for the controlled manipulation of small volumes

    DOEpatents

    Ramsey, J Michael [Knoxville, TN; Jacobson, Stephen C [Knoxville, TN

    2003-02-25

    A method for conducting a broad range of biochemical analyses or manipulations on a series of nano- to subnanoliter reaction volumes and an apparatus for carrying out the same are disclosed. The method and apparatus are implemented on a fluidic microchip to provide high serial throughput. The method and device of the invention also lend themselves to multiple parallel analyses and manipulation to provide greater throughput for the generation of biochemical information. In particular, the disclosed device is a microfabricated channel device that can manipulate nanoliter or subnanoliter biochemical reaction volumes in a controlled manner to produce results at rates of 1 to 10 Hz per channel. The individual reaction volumes are manipulated in serial fashion analogous to a digital shift register. The method and apparatus according to this invention have application to such problems as screening molecular or cellular targets using single beads from split-synthesis combinatorial libraries, screening single cells for RNA or protein expression, genetic diagnostic screening at the single cell level, or performing single cell signal transduction studies.

  9. Development and clinical performance of high throughput loop-mediated isothermal amplification for detection of malaria

    PubMed Central

    Perera, Rushini S.; Ding, Xavier C.; Tully, Frank; Oliver, James; Bright, Nigel; Bell, David; Chiodini, Peter L.; Gonzalez, Iveth J.; Polley, Spencer D.

    2017-01-01

    Background Accurate and efficient detection of sub-microscopic malaria infections is crucial for enabling rapid treatment and the interruption of transmission. Commercially available malaria LAMP kits have excellent diagnostic performance, though throughput is limited by the need to prepare samples individually. Here, we evaluate the clinical performance of a newly developed high-throughput (HTP) sample processing system for use in conjunction with the Eiken malaria LAMP kit. Methods The HTP system utilised dried blood spots (DBS) and liquid whole blood (WB), with parallel processing of 94 samples per run. The system was evaluated using 699 samples of known infection status, pre-determined by gold-standard nested PCR. Results The sensitivity and specificity of WB-HTP-LAMP were 98.6% (95% CI, 95.7–100) and 99.7% (95% CI, 99.2–100); the sensitivity of DBS-HTP-LAMP was 97.1% (95% CI, 93.1–100), and its specificity 100% against PCR. At parasite densities greater than or equal to 2 parasites/μL, WB and DBS HTP-LAMP showed 100% sensitivity and specificity against PCR. At densities less than 2 p/μL, WB-HTP-LAMP sensitivity was 88.9% (95% CI, 77.1–100) and specificity was 99.7% (95% CI, 99.2–100); the sensitivity and specificity of DBS-HTP-LAMP were 77.8% (95% CI, 54.3–99.5) and 100%, respectively. Conclusions The HTP-LAMP system is a highly sensitive diagnostic test, with the potential to allow large-scale population screening in malaria elimination campaigns. PMID:28166235
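
    The reported performance figures are standard diagnostic proportions; the sketch below shows how sensitivity and specificity with normal-approximation confidence intervals can be computed. The counts are hypothetical, chosen only to land near the reported whole-blood values.

    ```python
    import math

    def sens_spec(tp, fn, tn, fp, z=1.96):
        """Sensitivity and specificity with normal-approximation 95 % CIs."""
        def prop_ci(k, n):
            p = k / n
            half = z * math.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)
        return prop_ci(tp, tp + fn), prop_ci(tn, tn + fp)

    # Hypothetical counts against the nested-PCR gold standard (699 samples).
    (sens, s_lo, s_hi), (spec, p_lo, p_hi) = sens_spec(tp=138, fn=2, tn=557, fp=2)
    print(f"sensitivity {sens:.1%} ({s_lo:.1%}-{s_hi:.1%})")
    print(f"specificity {spec:.1%} ({p_lo:.1%}-{p_hi:.1%})")
    ```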

  10. Opinion: Why we need a centralized repository for isotopic data

    USGS Publications Warehouse

    Pauli, Jonathan N.; Newsome, Seth D.; Cook, Joseph A.; Harrod, Chris; Steffan, Shawn A.; Baker, Christopher J. O.; Ben-David, Merav; Bloom, David; Bowen, Gabriel J.; Cerling, Thure E.; Cicero, Carla; Cook, Craig; Dohm, Michelle; Dharampal, Prarthana S.; Graves, Gary; Gropp, Robert; Hobson, Keith A.; Jordan, Chris; MacFadden, Bruce; Pilaar Birch, Suzanne; Poelen, Jorrit; Ratnasingham, Sujeevan; Russell, Laura; Stricker, Craig A.; Uhen, Mark D.; Yarnes, Christopher T.; Hayden, Brian

    2017-01-01

    Stable isotopes encode and integrate the origin of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines (1, 2). Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, has created a scientific field that is growing exponentially and generating data at a rate paralleling the explosive rise of DNA sequencing and genomics (3). Centralized data repositories, such as GenBank, have become increasingly important as a means of archiving information, and "Big Data" analytics of these resources are revolutionizing science and everyday life.

  11. A Primer on High-Throughput Computing for Genomic Selection

    PubMed Central

    Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from the design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
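
    As a minimal illustration of the batch-processing end of the HTC spectrum, the sketch below fans independent per-trait evaluations out to a pool of worker processes. In practice the paper's pipelines are built with cluster middleware (batch schedulers, distributed storage) rather than a single-machine pool, and the scoring function here is a placeholder.

    ```python
    from multiprocessing import Pool

    def evaluate_trait(trait):
        # Placeholder for a computationally heavy per-trait model fit.
        score = sum(i * i for i in range(200_000)) % 97 + len(trait)
        return trait, score

    if __name__ == "__main__":
        traits = ["milk_yield", "fertility", "stature", "udder_depth"]
        # Each trait evaluation is independent, so throughput scales with
        # the number of workers, the essence of the HTC batch pattern.
        with Pool(processes=4) as pool:
            for trait, score in pool.imap_unordered(evaluate_trait, traits):
                print(trait, score)
    ```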

  12. Magnetic Nickel iron Electroformed Trap (MagNET): a master/replica fabrication strategy for ultra-high throughput (>100 mL h−1) immunomagnetic sorting†

    PubMed Central

    Ko, Jina; Yelleswarapu, Venkata; Singh, Anup; Shah, Nishal

    2016-01-01

    Microfluidic devices can sort immunomagnetically labeled cells with sensitivity and specificity much greater than that of conventional methods, primarily because the size of microfluidic channels and micro-scale magnets can be matched to that of individual cells. However, these small feature sizes come at the expense of limited throughput (ϕ < 5 mL h−1) and susceptibility to clogging, which have hindered current microfluidic technology from processing relevant volumes of clinical samples, e.g. V > 10 mL whole blood. Here, we report a new approach to micromagnetic sorting that can achieve highly specific cell separation in unprocessed complex samples at a throughput (ϕ > 100 mL h−1) 100× greater than that of conventional microfluidics. To achieve this goal, we devised the magnetic nickel iron electroformed trap (MagNET), which enables high flow rates by having millions of micromagnetic traps operate in parallel. Our design rotates the conventional microfluidic approach by 90° to form magnetic traps at the edges of pores instead of in channels, enabling millions of the magnetic traps to be incorporated into a centimeter-sized device. Unlike previous work, where magnetic structures were defined using conventional microfabrication, we take inspiration from soft lithography and create a master from which many replica electroformed magnetic micropore devices can be economically manufactured. These free-standing 12 µm thick permalloy (Ni80Fe20) films contain micropores of arbitrary shape and position, allowing the device to be tailored for maximal capture efficiency and throughput. We demonstrate MagNET's capabilities by fabricating devices with both circular and rectangular pores and use these devices to rapidly (ϕ = 180 mL h−1) and specifically sort rare tumor cells from white blood cells. PMID:27170379

  13. Template-directed atomically precise self-organization of perfectly ordered parallel cerium silicide nanowire arrays on Si(110)-16 × 2 surfaces.

    PubMed

    Hong, Ie-Hong; Liao, Yung-Cheng; Tsai, Yung-Feng

    2013-11-05

    The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16 × 2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16 × 2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16 × 2 reconstruction at the deposition of various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of registry-aligned chain bundles within individual Ce silicide nanowires. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on a Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for the precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process.

  14. Template-directed atomically precise self-organization of perfectly ordered parallel cerium silicide nanowire arrays on Si(110)-16 × 2 surfaces

    PubMed Central

    2013-01-01

    The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16 × 2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16 × 2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16 × 2 reconstruction at the deposition of various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of registry-aligned chain bundles within individual Ce silicide nanowires. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on a Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for the precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process. PMID:24188092

  15. Whole-Genome Sequencing and Assembly with High-Throughput, Short-Read Technologies

    PubMed Central

    Sundquist, Andreas; Ronaghi, Mostafa; Tang, Haixu; Pevzner, Pavel; Batzoglou, Serafim

    2007-01-01

    While recently developed short-read sequencing technologies may dramatically reduce the sequencing cost and eventually achieve the $1000 goal for re-sequencing, their limitations prevent the de novo sequencing of eukaryotic genomes with the standard shotgun sequencing protocol. We present SHRAP (SHort Read Assembly Protocol), a sequencing protocol and assembly methodology that utilizes high-throughput short-read technologies. We describe a variation on hierarchical sequencing with two crucial differences: (1) we select a clone library from the genome randomly rather than as a tiling path and (2) we sample clones from the genome at high coverage and reads from the clones at low coverage. We assume that 200 bp read lengths with a 1% error rate and inexpensive random fragment cloning on whole mammalian genomes are feasible. Our assembly methodology is based on first ordering the clones and subsequently performing read assembly in three stages: (1) local assemblies of regions significantly smaller than a clone size, (2) clone-sized assemblies of the results of stage 1, and (3) chromosome-sized assemblies. By aggressively localizing the assembly problem during the first stage, our method succeeds in assembling short, unpaired reads sampled from repetitive genomes. We tested our assembler using simulated reads from D. melanogaster and human chromosomes 1, 11, and 21, and produced assemblies with large sets of contiguous sequence and a misassembly rate comparable to other draft assemblies. Tested on D. melanogaster and the entire human genome, our clone-ordering method produces accurate maps, thereby localizing fragment assembly and enabling the parallelization of the subsequent steps of our pipeline. Thus, we have demonstrated that truly inexpensive de novo sequencing of mammalian genomes will soon be possible with high-throughput, short-read technologies using our methodology. PMID:17534434

  16. Tiered High-Throughput Screening Approach to Identify Thyroperoxidase Inhibitors Within the ToxCast Phase I and II Chemical Libraries

    PubMed Central

    Watt, Eric D.; Hornung, Michael W.; Hedge, Joan M.; Judson, Richard S.; Crofton, Kevin M.; Houck, Keith A.; Simmons, Steven O.

    2016-01-01

    High-throughput screening for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limited in the U.S. Environmental Protection Agency ToxCast screening assay portfolio. To fill 1 critical screening gap, the Amplex UltraRed-thyroperoxidase (AUR-TPO) assay was developed to identify chemicals that inhibit TPO, as decreased TPO activity reduces TH synthesis. The ToxCast phase I and II chemical libraries, comprised of 1074 unique chemicals, were initially screened using a single, high concentration to identify potential TPO inhibitors. Chemicals positive in the single-concentration screen were retested in concentration-response. Due to high false-positive rates typically observed with loss-of-signal assays such as AUR-TPO, we also employed 2 additional assays in parallel to identify possible sources of nonspecific assay signal loss, enabling stratification of roughly 300 putative TPO inhibitors based upon selective AUR-TPO activity. A cell-free luciferase inhibition assay was used to identify nonspecific enzyme inhibition among the putative TPO inhibitors, and a cytotoxicity assay using a human cell line was used to estimate the cellular tolerance limit. Additionally, the TPO inhibition activities of 150 chemicals were compared between the AUR-TPO and an orthogonal peroxidase oxidation assay using guaiacol as a substrate to confirm the activity profiles of putative TPO inhibitors. This effort represents the most extensive TPO inhibition screening campaign to date and illustrates a tiered screening approach that focuses resources, maximizes assay throughput, and reduces animal use. PMID:26884060
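
    The stratification logic of the tier just described reduces to a conjunction of filters: keep a chemical only if its AUR-TPO activity is not explained by nonspecific enzyme inhibition or cytotoxicity. A schematic sketch follows, with illustrative field names and thresholds that are not the published cutoffs.

    ```python
    # Each record summarizes one chemical's results across the three assays.
    def selective_tpo_inhibitors(chemicals):
        """Keep chemicals whose TPO signal loss looks specific."""
        return [c["name"] for c in chemicals
                if c["aur_tpo_active"]
                and not c["luciferase_inhibition"]          # not nonspecific
                and c["aur_tpo_ac50"] < c["cytotoxicity_limit"]]  # below tolerance

    library = [
        {"name": "chem_A", "aur_tpo_active": True,  "luciferase_inhibition": False,
         "aur_tpo_ac50": 3.2,  "cytotoxicity_limit": 40.0},   # selective hit
        {"name": "chem_B", "aur_tpo_active": True,  "luciferase_inhibition": True,
         "aur_tpo_ac50": 5.0,  "cytotoxicity_limit": 60.0},   # nonspecific
        {"name": "chem_C", "aur_tpo_active": False, "luciferase_inhibition": False,
         "aur_tpo_ac50": 99.0, "cytotoxicity_limit": 30.0},   # inactive
    ]
    print(selective_tpo_inhibitors(library))  # ['chem_A']
    ```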

  17. High-Throughput Screening of Na(V)1.7 Modulators Using a Giga-Seal Automated Patch Clamp Instrument.

    PubMed

    Chambers, Chris; Witton, Ian; Adams, Cathryn; Marrington, Luke; Kammonen, Juha

    2016-03-01

    Voltage-gated sodium (Na(V)) channels have an essential role in the initiation and propagation of action potentials in excitable cells, such as neurons. Of these channels, Na(V)1.7 has been indicated as a key channel for pain sensation. While extensive efforts have gone into discovering novel Na(V)1.7 modulating compounds for the treatment of pain, none has reached the market yet. In the last two years, new compound screening technologies have been introduced, which may speed up the discovery of such compounds. The Sophion Qube(®) is a next-generation 384-well giga-seal automated patch clamp (APC) screening instrument, capable of testing thousands of compounds per day. By combining high-throughput screening and follow-up compound testing on the same APC platform, it should be possible to accelerate the hit-to-lead stage of ion channel drug discovery and help identify the most interesting compounds faster. Following a period of instrument beta-testing, a Na(V)1.7 high-throughput screen was run with two Pfizer plate-based compound subsets. In total, data were generated for 158,000 compounds at a median success rate of 83%, which can be considered high in APC screening. In parallel, IC50 assay validation and protocol optimization were completed with a set of reference compounds to understand how the IC50 potencies generated on the Qube correlate with data generated on the more established Sophion QPatch(®) APC platform. In summary, the results presented here demonstrate that the Qube provides a comparable but much faster approach to study Na(V)1.7 in a robust and reliable APC assay for compound screening.

  18. Parallel production and verification of protein products using a novel high-throughput screening method.

    PubMed

    Tegel, Hanna; Yderland, Louise; Boström, Tove; Eriksson, Cecilia; Ukkonen, Kaisa; Vasala, Antti; Neubauer, Peter; Ottosson, Jenny; Hober, Sophia

    2011-08-01

    Protein production and analysis in a parallel fashion is today applied in laboratories worldwide, and there is a great need to improve the techniques and systems used for this purpose. In order to save time and money, a fast and reliable screening method for the analysis of protein production and verification of the protein product is desired. Here, a micro-scale protocol for the parallel production and screening of 96 proteins in plate format is described. Protein capture was achieved using immobilized metal affinity chromatography, and the product was verified using matrix-assisted laser desorption/ionization time-of-flight MS. In order to obtain sufficiently high cell densities and product yield in the small-volume cultivations, the EnBase® cultivation technology was applied, which enables cultivation in volumes as small as 150 μL. Here, the efficiency of the method is demonstrated by producing 96 human recombinant proteins, both at micro-scale and using a standard full-scale protocol, and comparing the results with regard to both protein identity and sample purity. The results obtained are highly comparable to those acquired through standard full-scale purification protocols, thus validating this method as a successful initial screening step before protein production at a larger scale.

  19. A massively parallel strategy for STR marker development, capture, and genotyping.

    PubMed

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel targeted STR recovery. We employed our approach to capture a panel of 5,000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of the targeted loci: 97.3-99.6% of STRs were characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from the flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without a reference genome, and can be used even with the suboptimal DNA that is more easily acquired in conservation and ecological studies.
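
    The first stage of such a pipeline, discovering STR loci directly in shotgun reads, can be caricatured with a back-referencing regular expression, as in the sketch below; production pipelines additionally assess flanking-sequence uniqueness and coverage, which is omitted here.

    ```python
    import re

    # Di- to tetranucleotide motifs repeated at least five times in a row.
    STR_PATTERN = re.compile(r"([ACGT]{2,4}?)\1{4,}")

    def find_strs(read):
        """Return (offset, motif, repeat_count) for each STR hit in a read."""
        return [(m.start(), m.group(1), len(m.group(0)) // len(m.group(1)))
                for m in STR_PATTERN.finditer(read)]

    read = "TTGACACACACACACAGGATCAGTAGTAGTAGTAGTAGTAACG"
    print(find_strs(read))  # e.g. an (AC)n and an (AGT)n locus
    ```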

  20. Ultralow-Power Electronic Trapping of Nanoparticles with Sub-10 nm Gold Nanogap Electrodes.

    PubMed

    Barik, Avijit; Chen, Xiaoshu; Oh, Sang-Hyun

    2016-10-12

    We demonstrate nanogap electrodes for rapid, parallel, and ultralow-power trapping of nanoparticles. Our device pushes the limit of dielectrophoresis by shrinking the separation between gold electrodes to sub-10 nm, thereby creating strong trapping forces at biases as low as 100 mV. Using high-throughput atomic layer lithography, we manufacture sub-10 nm gaps between 0.8 mm long gold electrodes and pattern them into individually addressable parallel electronic traps. Unlike point-like junctions made by electron-beam lithography or the larger micron-gap electrodes used for conventional dielectrophoresis, our sub-10 nm gold nanogap electrodes provide strong trapping forces over a mm-scale trapping zone. Importantly, our technology solves the key challenges associated with traditional dielectrophoresis experiments, such as the high voltages that cause heat generation, bubble formation, and unwanted electrochemical reactions. The strongly enhanced fields around the nanogap induce particle-transport speeds exceeding 10 μm/s and enable the trapping of 30 nm polystyrene nanoparticles using an ultralow bias of 200 mV. We also demonstrate rapid electronic trapping of quantum dots and nanodiamond particles on arrays of parallel traps. Our sub-10 nm gold nanogap electrodes can be combined with plasmonic sensors or nanophotonic circuitry, and their low-power electronic operation can potentially enable high-density integration on a chip as well as portable biosensing.
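
    For context, the time-averaged dielectrophoretic force on a spherical particle of radius r in a medium of permittivity ε_m has the standard textbook form below (it is not stated in the abstract); since the force scales with the gradient of the squared field, shrinking the gap to sub-10 nm boosts the trapping force at a fixed, low applied voltage.

    \[
      \langle \mathbf{F}_{\mathrm{DEP}} \rangle
        = 2\pi \varepsilon_m r^{3}\, \mathrm{Re}\!\left[K(\omega)\right]\, \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
      \qquad
      K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\varepsilon_m^{*}},
    \]

    where K(ω) is the Clausius-Mossotti factor and ε_p*, ε_m* are the complex permittivities of the particle and the medium.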

  1. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability, with competitive assembly quality, compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
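
    The core data structure GiGA distributes over Giraph is a de Bruijn graph; a single-machine toy construction from reads is sketched below (assuming error-free reads and ignoring reverse complements, which real assemblers must handle).

    ```python
    from collections import defaultdict

    def de_bruijn(reads, k):
        """Build a de Bruijn graph: edges connect overlapping (k-1)-mers."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])   # prefix -> suffix edge
        return graph

    reads = ["ACGTACG", "CGTACGT", "GTACGTT"]
    for node, succs in sorted(de_bruijn(reads, k=4).items()):
        print(node, "->", sorted(succs))
    ```

    Contigs then correspond to unbranched paths through this graph, and it is this graph traversal that distributed frameworks like Giraph parallelize across workers.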

  2. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA.

    PubMed

    Wright, Imogen A; Travers, Simon A

    2014-07-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are thus unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10,000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate the reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance.

  3. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can be processed independently of the others, making a compelling case for exploring its deployment on high-throughput computing platforms, with the hypothesis that such an exercise would result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness of fit to both the positive and negative areas via the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. The automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
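
    The automated selection step reduces to scoring every candidate vector by ROC AUC against the ground-truth regions and keeping the best; a schematic sketch follows, with random numbers standing in for the SIVQ heat-map responses.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    labels = np.array([1] * 50 + [0] * 50)   # positive/negative ground-truth regions

    best_auc, best_vector = 0.0, None
    for vector_id in range(20):               # candidate vectors from the VSA
        # Hypothetical match scores of this vector over all regions; a real
        # run would use the SIVQ heat-map response, not random numbers.
        scores = rng.random(100) + 0.5 * labels * rng.random(100)
        auc = roc_auc_score(labels, scores)
        if auc > best_auc:
            best_auc, best_vector = auc, vector_id

    print(f"selected vector {best_vector} with AUC {best_auc:.3f}")
    ```

    Because each candidate's scoring is independent, this loop is exactly the kind of embarrassingly parallel workload the authors note maps well onto high-throughput platforms.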

  4. High-Throughput Identification of Loss-of-Function Mutations for Anti-Interferon Activity in the Influenza A Virus NS Segment

    PubMed Central

    Wu, Nicholas C.; Young, Arthur P.; Al-Mawsawi, Laith Q.; Olson, C. Anders; Feng, Jun; Qi, Hangfei; Luan, Harding H.; Li, Xinmin; Wu, Ting-Ting

    2014-01-01

    Viral proteins often display several functions, which require multiple assays to dissect their genetic basis. Here, we describe a systematic approach to screen for loss-of-function mutations that confer a fitness disadvantage under a specified growth condition. Our methodology was achieved by genetically monitoring a mutant library under two growth conditions, with and without interferon, by deep sequencing. We employed a molecular tagging technique to distinguish true mutations from sequencing error. This approach enabled us to identify mutations that were negatively selected against, in addition to those that were positively selected for. Using this technique, we identified loss-of-function mutations in the influenza A virus NS segment that were sensitive to type I interferon in a high-throughput fashion. Mechanistic characterization further showed that a single substitution, D92Y, resulted in the inability of NS to inhibit RIG-I ubiquitination. The approach described in this study can be applied under any specified condition for any virus that can be genetically manipulated. IMPORTANCE Traditional genetics focuses on a single genotype-phenotype relationship, whereas high-throughput genetics permits phenotypic characterization of numerous mutants in parallel. High-throughput genetics often involves monitoring of a mutant library with deep sequencing. However, deep sequencing suffers from a high error rate (∼0.1 to 1%), which is usually higher than the occurrence frequency of individual point mutations within a mutant library. Therefore, only mutations that confer a fitness advantage can be identified with confidence, due to an enrichment in their occurrence frequency. In contrast, it is impossible to identify deleterious mutations using most next-generation sequencing techniques. In this study, we applied a molecular tagging technique to distinguish true mutations from sequencing errors, enabling us to identify mutations that underwent negative selection in addition to mutations that experienced positive selection. This study provides a proof of concept by screening for loss-of-function mutations in the influenza A virus NS segment that are involved in its anti-interferon activity. PMID:24965464
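
    The molecular tagging idea can be sketched as a per-tag consensus call: reads sharing a tag derive from one template molecule, so bases that disagree with the within-tag consensus are attributed to sequencing error, while a base shared by all reads of a tag is treated as a true mutation. The sketch below is illustrative and ignores tag collisions and quality scores.

    ```python
    from collections import Counter, defaultdict

    def consensus_by_tag(tagged_reads):
        """Collapse reads sharing a molecular tag to a per-tag consensus."""
        groups = defaultdict(list)
        for tag, read in tagged_reads:
            groups[tag].append(read)
        consensi = {}
        for tag, reads in groups.items():
            # Majority base at each position; lone disagreements are treated
            # as sequencing errors and vanish from the consensus.
            consensi[tag] = "".join(Counter(col).most_common(1)[0][0]
                                    for col in zip(*reads))
        return consensi

    reads = [("AAT", "ACGTT"), ("AAT", "ACGTT"), ("AAT", "ACGAT"),  # error in read 3
             ("GCC", "ACCTT"), ("GCC", "ACCTT")]                    # true variant
    print(consensus_by_tag(reads))  # {'AAT': 'ACGTT', 'GCC': 'ACCTT'}
    ```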

  5. CoMiniGut-a small volume in vitro colon model for the screening of gut microbial fermentation processes.

    PubMed

    Wiese, Maria; Khakimov, Bekzod; Nielsen, Sebastian; Sørensen, Helena; van den Berg, Frans; Nielsen, Dennis Sandris

    2018-01-01

    Driven by the growing recognition of the influence of the gut microbiota (GM) on human health and disease, there is rapidly increasing interest in understanding how dietary components, pharmaceuticals, and pre- and probiotics influence the GM. In vitro colon models represent an attractive tool for this purpose. With the dual objective of facilitating the investigation of rare and expensive compounds and of increasing throughput, we have developed a prototype in vitro parallel gut microbial fermentation screening tool with a working volume of only 5 ml, consisting of five parallel reactor units that can be expanded in multiples of five to increase throughput. This allows, for example, the investigation of interpersonal variations in gut microbial dynamics and the acquisition of larger data sets with enhanced statistical inference. The functionality of the in vitro colon model, the Copenhagen MiniGut (CoMiniGut), was first demonstrated in experiments with two common prebiotics, the oligosaccharide inulin and the disaccharide lactulose, at 1% (w/v). We then investigated fermentation of the scarce and expensive human milk oligosaccharides (HMOs) 3-fucosyllactose (3'FL), 3'-sialyllactose (3'SL), and 6'-sialyllactose (6'SL), as well as the more common fructooligosaccharide, in fermentations with infant gut microbial communities. Investigations of microbial community composition dynamics in the CoMiniGut reactors by MiSeq-based 16S rRNA gene amplicon high-throughput sequencing showed excellent experimental reproducibility and allowed us to extract significant differences in gut microbial composition after 24 h of fermentation for all investigated substrates and fecal donors. Furthermore, short-chain fatty acids (SCFAs) were quantified for all treatments and donors. Fermentations with inulin and lactulose showed that inulin leads to a microbiota dominated by obligate anaerobes, with high relative abundance of Bacteroidetes, while the more easily fermented lactulose leads to a higher relative abundance of Proteobacteria. The subsequent study of the influence of HMOs on two infant GM communities revealed the strongest bifidogenic effect for 3'SL in both infants. Inter-individual differences in infant GM, especially with regard to the occurrence of Bacteroidetes and differences in bifidobacterial species composition, correlated with varying degrees of HMO utilization, foremost of 6'SL and 3'FL, indicating species- and strain-related differences in HMO utilization; this was also reflected in SCFA concentrations, with 3'SL and 6'SL resulting in significantly higher butyrate production than 3'FL. In conclusion, the increased throughput of CoMiniGut strengthens experimental conclusions by eliminating the statistical interference that originates from low numbers of repetitions. Its small working volume moreover allows the investigation of rare and expensive bioactives.

  6. Characterization of Capsicum annuum Genetic Diversity and Population Structure Based on Parallel Polymorphism Discovery with a 30K Unigene Pepper GeneChip

    PubMed Central

    Hill, Theresa A.; Ashrafi, Hamid; Reyes-Chin-Wo, Sebastian; Yao, JiQiang; Stoffel, Kevin; Truco, Maria-Jose; Kozik, Alexander; Michelmore, Richard W.; Van Deynze, Allen

    2013-01-01

    The widely cultivated pepper, Capsicum spp., important as a vegetable and spice crop world-wide, is one of the most diverse crops. To enhance breeding programs, a detailed characterization of Capsicum diversity including morphological, geographical and molecular data is required. Currently, molecular data characterizing Capsicum genetic diversity is limited. The development and application of high-throughput genome-wide markers in Capsicum will facilitate more detailed molecular characterization of germplasm collections, genetic relationships, and the generation of ultra-high density maps. We have developed the Pepper GeneChip® array from Affymetrix for polymorphism detection and expression analysis in Capsicum. Probes on the array were designed from 30,815 unigenes assembled from expressed sequence tags (ESTs). Our array design provides a maximum redundancy of 13 probes per base pair position, allowing integration of multiple hybridization values per position to detect single position polymorphism (SPP). Hybridization of genomic DNA from 40 diverse C. annuum lines, used in breeding and research programs, and a representative from three additional cultivated species (C. frutescens, C. chinense and C. pubescens) detected 33,401 SPP markers within 13,323 unigenes. Among the C. annuum lines, 6,426 SPPs covering 3,818 unigenes were identified. An estimated three-fold reduction in diversity was detected in non-pungent compared with pungent lines; however, we were able to detect 251 highly informative markers across these C. annuum lines. In addition, an 8.7 cM region without polymorphism was detected around Pun1 in non-pungent C. annuum. An analysis of genetic relatedness and diversity using the software Structure revealed clustering of the germplasm, which was confirmed with statistical support by principal components analysis (PCA) and phylogenetic analysis. This research demonstrates the effectiveness of parallel high-throughput discovery and application of genome-wide transcript-based markers to assess genetic and genomic features among Capsicum annuum. PMID:23409153

  7. Characterization of Capsicum annuum genetic diversity and population structure based on parallel polymorphism discovery with a 30K unigene Pepper GeneChip.

    PubMed

    Hill, Theresa A; Ashrafi, Hamid; Reyes-Chin-Wo, Sebastian; Yao, JiQiang; Stoffel, Kevin; Truco, Maria-Jose; Kozik, Alexander; Michelmore, Richard W; Van Deynze, Allen

    2013-01-01

    The widely cultivated pepper, Capsicum spp., important as a vegetable and spice crop world-wide, is one of the most diverse crops. To enhance breeding programs, a detailed characterization of Capsicum diversity including morphological, geographical and molecular data is required. Currently, molecular data characterizing Capsicum genetic diversity is limited. The development and application of high-throughput genome-wide markers in Capsicum will facilitate more detailed molecular characterization of germplasm collections, genetic relationships, and the generation of ultra-high density maps. We have developed the Pepper GeneChip® array from Affymetrix for polymorphism detection and expression analysis in Capsicum. Probes on the array were designed from 30,815 unigenes assembled from expressed sequence tags (ESTs). Our array design provides a maximum redundancy of 13 probes per base pair position, allowing integration of multiple hybridization values per position to detect single position polymorphism (SPP). Hybridization of genomic DNA from 40 diverse C. annuum lines, used in breeding and research programs, and a representative from three additional cultivated species (C. frutescens, C. chinense and C. pubescens) detected 33,401 SPP markers within 13,323 unigenes. Among the C. annuum lines, 6,426 SPPs covering 3,818 unigenes were identified. An estimated three-fold reduction in diversity was detected in non-pungent compared with pungent lines; however, we were able to detect 251 highly informative markers across these C. annuum lines. In addition, an 8.7 cM region without polymorphism was detected around Pun1 in non-pungent C. annuum. An analysis of genetic relatedness and diversity using the software Structure revealed clustering of the germplasm, which was confirmed with statistical support by principal components analysis (PCA) and phylogenetic analysis. This research demonstrates the effectiveness of parallel high-throughput discovery and application of genome-wide transcript-based markers to assess genetic and genomic features among Capsicum annuum.

  8. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    PubMed Central

    Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscope stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of a microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248

  9. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and heavy resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746

  10. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from long execution times and heavy resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA-based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on the ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
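
    The angle-level parallelism that the PHT exploits is easy to see in software: each θ column of the Hough accumulator is computed independently of the others, so hardware can assign one angle (or group of angles) per pipeline. A minimal NumPy sketch of the underlying voting scheme; the array sizes and the toy line are illustrative, not the paper's implementation:

        import numpy as np

        def hough_lines(edge_points, height, width, n_theta=180):
            """Minimal Hough accumulator; each theta column is independent,
            which is the parallelism exploited in hardware."""
            thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
            diag = int(np.ceil(np.hypot(height, width)))
            accumulator = np.zeros((2 * diag, n_theta), dtype=np.int32)
            ys, xs = edge_points[:, 0], edge_points[:, 1]
            for j, t in enumerate(thetas):                   # independent per angle
                rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
                np.add.at(accumulator, (rho, j), 1)
            return accumulator, thetas, diag

        # Toy example: points on the horizontal line y = 5 vote loudest at 90°.
        pts = np.array([[5, x] for x in range(0, 200, 10)])
        acc, thetas, diag = hough_lines(pts, 256, 256)
        rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
        print(rho_i - diag, np.rad2deg(thetas[theta_i]))     # prints: 5 90.0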

  11. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing

    PubMed Central

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, and driven by a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly required. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. A considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
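
    The core map/reduce decomposition is straightforward to sketch in plain Python: a map phase counts k-mers per genome, and a reduce phase merges the counts; in the Hadoop pipeline the merge would additionally be sharded by k-mer across nodes. The function names and toy sequences below are illustrative assumptions, not HTSFinder's actual code:

        from collections import Counter
        from functools import reduce

        def kmers(seq, k):
            """Map phase: count every k-mer of one sequence (one mapper per genome)."""
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        def kmer_counts(genomes, k):
            """Reduce phase: merge per-genome counts into a global table."""
            return reduce(lambda a, b: a + b, (kmers(g, k) for g in genomes), Counter())

        target = ["ATCGATCGA", "GGATCGTTA"]
        counts = kmer_counts(target, k=4)
        # A signature candidate is a k-mer present in every target genome (and,
        # in the full pipeline, absent from all non-target genomes).
        shared = [km for km in counts if all(km in g for g in target)]
        print(shared)  # e.g. ['ATCG', 'GATC'] occur in all target genomes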

  12. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    PubMed

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

    Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, and driven by a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly required. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. A considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.

  13. The utility of micro-CT and MRI in the assessment of longitudinal growth of liver metastases in a preclinical model of colon carcinoma.

    PubMed

    Pandit, Prachi; Johnston, Samuel M; Qi, Yi; Story, Jennifer; Nelson, Rendon; Johnson, G Allan

    2013-04-01

    Liver is a common site for distant metastases in colon and rectal cancer. Numerous clinical studies have analyzed the relative merits of different imaging modalities for detection of liver metastases. Several exciting new therapies are being investigated in preclinical models. But technical challenges in preclinical imaging make it difficult to translate conclusions from clinical studies to the preclinical environment. This study addresses the technical challenges of preclinical magnetic resonance imaging (MRI) and micro-computed tomography (CT) to enable comparison of state-of-the-art methods for following metastatic liver disease. We optimized two promising preclinical protocols to enable a parallel longitudinal study tracking metastatic human colon carcinoma growth in a mouse model: T2-weighted MRI using two-shot PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) and contrast-enhanced micro-CT using a liposomal contrast agent. Both methods were tailored for high throughput with attention to animal support and anesthesia to limit biological stress. Each modality has its strengths. Micro-CT permitted more rapid acquisition (<10 minutes) with the highest spatial resolution (88-micron isotropic resolution). But detection of metastatic lesions requires the use of a blood pool contrast agent, which could introduce a confound in the evaluation of new therapies. MRI was slower (30 minutes) and had lower anisotropic spatial resolution. But MRI eliminates the need for a contrast agent and the contrast-to-noise between tumor and normal parenchyma was higher, making earlier detection of small lesions possible. Both methods supported a relatively high-throughput, longitudinal study of the development of metastatic lesions. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  14. A low cost, simplified, and scaleable pneumotachograph and face mask for neonatal mouse respiratory measurements.

    PubMed

    Sun, Jenny J; Nanu, Roshan; Ray, Russell S

    2017-07-01

    Neonatal respiratory disorders are a leading cause of perinatal mortality due to complications resulting from premature births and prenatal exposure to drugs of abuse, but optimal treatments for these symptoms are still unclear due to a variety of confounds and risk factors. Mouse models present an opportunity to study the underlying mechanisms and efficacy of potential treatments of these conditions with controlled variables. However, measuring respiration in newborn mice is difficult and commercial components are expensive and often require modification, creating a barrier and limiting our understanding of the short and long-term effects of birth complications on respiratory function. Here, we present an inexpensive and simple flow through pneumotachograph and face mask design that can be easily scaled for parallel, high-throughput assays measuring respiration in neonatal mouse pups. The final apparatus consists of three main parts: a water-jacketed chamber, an integrated support tray for the pup, and a pneumotachograph consisting of a two side-arm air channel that is attached to a pressure transducer. The pneumotach showed a linear response and clean, steady respiratory traces in which apneas and sighs were clearly visible. Administration of caffeine in P0.5 CD1 wildtype neonates resulted in an increase in tidal volume, minute ventilation, and minute ventilation normalized to oxygen consumption as well as a decrease in periodic instability. The described methods offer a relatively simple and inexpensive approach to constructing a pneumotachograph for non-invasive measurements of neonatal mouse respiration, enhancing accessibility and enabling the high-throughput and parallel characterizations of neonatal respiratory disorders and potential pharmacological therapies. Copyright © 2017 Elsevier Inc. All rights reserved.
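
    As a rough illustration of what such a pneumotachograph trace yields downstream, respiratory rate and tidal volume can be recovered from the flow signal by detecting inspiratory zero crossings and integrating flow over each inspiration. The sketch below uses a synthetic sinusoidal trace with made-up amplitudes; the function name and analysis choices are assumptions, not the authors' code:

        import numpy as np

        def breath_metrics(flow, fs):
            """Respiratory rate (breaths/min) and mean tidal volume from a
            pneumotachograph flow trace in ml/s sampled at fs Hz.
            Inspiration is taken as flow > 0; tidal volume is the integral
            of flow over each inspiratory phase (simple Riemann sum)."""
            insp = flow > 0
            starts = np.flatnonzero(~insp[:-1] & insp[1:]) + 1  # rising edges
            ends = np.flatnonzero(insp[:-1] & ~insp[1:]) + 1    # falling edges
            ends = ends[ends > starts[0]][:len(starts)]
            vt = [flow[s:e].sum() / fs for s, e in zip(starts, ends)]
            rate = 60 * (len(starts) - 1) / ((starts[-1] - starts[0]) / fs)
            return rate, float(np.mean(vt))

        # Synthetic trace: 2.5 Hz (150 breaths/min) sinusoidal flow.
        fs = 1000
        t = np.arange(0, 4, 1 / fs)
        flow = 0.25 * np.sin(2 * np.pi * 2.5 * t)   # ml/s, illustrative
        rate, vt = breath_metrics(flow, fs)
        print(f"{rate:.0f} breaths/min, Vt = {vt * 1000:.0f} µl")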

  15. Six-flow operations for catalyst development in Fischer-Tropsch synthesis: Bridging the gap between high-throughput experimentation and extensive product evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sartipi, Sina, E-mail: S.Sartipi@tudelft.nl, E-mail: J.Gascon@tudelft.nl; Jansma, Harrie; Bosma, Duco

    2013-12-15

    Design and operation of a “six-flow fixed-bed microreactor” setup for Fischer-Tropsch synthesis (FTS) is described. The unit consists of feed and mixing, flow division, reaction, separation, and analysis sections. The reactor system is made of five heating blocks with individual temperature controllers, assuring an identical isothermal zone of at least 10 cm along six fixed-bed microreactor inserts (4 mm inner diameter). Such a lab-scale setup allows running six experiments in parallel, under equal feed composition, reaction temperature, and conditions of separation and analysis equipment. It permits separate collection of wax and liquid samples (from each flow line), allowing operation with high productivities of C5+ hydrocarbons. The latter is crucial for a complete understanding of FTS product compositions and will represent an advantage over high-throughput setups with more than ten flows, where such instrumental considerations lead to elevated equipment volume, cost, and operation complexity. The identical performance (of the six flows) under similar reaction conditions was assured by testing the same catalyst batch, loaded in all microreactors.

  16. High-Throughput Fabrication of Nanocomplexes Using 3D-Printed Micromixers.

    PubMed

    Bohr, Adam; Boetker, Johan; Wang, Yingya; Jensen, Henrik; Rantanen, Jukka; Beck-Broichsitter, Moritz

    2017-03-01

    3D printing allows a rapid and inexpensive manufacturing of custom made and prototype devices. Micromixers are used for rapid and controlled production of nanoparticles intended for therapeutic delivery. In this study, we demonstrate the fabrication of micromixers using computational design and 3D printing, which enable a continuous and industrial scale production of nanocomplexes formed by electrostatic complexation, using the polymers poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate). Several parameters including polymer concentration, flow rate, and flow ratio were systematically varied and their effect on the properties of nanocomplexes was studied and compared with nanocomplexes prepared by bulk mixing. Particles fabricated using this cost effective device were equally small and homogenous but more consistent and controllable in size compared with those prepared manually via bulk mixing. Moreover, each micromixer could process more than 2 liters per hour with unaffected performance and the setup could easily be scaled-up by aligning several micromixers in parallel. This demonstrates that 3D printing can be used to prepare disposable high-throughput micromixers for production of therapeutic nanoparticles. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  17. Inventory management and reagent supply for automated chemistry.

    PubMed

    Kuzniar, E

    1999-08-01

    Developments in automated chemistry have kept pace with developments in HTS such that hundreds of thousands of new compounds can be rapidly synthesized in the belief that the greater the number and diversity of compounds that can be screened, the more successful HTS will be. The increasing use of automation for Multiple Parallel Synthesis (MPS) and the move to automated combinatorial library production is placing an overwhelming burden on the management of reagents. Although automation has improved the efficiency of the processes involved in compound synthesis, the bottleneck has shifted to ordering, collating and preparing reagents for automated chemistry resulting in loss of time, materials and momentum. Major efficiencies have already been made in the area of compound management for high throughput screening. Most of these efficiencies have been achieved with sophisticated library management systems using advanced engineering and data handling for the storage, tracking and retrieval of millions of compounds. The Automation Partnership has already provided many of the top pharmaceutical companies with modular automated storage, preparation and retrieval systems to manage compound libraries for high throughput screening. This article describes how these systems may be implemented to solve the specific problems of inventory management and reagent supply for automated chemistry.

  18. Surface modified alginate microcapsules for 3D cell culture

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Wen; Kuo, Chiung Wen; Chueh, Di-Yen; Chen, Peilin

    2016-06-01

    Culture as three-dimensional cell aggregates, or spheroids, can offer an ideal platform for tissue engineering applications and for pharmaceutical screening. Such 3D culture models, however, may suffer from problems such as immune response and ineffective, cumbersome culture. This paper describes a simple method for producing microcapsules with alginate cores and a thin shell of poly(L-lysine)-graft-poly(ethylene glycol) (PLL-g-PEG) to encapsulate mouse induced pluripotent stem (miPS) cells, generating a non-fouling surface as an effective immunoisolation barrier. We demonstrated the trapping of the alginate microcapsules in a microwell array for the continuous observation and culture of a large number of encapsulated miPS cells in parallel. miPS cells cultured in the microcapsules survived well and proliferated to form a single cell aggregate. Droplet formation of monodisperse microcapsules with controlled size, combined with flow cytometry, provided an efficient way to quantitatively analyze the growth of encapsulated cells in a high-throughput manner. The simple and cost-effective coating technique employed to produce the core-shell microcapsules could be used in the emerging field of cell therapy. The microwell array would provide a convenient, user-friendly and high-throughput platform for long-term cell culture and monitoring.

  19. Human genetics and genomics a decade after the release of the draft sequence of the human genome.

    PubMed

    Naidoo, Nasheen; Pawitan, Yudi; Soong, Richie; Cooper, David N; Ku, Chee-Seng

    2011-10-01

    Substantial progress has been made in human genetics and genomics research over the past ten years since the publication of the draft sequence of the human genome in 2001. Findings emanating directly from the Human Genome Project, together with those from follow-on studies, have had an enormous impact on our understanding of the architecture and function of the human genome. Major developments have been made in cataloguing genetic variation, the International HapMap Project, and with respect to advances in genotyping technologies. These developments are vital for the emergence of genome-wide association studies in the investigation of complex diseases and traits. In parallel, the advent of high-throughput sequencing technologies has ushered in the 'personal genome sequencing' era for both normal and cancer genomes, and made possible large-scale genome sequencing studies such as the 1000 Genomes Project and the International Cancer Genome Consortium. The high-throughput sequencing and sequence-capture technologies are also providing new opportunities to study Mendelian disorders through exome sequencing and whole-genome sequencing. This paper reviews these major developments in human genetics and genomics over the past decade.

  20. Human genetics and genomics a decade after the release of the draft sequence of the human genome

    PubMed Central

    2011-01-01

    Substantial progress has been made in human genetics and genomics research over the past ten years since the publication of the draft sequence of the human genome in 2001. Findings emanating directly from the Human Genome Project, together with those from follow-on studies, have had an enormous impact on our understanding of the architecture and function of the human genome. Major developments have been made in cataloguing genetic variation, the International HapMap Project, and with respect to advances in genotyping technologies. These developments are vital for the emergence of genome-wide association studies in the investigation of complex diseases and traits. In parallel, the advent of high-throughput sequencing technologies has ushered in the 'personal genome sequencing' era for both normal and cancer genomes, and made possible large-scale genome sequencing studies such as the 1000 Genomes Project and the International Cancer Genome Consortium. The high-throughput sequencing and sequence-capture technologies are also providing new opportunities to study Mendelian disorders through exome sequencing and whole-genome sequencing. This paper reviews these major developments in human genetics and genomics over the past decade. PMID:22155605

  1. High-Throughput Effect-Directed Analysis Using Downscaled in Vitro Reporter Gene Assays To Identify Endocrine Disruptors in Surface Water

    PubMed Central

    2018-01-01

    Effect-directed analysis (EDA) is a commonly used approach for effect-based identification of endocrine disruptive chemicals in complex (environmental) mixtures. However, for routine toxicity assessment of, for example, water samples, current EDA approaches are considered time-consuming and laborious. We achieved faster EDA and identification by downscaling of sensitive cell-based hormone reporter gene assays and increasing fractionation resolution to allow testing of smaller fractions with reduced complexity. The high-resolution EDA approach is demonstrated by analysis of four environmental passive sampler extracts. Downscaling of the assays to a 384-well format allowed analysis of 64 fractions in triplicate (or 192 fractions without technical replicates) without affecting sensitivity compared to the standard 96-well format. Through a parallel exposure method, agonistic and antagonistic androgen and estrogen receptor activity could be measured in a single experiment following a single fractionation. From 16 selected candidate compounds, identified through nontargeted analysis, 13 could be confirmed chemically and 10 were found to be biologically active, of which the most potent nonsteroidal estrogens were identified as oxybenzone and piperine. The increased fractionation resolution and the higher throughput that downscaling provides allow for future application in routine high-resolution screening of large numbers of samples in order to accelerate identification of (emerging) endocrine disruptors. PMID:29547277

  2. High-Throughput Protein Expression Using a Combination of Ligation-Independent Cloning (LIC) and Infrared Fluorescent Protein (IFP) Detection

    PubMed Central

    Dortay, Hakan; Akula, Usha Madhuri; Westphal, Christin; Sittig, Marie; Mueller-Roeber, Bernd

    2011-01-01

    Protein expression in heterologous hosts for functional studies is a cumbersome effort. Here, we report a superior platform for parallel protein expression in vivo and in vitro. The platform combines highly efficient ligation-independent cloning (LIC) with instantaneous detection of expressed proteins through N- or C-terminal fusions to infrared fluorescent protein (IFP). For each open reading frame, only two PCR fragments are generated (with three PCR primers) and inserted by LIC into ten expression vectors suitable for protein expression in microbial hosts, including Escherichia coli, Kluyveromyces lactis, Pichia pastoris, the protozoon Leishmania tarentolae, and an in vitro transcription/translation system. Accumulation of IFP-fusion proteins is detected by infrared imaging of living cells or crude protein extracts directly after SDS-PAGE without additional processing. We successfully employed the LIC-IFP platform for in vivo and in vitro expression of ten plant and fungal proteins, including transcription factors and enzymes. Using the IFP reporter, we additionally established facile methods for the visualisation of protein-protein interactions and the detection of DNA-transcription factor interactions in microtiter and gel-free format. We conclude that IFP represents an excellent reporter for high-throughput protein expression and analysis, which can be easily extended to numerous other expression hosts using the setup reported here. PMID:21541323

  3. Parallel nanomanufacturing via electrohydrodynamic jetting from microfabricated externally-fed emitter arrays

    NASA Astrophysics Data System (ADS)

    Ponce de Leon, Philip J.; Hill, Frances A.; Heubel, Eric V.; Velásquez-García, Luis F.

    2015-06-01

    We report the design, fabrication, and characterization of planar arrays of externally-fed silicon electrospinning emitters for high-throughput generation of polymer nanofibers. Arrays with as many as 225 emitters and with emitter density as large as 100 emitters cm⁻² were characterized using a solution of dissolved PEO in water and ethanol. Devices with emitter density as high as 25 emitters cm⁻² deposit uniform imprints comprising fibers with diameters on the order of a few hundred nanometers. Mass flux rates as high as 417 g hr⁻¹ m⁻² were measured, i.e., four times the reported production rate of the leading commercial free-surface electrospinning sources. Throughput increases with increasing array size at constant emitter density, suggesting the design can be scaled up with no loss of productivity. Devices with emitter density equal to 100 emitters cm⁻² fail to generate fibers but uniformly generate electrosprayed droplets. For the arrays tested, the largest measured mass flux resulted from arrays with larger emitter separation operating at larger bias voltages, indicating the strong influence of electrical field enhancement on the performance of the devices. Incorporation of a ground electrode surrounding the array tips helps equalize the emitter field enhancement across the array as well as control the spread of the imprints over larger distances.

  4. Automatic cassette to cassette radiant impulse processor

    NASA Astrophysics Data System (ADS)

    Sheets, Ronald E.

    1985-01-01

    Single wafer rapid annealing using high temperature isothermal processing has become increasingly popular in recent years. In addition to annealing, this process is also being investigated for silicide formation, passivation, glass reflow and alloying. Regardless of the application, there is a strong necessity to automate in order to maintain process control, repeatability, cleanliness and throughput. These requirements have been carefully addressed during the design and development of the Model 180 Radiant Impulse Processor, which is a totally automatic cassette to cassette wafer processing system. Process control and repeatability are maintained by a closed loop optical pyrometer system which maintains the wafer at the programmed temperature-time conditions. Programmed recipes containing up to 10 steps may be easily entered on the computer keyboard or loaded in from a recipe library stored on a standard 5¼″ floppy disk. Cold wall heating chamber construction, a controlled environment (N₂, Ar, forming gas) and quartz wafer carriers prevent contamination of the wafer during high temperature processing. Throughputs of 150-240 wafers per hour are achieved by quickly heating the wafer to temperature (450-1400°C) in 3-6 s with a high intensity, uniform (±1%) radiant flux of 100 W/cm², a parallel wafer handling system and a wafer cool down stage.

  5. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent, and which the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and so that metadata processing performance improves. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to yield a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise entered a non-operational state.

  6. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent, and which the metadata server has already handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and so that metadata processing performance improves. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead the metadata server incurs when it adopts logging or journaling to yield a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise entered a non-operational state. PMID:24892093
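
    A toy sketch may make the mechanism concrete: clients keep an in-memory copy of every metadata request the server has acknowledged, the server applies changes in memory without any journal, and recovery replays the clients' backups. The class names, namespace model, and global replay ordering below are illustrative simplifications, not the paper's protocol:

        class MDS:
            """Metadata server: in-memory namespace, no log, no journal."""
            def __init__(self):
                self.namespace = {}

            def handle(self, req):
                cid, seq, op, path = req
                if op == "create":
                    self.namespace[path] = {"owner": cid}
                elif op == "remove":
                    self.namespace.pop(path, None)

            def recover(self, clients):
                """Rebuild metadata after a crash by replaying the backup
                logs cached by all involved clients (naive ordering: by
                per-client sequence number, then client id)."""
                self.namespace = {}
                for req in sorted((r for c in clients for r in c.backup),
                                  key=lambda r: (r[1], r[0])):
                    self.handle(req)

        class Client:
            """Client file system that backs up each sent metadata request,
            so the server need not journal to nonvolatile storage."""
            def __init__(self, cid, mds):
                self.cid, self.mds, self.backup = cid, mds, []

            def request(self, op, path):
                req = (self.cid, len(self.backup), op, path)
                self.mds.handle(req)     # server applies the change in memory only
                self.backup.append(req)  # cheap in-memory copy kept on the client

        mds = MDS()
        a, b = Client("A", mds), Client("B", mds)
        a.request("create", "/x"); b.request("create", "/y"); a.request("remove", "/x")
        mds.namespace = {}               # simulate an MDS crash
        mds.recover([a, b])
        print(mds.namespace)             # {'/y': {'owner': 'B'}}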

  7. Simultaneous mutation and copy number variation (CNV) detection by multiplex PCR-based GS-FLX sequencing.

    PubMed

    Goossens, Dirk; Moens, Lotte N; Nelis, Eva; Lenaerts, An-Sofie; Glassee, Wim; Kalbe, Andreas; Frey, Bruno; Kopal, Guido; De Jonghe, Peter; De Rijk, Peter; Del-Favero, Jurgen

    2009-03-01

    We evaluated multiplex PCR amplification as a front-end for high-throughput sequencing, to widen the applicability of massive parallel sequencers for the detailed analysis of complex genomes. Using multiplex PCR reactions, we sequenced the complete coding regions of seven genes implicated in peripheral neuropathies in 40 individuals on a GS-FLX genome sequencer (Roche). The resulting dataset showed highly specific and uniform amplification. Comparison of the GS-FLX sequencing data with the dataset generated by Sanger sequencing confirmed the detection of all variants present and proved the sensitivity of the method for mutation detection. In addition, we showed that we could exploit the multiplexed PCR amplicons to determine individual copy number variation (CNV), increasing the spectrum of detected variations to both genetic and genomic variants. We conclude that our straightforward procedure substantially expands the applicability of the massive parallel sequencers for sequencing projects of a moderate number of amplicons (50-500) with typical applications in resequencing exons in positional or functional candidate regions and molecular genetic diagnostics. 2008 Wiley-Liss, Inc.
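
    The CNV side of such an assay rests on a simple quantity: each amplicon's read count, normalized within a sample and then compared against copy-neutral reference samples, tracks copy number. A minimal sketch under that assumption; the sample data, names, and 0.5/1.5 interpretation thresholds are illustrative, not the authors' pipeline:

        import statistics

        def copy_number_ratios(counts_by_sample, reference_samples):
            """Per-amplicon copy-number ratios from multiplexed read counts.
            counts_by_sample: {sample: {amplicon: reads}}. Counts are
            normalized within each sample, then divided by the median
            normalized depth of copy-neutral reference samples; a ratio near
            0.5 suggests a heterozygous deletion, near 1.5 a duplication."""
            norm = {s: {a: n / sum(c.values()) for a, n in c.items()}
                    for s, c in counts_by_sample.items()}
            amplicons = next(iter(norm.values())).keys()
            ref = {a: statistics.median(norm[s][a] for s in reference_samples)
                   for a in amplicons}
            return {s: {a: norm[s][a] / ref[a] for a in amplicons}
                    for s in norm}

        counts = {
            "ref1": {"ex1": 1000, "ex2": 1000, "ex3": 1000, "ex4": 1000},
            "ref2": {"ex1": 900,  "ex2": 950,  "ex3": 1050, "ex4": 900},
            "case": {"ex1": 500,  "ex2": 1000, "ex3": 1000, "ex4": 1000},
        }
        ratios = copy_number_ratios(counts, ["ref1", "ref2"])
        # ex1 drops to ~0.6 (consistent with a heterozygous deletion); with
        # only four amplicons the halved exon slightly inflates the others.
        print({a: round(r, 2) for a, r in ratios["case"].items()})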

  8. Systems-on-chip approach for real-time simulation of wheel-rail contact laws

    NASA Astrophysics Data System (ADS)

    Mei, T. X.; Zhou, Y. J.

    2013-04-01

    This paper presents the development of a systems-on-chip approach to speed up the simulation of wheel-rail contact laws, which can be used to reduce the requirement for high-performance computers and enable simulation in real time for the use of hardware-in-loop for experimental studies of the latest vehicle dynamic and control technologies. The wheel-rail contact laws are implemented using a field programmable gate array (FPGA) device with a design that substantially outperforms modern general-purpose PC platforms or fixed architecture digital signal processor devices in terms of processing time, configuration flexibility and cost. In order to utilise the FPGA's parallel-processing capability, the operations in the contact laws algorithms are arranged in a parallel manner and multi-contact patches are tackled simultaneously in the design. The interface between the FPGA device and the host PC is achieved by using a high-throughput and low-latency Ethernet link. The development is based on FASTSIM algorithms, although the design can be adapted and expanded for even more computationally demanding tasks.

  9. (abstract) A High Throughput 3-D Inner Product Processor

    NASA Technical Reports Server (NTRS)

    Daud, Tuan

    1996-01-01

    A particularly challenging image processing application is real-time scene acquisition and object discrimination. It requires spatio-temporal recognition of point and resolved objects at high speeds with parallel processing algorithms. Neural network paradigms provide fine-grain parallelism and, when implemented in hardware, offer orders of magnitude speed-up. However, neural networks implemented on a VLSI chip are planar architectures capable of efficient processing of linear vector signals rather than 2-D images. Therefore, for processing of images, a 3-D stack of neural-net ICs receiving planar inputs and consuming minimal power is required. Details of the circuits and chip architectures will be described, along with the need to develop ultralow-power electronics. Further, use of the architecture in a system for high-speed processing will be illustrated.

  10. Multishot PROPELLER for high-field preclinical MRI.

    PubMed

    Pandit, Prachi; Qi, Yi; Story, Jennifer; King, Kevin F; Johnson, G Allan

    2010-07-01

    With the development of numerous mouse models of cancer, there is a tremendous need for an appropriate imaging technique to study the disease evolution. High-field T2-weighted imaging using PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI meets this need. The two-shot PROPELLER technique presented here provides (a) high spatial resolution, (b) high contrast resolution, and (c) rapid and noninvasive imaging, which enables high-throughput, longitudinal studies in free-breathing mice. Unique data collection and reconstruction makes this method robust against motion artifacts. The two-shot modification introduced here retains more high-frequency information and provides higher signal-to-noise ratio than conventional single-shot PROPELLER, making this sequence feasible at high fields, where signal loss is rapid. Results are shown in a liver metastases model to demonstrate the utility of this technique in one of the more challenging regions of the mouse, which is the abdomen. (c) 2010 Wiley-Liss, Inc.

  11. High-Throughput Epitope Binning Assays on Label-Free Array-Based Biosensors Can Yield Exquisite Epitope Discrimination That Facilitates the Selection of Monoclonal Antibodies with Functional Activity

    PubMed Central

    Abdiche, Yasmina Noubia; Miles, Adam; Eckman, Josh; Foletti, Davide; Van Blarcom, Thomas J.; Yeung, Yik Andy; Pons, Jaume; Rajpal, Arvind

    2014-01-01

    Here, we demonstrate how array-based label-free biosensors can be applied to the multiplexed interaction analysis of large panels of analyte/ligand pairs, such as the epitope binning of monoclonal antibodies (mAbs). In this application, the larger the number of mAbs that are analyzed for cross-blocking in a pairwise and combinatorial manner against their specific antigen, the higher the probability of discriminating their epitopes. Since cross-blocking of two mAbs is necessary but not sufficient for them to bind an identical epitope, high-resolution epitope binning analysis determined by high-throughput experiments can enable the identification of mAbs with similar but unique epitopes. We demonstrate that a mAb's epitope and functional activity are correlated, thereby strengthening the relevance of epitope binning data to the discovery of therapeutic mAbs. We evaluated two state-of-the-art label-free biosensors that enable the parallel analysis of 96 unique analyte/ligand interactions and nearly ten thousand total interactions per unattended run. The IBIS-MX96 is a microarray-based surface plasmon resonance imager (SPRi) integrated with continuous flow microspotting technology whereas the Octet-HTX is equipped with disposable fiber optic sensors that use biolayer interferometry (BLI) detection. We compared their throughput, versatility, ease of sample preparation, and sample consumption in the context of epitope binning assays. We conclude that the main advantages of the SPRi technology are its exceptionally low sample consumption, facile sample preparation, and unparalleled unattended throughput. In contrast, the BLI technology is highly flexible because it allows for the simultaneous interaction analysis of 96 independent analyte/ligand pairs, ad hoc sensor replacement and on-line reloading of an analyte- or ligand-array. Thus, the complementary use of these two platforms can expedite applications that are relevant to the discovery of therapeutic mAbs, depending upon the sample availability, and the number and diversity of the interactions being studied. PMID:24651868
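
    In data terms, epitope binning reduces to clustering a pairwise cross-blocking matrix. A minimal sketch, under the simplifying assumptions that two mAbs share a bin only if they block each other in both orientations and that bins are the connected components of that mutual-blocking graph (real analyses use richer heuristics):

        import itertools

        def epitope_bins(mabs, blocks):
            """Group mAbs into epitope bins from pairwise cross-blocking data.
            blocks[(a, b)] is True when mAb a, pre-bound to antigen, prevents
            binding of mAb b. Antibodies are binned together only when they
            block each other in both orientations; bins are the connected
            components of the resulting mutual-blocking graph."""
            adj = {m: set() for m in mabs}
            for a, b in itertools.combinations(mabs, 2):
                if blocks.get((a, b)) and blocks.get((b, a)):
                    adj[a].add(b); adj[b].add(a)
            bins, seen = [], set()
            for m in mabs:                      # connected components by DFS
                if m in seen:
                    continue
                stack, comp = [m], set()
                while stack:
                    x = stack.pop()
                    if x not in comp:
                        comp.add(x)
                        stack.extend(adj[x] - comp)
                seen |= comp
                bins.append(sorted(comp))
            return bins

        blocks = {("m1", "m2"): True, ("m2", "m1"): True,   # mutual block: same bin
                  ("m2", "m3"): True, ("m3", "m2"): False}  # one-way block: not binned
        print(epitope_bins(["m1", "m2", "m3"], blocks))     # [['m1', 'm2'], ['m3']]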

  12. From Lab to Fab: Developing a Nanoscale Delivery Tool for Scalable Nanomanufacturing

    NASA Astrophysics Data System (ADS)

    Safi, Asmahan A.

    The emergence of nanomaterials with unique properties at the nanoscale over the past two decades carries a capacity to impact society and transform or create new industries ranging from nanoelectronics to nanomedicine. However, a gap in nanomanufacturing technologies has prevented the translation of nanomaterials into real-world commercialized products. Bridging this gap requires a paradigm shift in methods for fabricating structured devices with nanoscale resolution in a repeatable fashion. This thesis explores new paradigms for fabricating nanoscale structures, devices and systems for high-throughput, high-registration applications. We present a robust and scalable nanoscale delivery platform, the Nanofountain Probe (NFP), for parallel direct-write of functional materials. The design and microfabrication of the NFP are presented. The new generation addresses the challenges of throughput, resolution and ink replenishment characterizing tip-based nanomanufacturing. To achieve these goals, optimized probe geometry is integrated into the process along with channel sealing and cantilever bending. The capabilities of the newly fabricated probes are demonstrated through two types of delivery: protein nanopatterning and single cell nanoinjection. The broad applications of the NFP for single cell delivery are investigated. An external microfluidic packaging is developed to enable delivery in a liquid environment. The system is integrated with a combined atomic force microscope and inverted fluorescence microscope. Intracellular delivery is demonstrated by injecting a fluorescent dextran into HeLa cells in vitro while monitoring the injection forces. Such developments enable in vitro cellular delivery for single cell studies and high-throughput gene expression. The nanomanufacturing capabilities of NFPs are explored. Nanofabrication of carbon nanotube-based electronics presents all the manufacturing challenges characteristic of assembling nanomaterials precisely onto devices. The presented study combines top-down and bottom-up approaches by integrating the catalyst patterning and carbon nanotube growth directly on structures. Large arrays of iron-rich catalyst are patterned on a substrate for subsequent carbon nanotube synthesis. The dependence on probe geometry and substrate wetting is assessed by modeling and experimental studies. Finally, preliminary results on the synthesis of carbon nanotubes by catalyst-assisted chemical vapor deposition suggest that increasing the catalyst yield is critical. Such work will enable high-throughput nanomanufacturing of carbon nanotube-based devices.

  13. High-throughput methods for characterizing the mechanical properties of coatings

    NASA Astrophysics Data System (ADS)

    Siripirom, Chavanin

    The characterization of mechanical properties in a combinatorial and high-throughput workflow has been a bottleneck that reduced the speed of the materials development process. High-throughput characterization of the mechanical properties was applied in this research in order to reduce the amount of sample handling and to accelerate the output. A puncture tester was designed and built to evaluate the toughness of materials using an innovative template design coupled with automation. The test is in the form of a circular free-film indentation. A single template contains 12 samples which are tested in a rapid serial approach. Next, the operational principles of a novel parallel dynamic mechanical-thermal analysis instrument were analyzed in detail for potential sources of errors. The test uses a model of a circular bilayer fixed-edge plate deformation. A total of 96 samples can be analyzed simultaneously which provides a tremendous increase in efficiency compared with a conventional dynamic test. The modulus values determined by the system had considerable variation. The errors were observed and improvements to the system were made. A finite element analysis was used to analyze the accuracy given by the closed-form solution with respect to testing geometries, such as thicknesses of the samples. A good control of the thickness of the sample was proven to be crucial to the accuracy and precision of the output. Then, the attempt to correlate the high-throughput experiments and conventional coating testing methods was made. Automated nanoindentation in dynamic mode was found to provide information on the near-surface modulus and could potentially correlate with the pendulum hardness test using the loss tangent component. Lastly, surface characterization of stratified siloxane-polyurethane coatings was carried out with X-ray photoelectron spectroscopy, Rutherford backscattering spectroscopy, transmission electron microscopy, and nanoindentation. The siloxane component segregates to the surface during curing. The distribution of siloxane as a function of thickness into the sample showed differences depending on the formulation parameters. The coatings which had higher siloxane content near the surface were those coatings found to perform well in field tests.

  14. A whole-genome shotgun approach for assembling and anchoring the hexaploid bread wheat genome

    DOE PAGES

    Chapman, Jarrod A.; Mascher, Martin; Buluc, Aydin; ...

    2015-01-31

    We report that polyploid species have long been thought to be recalcitrant to whole-genome assembly. By combining high-throughput sequencing, recent developments in parallel computing, and genetic mapping, we derive, de novo, a sequence assembly representing 9.1 Gbp of the highly repetitive 16 Gbp genome of hexaploid wheat, Triticum aestivum, and assign 7.1 Gb of this assembly to chromosomal locations. The genome representation and accuracy of our assembly is comparable to or even exceeds that of a chromosome-by-chromosome shotgun assembly. Our assembly and mapping strategy uses only short read sequencing technology and is applicable to any species where it is possible to construct a mapping population.

  15. A whole-genome shotgun approach for assembling and anchoring the hexaploid bread wheat genome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Jarrod A.; Mascher, Martin; Buluc, Aydin

    We report that polyploid species have long been thought to be recalcitrant to whole-genome assembly. By combining high-throughput sequencing, recent developments in parallel computing, and genetic mapping, we derive, de novo, a sequence assembly representing 9.1 Gbp of the highly repetitive 16 Gbp genome of hexaploid wheat, Triticum aestivum, and assign 7.1 Gb of this assembly to chromosomal locations. The genome representation and accuracy of our assembly is comparable to or even exceeds that of a chromosome-by-chromosome shotgun assembly. Our assembly and mapping strategy uses only short read sequencing technology and is applicable to any species where it is possible to construct a mapping population.

  16. Sequence investigation of 34 forensic autosomal STRs with massively parallel sequencing.

    PubMed

    Zhang, Suhua; Niu, Yong; Bian, Yingnan; Dong, Rixia; Liu, Xiling; Bao, Yun; Jin, Chao; Zheng, Hancheng; Li, Chengtao

    2018-05-01

    STRs vary not only in the length of the repeat units and the number of repeats but also in the extent to which they conform to an incremental repeat pattern. Massively parallel sequencing (MPS) offers new possibilities in the analysis of STRs, since it can simultaneously sequence multiple targets in a single reaction and capture potential internal sequence variations. Here, we sequenced 34 STRs applied in the forensic community of China with a custom-designed panel. MPS performance was evaluated through sequencing read analysis, a concordance study and sensitivity testing. High-coverage sequencing data were obtained to determine the constituent ratios and heterozygote balance. No actual inconsistent genotypes were observed between capillary electrophoresis (CE) and MPS, demonstrating the reliability of the panel and the MPS technology. With the sequencing data from the 200 investigated individuals, 346 and 418 alleles were obtained via CE and MPS, respectively, at the 34 STRs, indicating that MPS provides higher discrimination than CE detection. Overall, this study demonstrated that STR genotyping with the custom panel and MPS technology has the potential not only to reveal length and sequence variations but also to satisfy the demands of high throughput and high multiplexing with acceptable sensitivity.

  17. A high-throughput microfluidic dental plaque biofilm system to visualize and quantify the effect of antimicrobials

    PubMed Central

    Nance, William C.; Dowd, Scot E.; Samarian, Derek; Chludzinski, Jeffrey; Delli, Joseph; Battista, John; Rickard, Alexander H.

    2013-01-01

    Objectives Few model systems are amenable to developing multi-species biofilms in parallel under environmentally germane conditions. This is a problem when evaluating the potential real-world effectiveness of antimicrobials in the laboratory. One such antimicrobial is cetylpyridinium chloride (CPC), which is used in numerous over-the-counter oral healthcare products. The aim of this work was to develop a high-throughput microfluidic system that is combined with a confocal laser scanning microscope (CLSM) to quantitatively evaluate the effectiveness of CPC against oral multi-species biofilms grown in human saliva. Methods Twenty-four-channel BioFlux microfluidic plates were inoculated with pooled human saliva and fed filter-sterilized saliva for 20 h at 37°C. The bacterial diversity of the biofilms was evaluated by bacterial tag-encoded FLX amplicon pyrosequencing (bTEFAP). The antimicrobial/anti-biofilm effect of CPC (0.5%–0.001% w/v) was examined using Live/Dead stain, CLSM and 3D imaging software. Results The analysis of biofilms by bTEFAP demonstrated that they contained genera typically found in human dental plaque. These included Aggregatibacter, Fusobacterium, Neisseria, Porphyromonas, Streptococcus and Veillonella. Using Live/Dead stain, clear gradations in killing were observed when the biofilms were treated with CPC between 0.5% and 0.001% w/v. At 0.5% (w/v) CPC, 90% of the total signal was from dead/damaged cells. Below this concentration range, less killing was observed. In the 0.5%–0.05% (w/v) range CPC penetration/killing was greatest and biofilm thickness was significantly reduced. Conclusions This work demonstrates the utility of a high-throughput microfluidic–CLSM system to grow multi-species oral biofilms, which are compositionally similar to naturally occurring biofilms, to assess the effectiveness of antimicrobials. PMID:23800904
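
    Figures like the "90% of the total signal was from dead/damaged cells" above come from integrating the two Live/Dead channels over the confocal stack. A minimal sketch under common assumptions (green = live-stain channel, red = dead/damaged-stain channel; the background handling and synthetic data are illustrative, not the authors' analysis):

        import numpy as np

        def dead_fraction(green_stack, red_stack, background=0):
            """Fraction of total Live/Dead signal from dead/damaged cells.
            Stacks are (z, y, x) intensity arrays from the CLSM; a constant
            background is subtracted before summing each channel."""
            green = np.clip(green_stack.astype(float) - background, 0, None)
            red = np.clip(red_stack.astype(float) - background, 0, None)
            return red.sum() / (red.sum() + green.sum())

        rng = np.random.default_rng(0)
        green = rng.poisson(5, size=(10, 64, 64))   # mostly-dead synthetic biofilm
        red = rng.poisson(45, size=(10, 64, 64))
        print(f"{dead_fraction(green, red):.0%} dead")  # ~90%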

  18. A set of ligation-independent in vitro translation vectors for eukaryotic protein production.

    PubMed

    Bardóczy, Viola; Géczi, Viktória; Sawasaki, Tatsuya; Endo, Yaeta; Mészáros, Tamás

    2008-03-27

    The last decade has brought the renaissance of protein studies and accelerated the development of high-throughput methods in all aspects of proteomics. Presently, most protein synthesis systems exploit the capacity of living cells to translate proteins, but their application is limited by several factors. A more flexible alternative protein production method is cell-free in vitro protein translation. Currently available in vitro translation systems are suitable for high-throughput robotic protein production, fulfilling the requirements of proteomics studies. The wheat germ extract-based in vitro translation system is likely the most promising method, since numerous eukaryotic proteins can be cost-efficiently synthesized in their native folded form. Although currently available vectors for wheat embryo in vitro translation systems ensure high productivity, they do not meet the requirements of state-of-the-art proteomics. Target genes have to be inserted using restriction endonucleases and the plasmids do not encode cleavable affinity purification tags. We designed four ligation-independent cloning (LIC) vectors for wheat germ extract-based in vitro protein translation. In these constructs, RNA transcription is driven by T7 or SP6 phage polymerase and two TEV protease-cleavable affinity tags can be added to aid protein purification. To evaluate our improved vectors, a plant mitogen-activated protein kinase was cloned into all four constructs. Purification of this eukaryotic protein kinase demonstrated that all constructs functioned as intended: insertion of the PCR fragment by LIC worked efficiently, affinity purification of translated proteins by GST-Sepharose or MagneHis particles resulted in high-purity kinase, and the affinity tags could be efficiently removed under different reaction conditions. Furthermore, high in vitro kinase activity testified to proper folding of the purified protein. Four newly designed in vitro translation vectors have been constructed which allow fast and parallel cloning and protein purification, thus representing useful molecular tools for high-throughput production of eukaryotic proteins.

  19. Apparatus for combinatorial screening of electrochemical materials

    DOEpatents

    Kepler, Keith Douglas [Belmont, CA; Wang, Yu [Foster City, CA

    2009-12-15

    A high throughput combinatorial screening method and apparatus for the evaluation of electrochemical materials using a single voltage source (2) is disclosed wherein temperature changes arising from the application of an electrical load to a cell array (1) are used to evaluate the relative electrochemical efficiency of the materials comprising the array. The apparatus may include an array of electrochemical cells (1) that are connected to each other in parallel or in series, an electronic load (2) for applying a voltage or current to the electrochemical cells (1), and a device (3), external to the cells, for monitoring the relative temperature of each cell when the load is applied.

  20. Graphics Processing Units for HEP trigger systems

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the trigger system of the CERN NA62 experiment. The use of GPUs in higher-level trigger systems is also briefly considered.

  1. Matrix preconditioning: a robust operation for optical linear algebra processors.

    PubMed

    Ghosh, A; Paparao, P

    1987-07-15

    Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
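
    Since the abstract above leans on the fact that a lower condition number speeds up gradient-based solvers, a tiny numerical illustration may help. The sketch below applies Jacobi (diagonal) preconditioning to a deliberately ill-scaled symmetric positive-definite matrix in NumPy; it illustrates the condition-number reduction only and does not model the pipelined optical processor.

```python
# A minimal numerical sketch of Jacobi (diagonal) preconditioning: an
# ill-scaled SPD matrix is symmetrically rescaled by its diagonal, and the
# condition number (which governs gradient-solver convergence) drops.
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Well-conditioned SPD core M, then badly scaled rows/columns via S.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q @ np.diag(np.linspace(1.0, 2.0, n)) @ Q.T
S = np.diag(10.0 ** rng.uniform(-3, 3, size=n))
A = S @ M @ S                        # ill-conditioned system matrix

# Jacobi preconditioning: P = D^(-1/2) A D^(-1/2) with D = diag(A).
d = np.sqrt(np.diag(A))
P = A / np.outer(d, d)

print(f"cond(A) = {np.linalg.cond(A):.2e}")   # huge
print(f"cond(P) = {np.linalg.cond(P):.2e}")   # orders of magnitude smaller
```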

  2. Evaluation of fault-tolerant parallel-processor architectures over long space missions

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1989-01-01

    The impact of a five-year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP) is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
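
    To make the reliability figures concrete, the sketch below works the kind of sparing calculation such a requirement implies: given independent processors that each survive the mission with probability p, how many must be provisioned so that at least 256 remain operational with probability at least 0.99? The per-processor failure rate is an illustrative assumption, not a number from the report.

```python
# Sparing calculation sketch: smallest N such that, with independent
# per-processor survival probability p over five years, at least 256
# processors remain operational with probability >= 0.99. LAMBDA is an
# illustrative failure rate, not a figure from the report.
import math
from scipy.stats import binom

NEEDED, TARGET = 256, 0.99
LAMBDA = 1e-6                       # assumed failures per processor-hour
HOURS = 5 * 365.25 * 24             # five-year mission
p_survive = math.exp(-LAMBDA * HOURS)

N = NEEDED
while binom.sf(NEEDED - 1, N, p_survive) < TARGET:   # P(survivors >= 256)
    N += 1
print(f"per-processor 5-year survival: {p_survive:.4f}")
print(f"provision {N} processors ({N - NEEDED} spares)")
```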

  3. Advances in Predictive Toxicology for Discovery Safety through High Content Screening.

    PubMed

    Persson, Mikael; Hornberg, Jorrit J

    2016-12-19

    High content screening enables parallel acquisition of multiple molecular and cellular readouts. The predictive toxicology field in particular has benefited from advances in high content screening, as more refined end points that report on cellular health can be studied in combination, at the single-cell level, and at relatively high throughput. Here, we discuss how high content screening has become an essential tool for Discovery Safety, the discipline that integrates safety and toxicology into the drug discovery process to identify and mitigate safety concerns, with the aim of designing drug candidates with a superior safety profile. In addition to customized mechanistic assays to evaluate target safety, routine screening assays can be applied to identify risk factors for frequently occurring organ toxicities. We discuss the current state of high content screening assays for hepatotoxicity, cardiotoxicity, neurotoxicity, nephrotoxicity, and genotoxicity, including recent developments and current advances.

  4. Real-time simultaneous and proportional myoelectric control using intramuscular EMG

    PubMed Central

    Kuiken, Todd A; Hargrove, Levi J

    2014-01-01

    Objective Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main Results Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts' Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts' Law tasks with high levels of path efficiency. Significance These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control. PMID:25394366

  5. Real-time simultaneous and proportional myoelectric control using intramuscular EMG

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2014-12-01

    Objective. Myoelectric prostheses use electromyographic (EMG) signals to control movement of prosthetic joints. Clinically available myoelectric control strategies do not allow simultaneous movement of multiple degrees of freedom (DOFs); however, the use of implantable devices that record intramuscular EMG signals could overcome this constraint. The objective of this study was to evaluate the real-time simultaneous control of three DOFs (wrist rotation, wrist flexion/extension, and hand open/close) using intramuscular EMG. Approach. We evaluated task performance of five able-bodied subjects in a virtual environment using two control strategies with fine-wire EMG: (i) parallel dual-site differential control, which enabled simultaneous control of three DOFs and (ii) pattern recognition control, which required sequential control of DOFs. Main results. Over the course of the experiment, subjects using parallel dual-site control demonstrated increased use of simultaneous control and improved performance in a Fitts’ Law test. By the end of the experiment, performance using parallel dual-site control was significantly better (up to a 25% increase in throughput) than when using sequential pattern recognition control for tasks requiring multiple DOFs. The learning trends with parallel dual-site control suggested that further improvements in performance metrics were possible. Subjects occasionally experienced difficulty in performing isolated single-DOF movements with parallel dual-site control but were able to accomplish related Fitts’ Law tasks with high levels of path efficiency. Significance. These results suggest that intramuscular EMG, used in a parallel dual-site configuration, can provide simultaneous control of a multi-DOF prosthetic wrist and hand and may outperform current methods that enforce sequential control.

  6. Unambiguous metabolite identification in high-throughput metabolomics by hybrid 1D 1H NMR/ESI MS1 approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Lawrence R.; Hoyt, David W.; Walker, S. Michael

    We present a novel approach to improve the accuracy of metabolite identification by combining direct infusion ESI MS1 with 1D 1H NMR spectroscopy. The new approach first applies a standard 1D 1H NMR metabolite identification protocol by matching the chemical shift, J-coupling and intensity information of experimental NMR signals against the NMR signals of standard metabolites in a metabolomics library. This generates a list of candidate metabolites. The list contains false positive and ambiguous identifications. Next, we constrain the list with the chemical formulas derived from a high-resolution direct infusion ESI MS1 spectrum of the same sample. Detection of the signals of a metabolite both in NMR and MS significantly improves the confidence of identification and eliminates false positive identifications. 1D 1H NMR and direct infusion ESI MS1 spectra of a sample can be acquired in parallel in several minutes. This is highly beneficial for rapid and accurate screening of hundreds of samples in high-throughput metabolomics studies. In order to make this approach practical, we developed a software tool, which is integrated into Chenomx NMR Suite. The approach is demonstrated on a model mixture, tomato and Arabidopsis thaliana metabolite extracts, and human urine.
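
    The core identification logic described above reduces to a set intersection: NMR-derived candidates survive only if their chemical formula also appears in the MS1 data. A minimal sketch, with placeholder metabolites and formulas (not the authors' software, which is integrated into Chenomx NMR Suite):

```python
# Sketch of NMR/MS1 cross-validation: keep an NMR candidate only when its
# chemical formula is also observed among MS1-derived formulas. All example
# metabolites and formulas are illustrative placeholders.
nmr_candidates = {          # metabolite -> chemical formula (from a library)
    "alanine":  "C3H7NO2",
    "lactate":  "C3H6O3",
    "glycine":  "C2H5NO2",  # suppose this one is an NMR false positive
}
ms1_formulas = {"C3H7NO2", "C3H6O3", "C6H12O6"}   # from exact-mass matching

confirmed = {m: f for m, f in nmr_candidates.items() if f in ms1_formulas}
ambiguous = set(nmr_candidates) - set(confirmed)
print("confirmed by NMR+MS:", sorted(confirmed))
print("NMR-only (unconfirmed):", sorted(ambiguous))
```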

  7. Development of a high-performance multichannel system for time-correlated single photon counting

    NASA Astrophysics Data System (ADS)

    Peronio, P.; Cominelli, A.; Acconcia, G.; Rech, I.; Ghioni, M.

    2017-05-01

    Time-Correlated Single Photon Counting (TCSPC) is one of the most effective techniques for measuring weak and fast optical signals. It outperforms traditional "analog" techniques due to its high sensitivity along with high temporal resolution. Despite those significant advantages, a main drawback still exists, which is related to the long acquisition time needed to perform a measurement. In past years many TCSPC systems have been developed with ever higher numbers of channels, aimed at dealing with that limitation. Nevertheless, modern systems suffer from a strong trade-off between parallelism level and performance: the higher the number of channels, the poorer the performance. In this work we present the design of a 32x32 TCSPC system meant to overcome the existing trade-off. To this aim, different technologies have been employed to get the best performance from both the detectors and the sensing circuits. The exploitation of different technologies will be enabled by Through Silicon Vias (TSVs), which will be investigated as a possible solution for connecting the detectors to the sensing circuits. When dealing with a high number of channels, the count rate is inevitably set by the affordable throughput to the external PC. We targeted a throughput of 10 Gb/s, which is beyond the state of the art, and designed the number of TCSPC channels accordingly. A dynamic-routing logic will connect the detectors to the lower number of acquisition chains.
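
    The throughput budgeting mentioned in the abstract is easy to reproduce: a fixed link rate divided by the number of bits per photon event bounds the aggregate count rate. A back-of-envelope sketch, where the 64-bit event record is an assumption for illustration:

```python
# Rough throughput budget: a 10 Gb/s link and an assumed 64-bit event
# record (timestamp + channel id + flags) bound the sustainable count
# rate per channel for a given channel count.
LINK_BPS = 10e9
BITS_PER_EVENT = 64                  # assumed event size, for illustration
events_per_s = LINK_BPS / BITS_PER_EVENT

for channels in (32, 256, 1024):
    per_channel = events_per_s / channels
    print(f"{channels:5d} channels -> {per_channel / 1e6:8.2f} Mcounts/s per channel")
```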

  8. Data from Tiered High-Throughput Screening Approach to Identify Thyroperoxidase Inhibitors within the ToxCast Phase I and II Chemical Libraries

    EPA Pesticide Factsheets

    High-throughput screening for potential thyroid-disrupting chemicals requires a system of assays to capture multiple molecular-initiating events (MIEs) that converge on perturbed thyroid hormone (TH) homeostasis. Screening for MIEs specific to TH-disrupting pathways is limited in the U.S. Environmental Protection Agency ToxCast screening assay portfolio. To fill 1 critical screening gap, the Amplex UltraRed-thyroperoxidase (AUR-TPO) assay was developed to identify chemicals that inhibit TPO, as decreased TPO activity reduces TH synthesis. The ToxCast phase I and II chemical libraries, comprised of 1074 unique chemicals, were initially screened using a single, high concentration to identify potential TPO inhibitors. Chemicals positive in the single-concentration screen were retested in concentration-response. Due to high false-positive rates typically observed with loss-of-signal assays such as AUR-TPO, we also employed 2 additional assays in parallel to identify possible sources of nonspecific assay signal loss, enabling stratification of roughly 300 putative TPO inhibitors based upon selective AUR-TPO activity. A cell-free luciferase inhibition assay was used to identify nonspecific enzyme inhibition among the putative TPO inhibitors, and a cytotoxicity assay using a human cell line was used to estimate the cellular tolerance limit. Additionally, the TPO inhibition activities of 150 chemicals were compared between the AUR-TPO and an orthogonal peroxidase oxidation assay.

  9. In situ patterned micro 3D liver constructs for parallel toxicology testing in a fluidic device

    PubMed Central

    Skardal, Aleksander; Devarasetty, Mahesh; Soker, Shay; Hall, Adam R

    2017-01-01

    3D tissue models are increasingly being implemented for drug and toxicology testing. However, the creation of tissue-engineered constructs for this purpose often relies on complex biofabrication techniques that are time consuming, expensive, and difficult to scale up. Here, we describe a strategy for realizing multiple tissue constructs in a parallel microfluidic platform using an approach that is simple and can be easily scaled for high-throughput formats. Liver cells mixed with a UV-crosslinkable hydrogel solution are introduced into parallel channels of a sealed microfluidic device and photopatterned to produce stable tissue constructs in situ. The remaining uncrosslinked material is washed away, leaving the structures in place. By using a hydrogel that specifically mimics the properties of the natural extracellular matrix, we closely emulate native tissue, resulting in constructs that remain stable and functional in the device during a 7-day culture time course under recirculating media flow. As proof of principle for toxicology analysis, we expose the constructs to ethyl alcohol (0–500 mM) and show that the cell viability and the secretion of urea and albumin decrease with increasing alcohol exposure, while markers for cell damage increase. PMID:26355538

  10. In situ patterned micro 3D liver constructs for parallel toxicology testing in a fluidic device.

    PubMed

    Skardal, Aleksander; Devarasetty, Mahesh; Soker, Shay; Hall, Adam R

    2015-09-10

    3D tissue models are increasingly being implemented for drug and toxicology testing. However, the creation of tissue-engineered constructs for this purpose often relies on complex biofabrication techniques that are time consuming, expensive, and difficult to scale up. Here, we describe a strategy for realizing multiple tissue constructs in a parallel microfluidic platform using an approach that is simple and can be easily scaled for high-throughput formats. Liver cells mixed with a UV-crosslinkable hydrogel solution are introduced into parallel channels of a sealed microfluidic device and photopatterned to produce stable tissue constructs in situ. The remaining uncrosslinked material is washed away, leaving the structures in place. By using a hydrogel that specifically mimics the properties of the natural extracellular matrix, we closely emulate native tissue, resulting in constructs that remain stable and functional in the device during a 7-day culture time course under recirculating media flow. As proof of principle for toxicology analysis, we expose the constructs to ethyl alcohol (0-500 mM) and show that the cell viability and the secretion of urea and albumin decrease with increasing alcohol exposure, while markers for cell damage increase.

  11. Nanopore arrays in a silicon membrane for parallel single-molecule detection: DNA translocation

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Schmidt, Torsten; Jemt, Anders; Sahlén, Pelin; Sychugov, Ilya; Lundeberg, Joakim; Linnros, Jan

    2015-08-01

    Optical nanopore sensing offers great potential in single-molecule detection, genotyping, or DNA sequencing for high-throughput applications. However, one of the bottlenecks for fluorophore-based biomolecule sensing is the lack of an optically optimized membrane with a large array of nanopores, which has large pore-to-pore distance, small variation in pore size and low background photoluminescence (PL). Here, we demonstrate parallel detection of single-fluorophore-labeled DNA strands (450 bp) translocating through an array of silicon nanopores that fulfills the above-mentioned requirements for optical sensing. The nanopore array was fabricated using electron beam lithography and anisotropic etching followed by electrochemical etching, resulting in pore diameters down to ∼7 nm. The DNA translocation measurements were performed in a conventional wide-field microscope tailored for effective background PL control. The individual nanopore diameter was found to have a substantial effect on the translocation velocity, where smaller openings slow the translocation enough for the event to be clearly detectable in the fluorescence. Our results demonstrate that a uniform silicon nanopore array combined with wide-field optical detection is a promising alternative with which to realize massively-parallel single-molecule detection.

  12. Microfluidic Pneumatic Logic Circuits and Digital Pneumatic Microprocessors for Integrated Microfluidic Systems

    PubMed Central

    Rhee, Minsoung

    2010-01-01

    We have developed pneumatic logic circuits and microprocessors built with microfluidic channels and valves in polydimethylsiloxane (PDMS). The pneumatic logic circuits perform various combinational and sequential logic calculations with binary pneumatic signals (atmosphere and vacuum), producing cascadable outputs based on Boolean operations. A complex microprocessor is constructed from combinations of various logic circuits and receives pneumatically encoded serial commands at a single input line. The device then decodes the temporal command sequence by spatial parallelization, computes necessary logic calculations between parallelized command bits, stores command information for signal transportation and maintenance, and finally executes the command for the target devices. Thus, such pneumatic microprocessors will function as a universal on-chip control platform to perform complex parallel operations for large-scale integrated microfluidic devices. To demonstrate the working principles, we have built 2-bit, 3-bit, 4-bit, and 8-bit microprocessors to control various target devices for applications such as four-color dye mixing and multiplexed channel fluidic control. By significantly reducing the need for external controllers, the digital pneumatic microprocessor can be used as a universal on-chip platform to autonomously manipulate microfluids in a high-throughput manner. PMID:19823730

  13. QuASAR-MPRA: accurate allele-specific analysis for massively parallel reporter assays.

    PubMed

    Kalita, Cynthia A; Moyerbrailean, Gregory A; Brown, Christopher; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger

    2018-03-01

    The majority of the human genome is composed of non-coding regions containing regulatory elements such as enhancers, which are crucial for controlling gene expression. Many variants associated with complex traits are in these regions, and may disrupt gene regulatory sequences. Consequently, it is important to not only identify true enhancers but also to test if a variant within an enhancer affects gene regulation. Recently, allele-specific analysis in high-throughput reporter assays, such as massively parallel reporter assays (MPRAs), has been used to functionally validate non-coding variants. However, we are still missing high-quality and robust data analysis tools for these datasets. We have further developed our method for allele-specific analysis, QuASAR (quantitative allele-specific analysis of reads), to analyze allele-specific signals in barcoded read count data from MPRAs. Using this approach, we can take into account the uncertainty in the original plasmid proportions, over-dispersion, and sequencing errors. The provided allelic skew estimate and its standard error also simplify meta-analysis of replicate experiments. Additionally, we show that a beta-binomial distribution better models the variability present in the allelic imbalance of these synthetic reporters and results in a test that is statistically well calibrated under the null. Applying this approach to the MPRA data, we found 602 SNPs with significant (false discovery rate 10%) allele-specific regulatory function in LCLs. We also show that we can combine MPRA with QuASAR estimates to validate existing experimental and computational annotations of regulatory variants. Our study shows that with appropriate data analysis tools, we can improve the power to detect allelic effects in high-throughput reporter assays. http://github.com/piquelab/QuASAR/tree/master/mpra. fluca@wayne.edu or rpique@wayne.edu. Supplementary data are available at Bioinformatics online.
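
    To illustrate why a beta-binomial null is used, the sketch below tests a single variant for allelic imbalance given k reference reads out of n, with an over-dispersion parameter rho; larger rho widens the null and makes the test more conservative than a plain binomial. This is a hedged illustration, not the QuASAR implementation available at the GitHub link above, and the rho value is an arbitrary assumption.

```python
# Beta-binomial test for allelic imbalance: null mean 0.5 with
# over-dispersion rho. Beta(a, a) with a = (1 - rho) / (2 * rho) gives
# mean 0.5 and dispersion rho = 1 / (a + b + 1). rho here is illustrative.
from scipy.stats import betabinom

def allelic_imbalance_pvalue(k, n, rho=0.02):
    a = (1.0 - rho) / (2.0 * rho)
    null = betabinom(n, a, a)
    p_obs = null.pmf(k)
    # Two-sided p-value: total probability of outcomes no more likely than k.
    return sum(null.pmf(i) for i in range(n + 1) if null.pmf(i) <= p_obs)

print(allelic_imbalance_pvalue(70, 100))   # skewed counts  -> small p
print(allelic_imbalance_pvalue(52, 100))   # balanced counts -> large p
```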

  14. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.

  15. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine-grained parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general-purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and a 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirements on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
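
    The performance model quoted above can be reproduced as a one-line roofline calculation, using only the numbers given in the abstract:

```python
# Roofline calculation from the abstract's own figures:
# throughput = arithmetic intensity x peak bandwidth x memory efficiency.
ops_per_byte = 130 / 64.0        # PLF kernel: 130 flops per 64 B of I/O
peak_bw_gbs = 76.8               # Convey HC-1 peak memory bandwidth, GB/s
mem_eff = 0.50                   # achieved memory efficiency

gflops = ops_per_byte * peak_bw_gbs * mem_eff
print(f"{gflops:.0f} Gflops")    # ~78 Gflops, matching the reported figure
```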

  16. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry-standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing performed by Exelis VIS shows that orthorectification can take as long as two hours with a WorldView-1 35,000 x 35,000 pixel image. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can successfully be used by first responders and by scientists making rapid discoveries with near-real-time data, and it provides an operational component to data centers needing to quickly process and disseminate data.

  17. Strategic and Operational Plan for Integrating Transcriptomics ...

    EPA Pesticide Factsheets

    Plans for incorporating high-throughput transcriptomics into the current high-throughput screening activities at NCCT; the details are in the attached slide presentation, given at the OECD meeting on June 23, 2016.

  18. High-Throughput Experimental Approach Capabilities | Materials Science | NREL

    Science.gov Websites

    NREL's high-throughput experimental approach capabilities include combinatorial sputtering of chalcogenide (…,Te) and oxysulfide materials, and nitride and oxynitride sputtering (Combi-5), among several other systems.

  19. Multi-gigabit optical interconnects for next-generation on-board digital equipment

    NASA Astrophysics Data System (ADS)

    Venet, Norbert; Favaro, Henri; Sotom, Michel; Maignan, Michel; Berthon, Jacques

    2017-11-01

    Parallel optical interconnects are experimentally assessed as a technology that may offer the high-throughput data communication capabilities required by the next generation of on-board digital processing units. An optical backplane interconnect was breadboarded, on the basis of a digital transparent processor that provides flexible connectivity and variable bandwidth in telecom missions with multi-beam antenna coverage. The unit selected for the demonstration required that more than tens of Gbit/s be supported by the backplane. The demonstration made use of commercial parallel optical link modules at 850 nm wavelength, with 12 channels running at up to 2.5 Gbit/s. A flexible optical fibre circuit was developed so as to route board-to-board connections. It was plugged into the optical transmitter and receiver modules through 12-fibre MPO connectors. BER below 10^-14 and optical link budgets in excess of 12 dB were measured, which would enable broadcasting to be integrated. Integration of the optical backplane interconnect was successfully demonstrated by validating the overall digital processor functionality.

  20. Multi-gigabit optical interconnects for next-generation on-board digital equipment

    NASA Astrophysics Data System (ADS)

    Venet, Norbert; Favaro, Henri; Sotom, Michel; Maignan, Michel; Berthon, Jacques

    2004-06-01

    Parallel optical interconnects are experimentally assessed as a technology that may offer the high-throughput data communication capabilities required by the next generation of on-board digital processing units. An optical backplane interconnect was breadboarded, on the basis of a digital transparent processor that provides flexible connectivity and variable bandwidth in telecom missions with multi-beam antenna coverage. The unit selected for the demonstration required that more than tens of Gbit/s be supported by the backplane. The demonstration made use of commercial parallel optical link modules at 850 nm wavelength, with 12 channels running at up to 2.5 Gbit/s. A flexible optical fibre circuit was developed so as to route board-to-board connections. It was plugged into the optical transmitter and receiver modules through 12-fibre MPO connectors. BER below 10^-14 and optical link budgets in excess of 12 dB were measured, which would enable broadcasting to be integrated. Integration of the optical backplane interconnect was successfully demonstrated by validating the overall digital processor functionality.

  1. A force-based, parallel assay for the quantification of protein-DNA interactions.

    PubMed

    Limmer, Katja; Pippig, Diana A; Aschenbrenner, Daniela; Gaub, Hermann E

    2014-01-01

    Analysis of transcription factor binding to DNA sequences is of utmost importance for understanding the intricate regulatory mechanisms that underlie gene expression. Several techniques exist that quantify DNA-protein affinity, but they are either very time-consuming or, like many high-throughput techniques, suffer from possible misinterpretation due to complicated algorithms or approximations. We present a more direct method to quantify DNA-protein interactions in a force-based assay. In contrast to single-molecule force spectroscopy, our technique, the Molecular Force Assay (MFA), parallelizes force measurements so that it can test one or multiple proteins against several DNA sequences in a single experiment. The interaction strength is quantified by comparison to the well-defined rupture stability of different DNA duplexes. As a proof of principle, we measured the interaction of the zinc finger construct Zif268/NRE against six different DNA constructs. We could show the specificity of our approach and quantify the strength of the protein-DNA interaction.

  2. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, S.; Labanca, I.; Rech, I.

    2014-10-15

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.
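
    What such a correlator computes can be sketched in software: the normalized intensity autocorrelation g2(tau) of a photon-count trace over logarithmically spaced lags. The direct estimator below is a simplified stand-in for the multi-tau architecture that real-time FPGA correlators typically implement:

```python
# Direct (non-multi-tau) estimator of the normalized autocorrelation
# g2(tau) = <I(t) I(t + tau)> / <I>^2 over log-spaced lag times.
import numpy as np

def g2(counts, lags):
    counts = np.asarray(counts, dtype=float)
    mean_sq = counts.mean() ** 2
    out = []
    for lag in lags:
        prod = counts[:-lag] * counts[lag:] if lag > 0 else counts * counts
        out.append(prod.mean() / mean_sq)
    return np.array(out)

rng = np.random.default_rng(1)
trace = rng.poisson(5.0, size=100_000)            # uncorrelated shot noise
lags = np.unique(np.logspace(0, 4, 30).astype(int))
print(g2(trace, lags))                            # ~1.0 at all lags
```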

  3. A 32-bit Ultrafast Parallel Correlator using Resonant Tunneling Devices

    NASA Technical Reports Server (NTRS)

    Kulkarni, Shriram; Mazumder, Pinaki; Haddad, George I.

    1995-01-01

    An ultrafast 32-bit pipeline correlator has been implemented using resonant tunneling diodes (RTDs) and heterojunction bipolar transistors (HBTs). The negative differential resistance (NDR) characteristic of RTDs is the basis of logic gates with a self-latching property that eliminates the pipeline area and delay overheads which limit throughput in conventional technologies. The circuit topology also allows threshold logic functions such as minority/majority to be implemented in a compact manner, resulting in a reduction of the overall complexity and delay of arbitrary logic circuits. The parallel correlator is an essential component in code division multiple access (CDMA) transceivers, used for the continuous calculation of correlation between an incoming data stream and a PN sequence. Simulation results show that a nano-pipelined correlator can provide an effective throughput of one 32-bit correlation every 100 picoseconds, using minimal hardware, with a power dissipation of 1.5 watts. RTD+HBT logic gates have been fabricated, and the RTD+HBT correlator is compared with state-of-the-art complementary metal oxide semiconductor (CMOS) implementations.

  4. Capillary array scanner for time-resolved detection and identification of fluorescently labelled DNA fragments.

    PubMed

    Neumann, M; Herten, D P; Dietrich, A; Wolfrum, J; Sauer, M

    2000-02-25

    The first capillary array scanner for time-resolved fluorescence detection in parallel capillary electrophoresis based on semiconductor technology is described. The system consists essentially of a confocal fluorescence microscope and an x,y-microscope scanning stage. Fluorescence of the labelled probe molecules was excited using a short-pulse diode laser emitting at 640 nm with a repetition rate of 50 MHz. Using a single filter system, the fluorescence decays of different labels were detected by an avalanche photodiode in combination with a PC plug-in card for time-correlated single-photon counting (TCSPC). The time-resolved fluorescence signals were analyzed and identified by a maximum likelihood estimator (MLE). The x,y-microscope scanning stage allows for discontinuous, bidirectional scanning of up to 16 capillaries in an array, resulting in longer fluorescence collection times per capillary compared to scanners working in a continuous mode. Synchronization of the alignment and measurement processes was developed to allow for data acquisition without overhead. Detection limits in the subzeptomol range for different dye molecules separated in parallel capillaries have been achieved. In addition, we report on parallel time-resolved detection and separation of more than 400 bases of single-base-extension DNA fragments in capillary array electrophoresis. Using only semiconductor technology, the presented technique represents a low-cost alternative for high-throughput DNA sequencing in parallel capillaries.
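
    The MLE identification step described above can be sketched as follows: a measured arrival-time histogram is assigned to whichever candidate label's exponential decay model maximizes the multinomial log-likelihood. The lifetimes, the 20 ns window, and the 64-bin histogram are illustrative assumptions:

```python
# MLE label identification from a TCSPC histogram: pick the candidate
# lifetime whose exponential-decay bin probabilities give the highest
# multinomial log-likelihood for the observed counts.
import numpy as np

EDGES = np.linspace(0.0, 20.0, 65)          # ns, 64 TCSPC bins (assumed)

def bin_probs(tau):
    cdf = 1.0 - np.exp(-EDGES / tau)        # exponential decay model
    p = np.diff(cdf)
    return p / p.sum()                      # renormalize over the window

def classify(histogram, lifetimes):
    logL = [np.sum(histogram * np.log(bin_probs(t))) for t in lifetimes]
    return int(np.argmax(logL))             # index of best-matching label

rng = np.random.default_rng(2)
true_tau, candidates = 2.8, [1.0, 2.8, 4.5]       # ns, illustrative labels
hist = rng.multinomial(5000, bin_probs(true_tau))  # simulated photon counts
print("identified label:", classify(hist, candidates))   # expected: 1
```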

  5. Discovery of highly selective brain-penetrant vasopressin 1a antagonists for the potential treatment of autism via a chemogenomic and scaffold hopping approach.

    PubMed

    Ratni, Hasane; Rogers-Evans, Mark; Bissantz, Caterina; Grundschober, Christophe; Moreau, Jean-Luc; Schuler, Franz; Fischer, Holger; Alvarez Sanchez, Ruben; Schnider, Patrick

    2015-03-12

    From a micromolar high-throughput screening hit 7, the successful complementary application of a chemogenomic approach and of a scaffold hopping exercise rapidly led to a low single-digit nanomolar human vasopressin 1a (hV1a) receptor antagonist 38. Initial optimization of the mouse V1a activities delivered suitable tool compounds which demonstrated a V1a-mediated central in vivo effect. This novel series was further optimized through parallel synthesis with a focus on balancing lipophilicity to achieve robust aqueous solubility while avoiding P-gp mediated efflux. These efforts led to the discovery of the highly potent and selective brain-penetrant hV1a antagonist RO5028442 (8) suitable for human clinical studies in people with autism.

  6. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from spacecraft under certain conditions.
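
    The linear-programming part of the innovation can be sketched for the uncoded baseline: split messages across links so as to maximize the expected value received, where a piece is decodable only if its link works. The sketch below, with illustrative numbers, omits the cross-link erasure coding that the innovation adds (and which can outperform this baseline):

```python
# LP baseline (no cross-link coding): x[i, j] is the fraction of message i
# sent on link j. Maximize sum_ij value_i * p_j * x_ij subject to sending
# at most the whole message and respecting link capacities. All numbers
# are illustrative.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.9, 0.7, 0.5])        # link working probabilities
cap = np.array([3.0, 2.0, 2.0])      # link capacities
size = np.array([4.0, 2.0, 1.0])     # message sizes
value = np.array([10.0, 6.0, 1.0])   # message values

M, L = len(size), len(p)
c = -(value[:, None] * p[None, :]).ravel()      # maximize -> negate

A_ub, b_ub = [], []
for i in range(M):                   # send at most the whole message
    row = np.zeros(M * L); row[i * L:(i + 1) * L] = 1.0
    A_ub.append(row); b_ub.append(1.0)
for j in range(L):                   # respect each link's capacity
    row = np.zeros(M * L); row[j::L] = size
    A_ub.append(row); b_ub.append(cap[j])

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=(0, 1))
print("expected value:", -res.fun)
print(res.x.reshape(M, L).round(3))  # message-to-link fractions
```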

  7. Using ALFA for high throughput, distributed data transmission in the ALICE O2 system

    NASA Astrophysics Data System (ADS)

    Wegrzynek, A.; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter (the Quark-Gluon Plasma) at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data in Run 2 since spring 2015. In parallel, preparations for a major upgrade of the computing system, called O2 (Online-Offline) and scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements of the system is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), which perform data aggregation, frame building and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. The ALFA framework will standardize and handle software-related tasks such as readout, data transport, frame building, calibration, online reconstruction and more in the upgraded computing system. ALFA supports two data transport libraries: ZeroMQ and nanomsg. This paper discusses the efficiency of ALFA in terms of high-throughput data transport. The tests were performed with multiple FLPs pushing data to multiple EPNs. The transfer was done using push-pull communication patterns and two socket configurations: bind and connect. The set of benchmarks was prepared to get the most performant results on each hardware setup. The paper presents the measurement process and final results: data throughput combined with computing resource usage as a function of block size. The high number of nodes and connections in the final setup may cause race conditions that can lead to uneven load balancing and poor scalability. The performed tests allow us to validate whether the traffic is distributed evenly over all receivers. They also measure the behaviour of the network in saturation and evaluate scalability from a 1-to-1 to an N-to-M solution.
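
    The push-pull pattern benchmarked in the paper is straightforward to reproduce in miniature with one of the two transport libraries named above. The pyzmq sketch below shows the bind (FLP-like) and connect (EPN-like) roles and a naive throughput measurement; it only illustrates the socket pattern, not ALFA's transport layer:

```python
# Minimal ZeroMQ push-pull throughput probe: one producer binds a PUSH
# socket (FLP-like role), one consumer connects a PULL socket (EPN-like
# role) and times the transfer. Message count and size are illustrative.
import threading, time, zmq

N, SIZE = 10_000, 64_000
ctx = zmq.Context()

def consumer():
    pull = ctx.socket(zmq.PULL)
    pull.connect("tcp://127.0.0.1:5555")   # EPN side: connect
    t0 = time.time()
    total = sum(len(pull.recv()) for _ in range(N))
    print(f"{total * 8 / (time.time() - t0) / 1e9:.2f} Gb/s")

t = threading.Thread(target=consumer)
t.start()
push = ctx.socket(zmq.PUSH)
push.bind("tcp://127.0.0.1:5555")          # FLP side: bind
for _ in range(N):
    push.send(b"x" * SIZE)
t.join()
```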

  8. CoMiniGut—a small volume in vitro colon model for the screening of gut microbial fermentation processes

    PubMed Central

    Khakimov, Bekzod; Nielsen, Sebastian; Sørensen, Helena; van den Berg, Frans; Nielsen, Dennis Sandris

    2018-01-01

    Driven by the growing recognition of the influence of the gut microbiota (GM) on human health and disease, there is a rapidly increasing interest in understanding how dietary components, pharmaceuticals and pre- and probiotics influence GM. In vitro colon models represent an attractive tool for this purpose. With the dual objective of facilitating the investigation of rare and expensive compounds and of increasing throughput, we have developed a prototype in vitro parallel gut microbial fermentation screening tool with a working volume of only 5 ml, consisting of five parallel reactor units that can be expanded in multiples of five to increase throughput. This allows, e.g., the investigation of interpersonal variations in gut microbial dynamics and the acquisition of larger data sets with enhanced statistical inference. The functionality of the in vitro colon model, Copenhagen MiniGut (CoMiniGut), was first demonstrated in experiments with two common prebiotics, the oligosaccharide inulin and the disaccharide lactulose, at 1% (w/v). We then investigated fermentation of the scarce and expensive human milk oligosaccharides (HMOs) 3-Fucosyllactose, 3'-Sialyllactose and 6'-Sialyllactose, and the more common Fructooligosaccharide, in fermentations with infant gut microbial communities. Investigations of microbial community composition dynamics in the CoMiniGut reactors by MiSeq-based 16S rRNA gene amplicon high-throughput sequencing showed excellent experimental reproducibility and allowed us to extract significant differences in gut microbial composition after 24 h of fermentation for all investigated substrates and fecal donors. Furthermore, short chain fatty acids (SCFAs) were quantified for all treatments and donors. Fermentations with inulin and lactulose showed that inulin leads to a microbiota dominated by obligate anaerobes, with high relative abundance of Bacteroidetes, while the more easily fermented lactulose leads to higher relative abundance of Proteobacteria. The subsequent study on the influence of HMOs on two infant GM communities revealed the strongest bifidogenic effect for 3′SL in both infants. Inter-individual differences of infant GM, especially with regard to the occurrence of Bacteroidetes and differences in bifidobacterial species composition, correlated with varying degrees of HMO utilization, foremost of 6′SL and 3′FL, indicating species- and strain-related differences in HMO utilization. This was also reflected in SCFA concentrations, with 3′SL and 6′SL resulting in significantly higher butyrate production compared to 3′FL. In conclusion, the increased throughput of CoMiniGut strengthens experimental conclusions through the elimination of statistical interference originating from a low number of replicates. Its small working volume moreover allows the investigation of rare and expensive bioactives. PMID:29372119

  9. Managing evaporation for more robust microscale assays. Part 2. Characterization of convection and diffusion for cell biology.

    PubMed

    Berthier, Erwin; Warrick, Jay; Yu, Hongmeiy; Beebe, David J

    2008-06-01

    Cell-based microassays allow the screening of a multitude of culture conditions in parallel, which can be used for various applications from drug screening to fundamental cell biology research. Tubeless microfluidic devices based on passive pumping are a step towards accessible high-throughput microassays; however, they are vulnerable to evaporation. In addition to volume loss, evaporation can lead to the generation of small flows. Here, we focus on issues of convection and diffusion for cell culture in microchannels, and particularly the transport of soluble factors secreted by cells. We find that even at humidity levels as high as 95%, convection in a passive pumping channel can significantly alter distributions of these factors, and that appropriate system design can prevent convection.

  10. Global phenotypic characterisation of human platelet lysate expanded MSCs by high-throughput flow cytometry.

    PubMed

    Reis, Monica; McDonald, David; Nicholson, Lindsay; Godthardt, Kathrin; Knobel, Sebastian; Dickinson, Anne M; Filby, Andrew; Wang, Xiao-Nong

    2018-03-02

    Mesenchymal stromal cells (MSCs) are a promising cell source for developing cell therapies for many diseases. Human platelet lysate (PLT) is increasingly used as an alternative to foetal calf serum (FCS) for clinical-scale MSC production. To date, the global surface protein expression of PLT-expanded MSCs (MSC-PLT) is not known. To investigate this, paired MSC-PLT and MSC-FCS were analysed in parallel using high-throughput flow cytometry for the expression of 356 cell surface proteins. MSC-PLT showed differential surface protein expression compared to their MSC-FCS counterpart. A higher percentage of positive cells was observed in MSC-PLT for 48 surface proteins, of which 13 were significantly enriched on MSC-PLT. This finding was validated using multiparameter flow cytometry and further confirmed by quantitative staining intensity analysis. The enriched surface proteins are relevant to increased proliferation and migration capacity, as well as enhanced chondrogenic and osteogenic differentiation properties. In silico network analysis revealed that these enriched surface proteins are involved in three distinct networks that are associated with inflammatory responses, carbohydrate metabolism and cellular motility. This is the first study reporting differential cell surface protein expression between MSC-PLT and MSC-FCS. Further studies are required to uncover the impact of those enriched proteins on biological functions of MSC-PLT.

  11. High throughput operando studies using Fourier transform infrared imaging and Raman spectroscopy.

    PubMed

    Li, Guosheng; Hu, Dehong; Xia, Guanguang; White, J M; Zhang, Conrad

    2008-07-01

    A prototype high throughput operando (HTO) reactor designed and built for catalyst screening and characterization combines Fourier transform infrared (FT-IR) imaging and Raman spectroscopy under operando conditions. Using a focal plane array detector (HgCdTe focal plane array, 128x128 pixels, and 1610 Hz frame rate) for the FT-IR imaging system, the catalyst activity and selectivity of all parallel reaction channels can be simultaneously followed. Each image data set comprises 16,384 IR spectra with a spectral range of 800-4000 cm^-1 and an 8 cm^-1 resolution. Depending on the signal-to-noise ratio, 2-20 s are needed to generate a full image of all reaction channels for a data set. Results on reactant conversion and product selectivity are obtained from FT-IR spectral analysis. Six novel Raman probes, one for each reaction channel, were specially designed and built in-house at Pacific Northwest National Laboratory to simultaneously collect Raman spectra of the catalysts and possible reaction intermediates on the catalyst surface under operando conditions. As a model system, the methanol partial oxidation reaction on silica-supported molybdenum oxide (MoO3/SiO2) catalysts has been studied under different reaction conditions to demonstrate the performance of the HTO reactor.

  12. Femtomole-Scale High-Throughput Screening of Protein Ligands with Droplet-Based Thermal Shift Assay.

    PubMed

    Liu, Wen-Wen; Zhu, Ying; Fang, Qun

    2017-06-20

    There is a great demand to measure protein-ligand interactions in a rapid and low-cost way. Here, we developed a microfluidic droplet-based thermal shift assay (dTSA) system for high-throughput screening of small-molecule protein ligands. The system is composed of a nanoliter droplet array chip, a microfluidic droplet robot, and a real-time fluorescence detection system. A total of 324 assays could be performed in parallel in a single chip with an 18 × 18 droplet array. The consumption of each protein or ligand sample per dTSA was only 5 nL (femtomole scale), a reduction of over 3 orders of magnitude compared with 96- or 384-well plate-based systems. We also observed that implementing TSA in a nanoliter droplet format could substantially improve assay precision, with a relative standard deviation (RSD) of 0.2% (n = 50), which can be ascribed to the enhanced thermal conduction in small-volume reactors. The dTSA system was optimized by studying the effect of droplet volumes, as well as protein and fluorescent dye (SYPRO Orange) concentrations. To demonstrate its potential in drug discovery, we applied the dTSA system to screening inhibitors of human thrombin with a commercial library containing 100 different small-molecule compounds, and two inhibitors were successfully identified and confirmed.
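
    For context on what a thermal shift assay measures, the sketch below fits a Boltzmann sigmoid to a synthetic fluorescence-versus-temperature curve to extract the melting temperature Tm; a ligand-induced shift in Tm relative to a protein-only control indicates binding. The model and the data are illustrative, not the paper's analysis pipeline:

```python
# Thermal shift assay melt-curve fit: a Boltzmann sigmoid is fitted to
# fluorescence vs. temperature and the inflection point is reported as Tm.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, F_min, F_max, Tm, slope):
    return F_min + (F_max - F_min) / (1.0 + np.exp((Tm - T) / slope))

T = np.linspace(30, 90, 61)                       # deg C
rng = np.random.default_rng(3)
fluo = boltzmann(T, 100, 1000, 55.0, 2.0) + rng.normal(0, 10, T.size)

popt, _ = curve_fit(boltzmann, T, fluo, p0=(fluo.min(), fluo.max(), 60, 2))
print(f"fitted Tm = {popt[2]:.1f} C")             # ~55.0 C for this data
```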

  13. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
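
    Point (1) above, adding a level of parallelization on a multi-core machine, can be sketched by running one single-core Vina process per ligand from a worker pool, with the seed fixed and recorded. File names are placeholders; only documented Vina command-line options are used:

```python
# Parallel virtual screening sketch: dock many ligands concurrently, each
# Vina process restricted to one core, with a fixed seed recorded for
# reproducibility. Receptor/ligand/config file names are placeholders.
import subprocess
from multiprocessing import Pool

LIGANDS = ["lig_0001.pdbqt", "lig_0002.pdbqt", "lig_0003.pdbqt"]  # placeholders

def dock(ligand):
    out = ligand.replace(".pdbqt", "_out.pdbqt")
    cmd = ["vina", "--receptor", "receptor.pdbqt", "--ligand", ligand,
           "--config", "box.conf",        # search-box definition
           "--seed", "42",                # capture the seed for reproducibility
           "--exhaustiveness", "8",
           "--cpu", "1",                  # one core per process ...
           "--out", out]
    subprocess.run(cmd, check=True)
    return out

if __name__ == "__main__":
    with Pool(processes=8) as pool:       # ... eight ligands in flight at once
        print(pool.map(dock, LIGANDS))
```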

  14. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  15. High throughput sequencing analysis of RNA libraries reveals the influences of initial library and PCR methods on SELEX efficiency.

    PubMed

    Takahashi, Mayumi; Wu, Xiwei; Ho, Michelle; Chomchan, Pritsana; Rossi, John J; Burnett, John C; Zhou, Jiehua

    2016-09-22

    The systematic evolution of ligands by exponential enrichment (SELEX) technique is a powerful and effective aptamer-selection procedure. However, modifications to the process can dramatically improve selection efficiency and aptamer performance. For example, droplet digital PCR (ddPCR) has recently been incorporated into SELEX selection protocols to putatively reduce the propagation of byproducts and avoid the selection bias that results from differences in the PCR efficiency of sequences within the random library. However, a detailed, parallel comparison of the efficacy of conventional solution PCR versus the ddPCR modification in the RNA aptamer-selection process is needed to understand the effects on overall SELEX performance. In the present study, we took advantage of powerful high-throughput sequencing technology and bioinformatics analysis coupled with SELEX (HT-SELEX) to thoroughly investigate the effects of the initial library and PCR method on RNA aptamer identification. Our analysis revealed that distinct "biased sequences" and nucleotide compositions existed in the initial, unselected libraries purchased from two different manufacturers and that the fate of the "biased sequences" was target-dependent during selection. Our comparison of solution PCR- and ddPCR-driven HT-SELEX demonstrated that the PCR method affected not only the nucleotide composition of the enriched sequences, but also the overall SELEX efficiency and aptamer efficacy.
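
    The library diagnostics described above start from simple composition statistics. A minimal sketch computing overall and per-position nucleotide composition of the random region, with placeholder reads standing in for sequencing data:

```python
# Library-bias diagnostics: overall and per-position nucleotide frequencies
# of the random region in an unselected library. Reads are illustrative
# stand-ins for high-throughput sequencing data.
from collections import Counter

reads = ["ACGGT", "ACGTT", "GCGGA", "ACGGC"]      # placeholder random regions

overall = Counter("".join(reads))                 # overall composition
total = sum(overall.values())
print({b: round(overall[b] / total, 3) for b in "ACGT"})

for pos in range(len(reads[0])):                  # per-position composition
    col = Counter(r[pos] for r in reads)
    print(pos, {b: col.get(b, 0) / len(reads) for b in "ACGT"})
```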

  16. A copy number variation genotyping method for aneuploidy detection in spontaneous abortion specimens.

    PubMed

    Chen, Songchang; Liu, Deyuan; Zhang, Junyu; Li, Shuyuan; Zhang, Lanlan; Fan, Jianxia; Luo, Yuqin; Qian, Yeqing; Huang, Hefeng; Liu, Chao; Zhu, Huanhuan; Jiang, Zhengwen; Xu, Chenming

    2017-02-01

    Chromosomal abnormalities such as aneuploidy have been shown to be responsible for causing spontaneous abortion. Genetic evaluation of abortions is currently underperformed. Screening for aneuploidy in the products of conception can help determine the etiology. We designed a high-throughput ligation-dependent probe amplification (HLPA) assay to examine aneuploidy of 24 chromosomes in miscarriage tissues and aimed to validate the performance of this technique. We carried out aneuploidy screening in 98 fetal tissue samples collected from female subjects with singleton pregnancies who experienced spontaneous abortion. The mean maternal age was 31.6 years (range: 24-43), and the mean gestational age was 10.2 weeks (range: 4.6-14.1). HLPA was performed in parallel with array comparative genomic hybridization, which is the gold standard for aneuploidy detection in clinical practices. The results from the two platforms were compared. Forty-nine out of ninety-eight samples were found to be aneuploid. HLPA showed concordance with array comparative genomic hybridization in diagnosing aneuploidy. High-throughput ligation-dependent probe amplification is a rapid and accurate method for aneuploidy detection. It can be used as a cost-effective screening procedure in clinical spontaneous abortions. © 2016 John Wiley & Sons, Ltd.

  17. MultiSense: A Multimodal Sensor Tool Enabling the High-Throughput Analysis of Respiration.

    PubMed

    Keil, Peter; Liebsch, Gregor; Borisjuk, Ljudmilla; Rolletschek, Hardy

    2017-01-01

    The high-throughput analysis of respiratory activity has become an important component of many biological investigations. Here, a technological platform, denoted the "MultiSense tool," is described. The tool enables the parallel monitoring of respiration in 100 samples over an extended time period, by dynamically tracking the concentrations of oxygen (O2) and/or carbon dioxide (CO2) and/or pH within an airtight vial. Its flexible design supports the quantification of respiration based on either oxygen consumption or carbon dioxide release, thereby allowing for the determination of the physiologically significant respiratory quotient (the ratio between the quantities of CO2 released and the O2 consumed). It requires an LED light source to be mounted above the sample, together with a CCD camera system, adjusted to enable the capture of analyte-specific wavelengths, and fluorescent sensor spots inserted into the sample vial. Here, a demonstration is given of the use of the MultiSense tool to quantify respiration in imbibing plant seeds, for which an appropriate step-by-step protocol is provided. The technology can be easily adapted for a wide range of applications, including the monitoring of gas exchange in any kind of liquid culture system (algae, embryo and tissue culture, cell suspensions, microbial cultures).
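
    The respiratory quotient at the heart of the MultiSense tool reduces to comparing gas-exchange rates. A minimal numpy sketch, assuming synthetic O2 and CO2 headspace traces sampled once per minute:

      import numpy as np

      # hypothetical headspace concentrations sampled once per minute (arbitrary units)
      t = np.arange(0, 60.0)            # minutes
      o2 = 20.9 - 0.02 * t              # O2 consumed over time
      co2 = 0.04 + 0.019 * t            # CO2 released over time

      o2_slope = np.polyfit(t, o2, 1)[0]    # d[O2]/dt < 0
      co2_slope = np.polyfit(t, co2, 1)[0]  # d[CO2]/dt > 0

      rq = co2_slope / -o2_slope            # respiratory quotient = CO2 released / O2 consumed
      print(f"RQ = {rq:.2f}")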

  18. High-throughput flow injection analysis mass spectroscopy with networked delivery of color-rendered results. 2. Three-dimensional spectral mapping of 96-well combinatorial chemistry racks.

    PubMed

    Görlach, E; Richmond, R; Lewis, I

    1998-08-01

    For the last two years, the mass spectroscopy section of the Novartis Pharma Research Core Technology group has analyzed tens of thousands of multiple parallel synthesis samples from the Novartis Pharma Combinatorial Chemistry program, using an in-house developed automated high-throughput flow injection analysis electrospray ionization mass spectroscopy system. The electrospray spectra of these samples reflect the many structures present after the cleavage step from the solid support. The overall success of the sequential synthesis is mirrored in the purity of the expected end product, but the partial success of individual synthesis steps is evident in the impurities in the mass spectrum. However this latter reaction information, which is of considerable utility to the combinatorial chemist, is effectively hidden from view by the very large number of analyzed samples. This information is now revealed at the workbench of the combinatorial chemist by a novel three-dimensional display of each rack's complete mass spectral ion current using the in-house RackViewer Visual Basic application. Colorization of "forbidden loss" and "forbidden gas-adduct" zones, normalization to expected monoisotopic molecular weight, colorization of ionization intensity, and sorting by row or column were used in combination to highlight systematic patterns in the mass spectroscopy data.
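
    A hedged sketch of the normalization step described above: each well's strongest observed mass is compared against the expected monoisotopic molecular weight and flagged if it falls into a "forbidden loss" zone. All masses, wells, and the 0.5 Da tolerance are hypothetical.

      # hypothetical expected and observed monoisotopic masses for wells of a 96-well rack
      expected = {"A1": 431.2, "A2": 389.1}        # from the combinatorial design
      observed = {"A1": 431.2, "A2": 371.1}        # strongest ion per well

      FORBIDDEN_LOSS = 18.0  # e.g. a water-loss zone flagged in the rack view

      for well, m_exp in expected.items():
          delta = m_exp - observed[well]
          status = "ok" if abs(delta) < 0.5 else (
              "forbidden loss" if abs(delta - FORBIDDEN_LOSS) < 0.5 else "impurity")
          print(well, f"delta m = {delta:+.1f}", status)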

  19. High throughput sequencing analysis of RNA libraries reveals the influences of initial library and PCR methods on SELEX efficiency

    PubMed Central

    Takahashi, Mayumi; Wu, Xiwei; Ho, Michelle; Chomchan, Pritsana; Rossi, John J.; Burnett, John C.; Zhou, Jiehua

    2016-01-01

    The systemic evolution of ligands by exponential enrichment (SELEX) technique is a powerful and effective aptamer-selection procedure. However, modifications to the process can dramatically improve selection efficiency and aptamer performance. For example, droplet digital PCR (ddPCR) has been recently incorporated into SELEX selection protocols to putatively reduce the propagation of byproducts and avoid selection bias that result from differences in PCR efficiency of sequences within the random library. However, a detailed, parallel comparison of the efficacy of conventional solution PCR versus the ddPCR modification in the RNA aptamer-selection process is needed to understand effects on overall SELEX performance. In the present study, we took advantage of powerful high throughput sequencing technology and bioinformatics analysis coupled with SELEX (HT-SELEX) to thoroughly investigate the effects of initial library and PCR methods in the RNA aptamer identification. Our analysis revealed that distinct “biased sequences” and nucleotide composition existed in the initial, unselected libraries purchased from two different manufacturers and that the fate of the “biased sequences” was target-dependent during selection. Our comparison of solution PCR- and ddPCR-driven HT-SELEX demonstrated that PCR method affected not only the nucleotide composition of the enriched sequences, but also the overall SELEX efficiency and aptamer efficacy. PMID:27652575

  20. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improvement in optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a Spatial Light Modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics performed by means of Zernike polynomials is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024 pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
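
    The weighted GS update lends itself to a compact demonstration. The following numpy sketch is a 2D analogue of the 3D weighted GS algorithm mentioned above (not the authors' implementation): it iterates between the SLM and focal planes, reweighting each beamlet toward the mean spot amplitude to improve homogeneity. Spot positions and iteration count are arbitrary.

      import numpy as np

      n = 128
      ys, xs = [30, 60, 90], [40, 64, 88]           # hypothetical beamlet positions
      target = np.zeros((n, n)); target[ys, xs] = 1.0
      weights = np.ones_like(target)
      phase = 2 * np.pi * np.random.rand(n, n)      # random initial SLM phase

      for _ in range(30):
          field = np.fft.fft2(np.exp(1j * phase))               # SLM plane -> focal plane
          spot = np.abs(field)[ys, xs]
          weights[ys, xs] *= spot.mean() / spot                 # weighted-GS uniformity update
          constrained = weights * target * np.exp(1j * np.angle(field))
          phase = np.angle(np.fft.ifft2(constrained))           # keep phase, discard amplitude

      spots = np.abs(np.fft.fft2(np.exp(1j * phase)))[ys, xs]
      print("spot uniformity:", spots.min() / spots.max())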

  1. Squeezing water from a stone: high-throughput sequencing from a 145-year old holotype resolves (barely) a cryptic species problem in flying lizards.

    PubMed

    McGuire, Jimmy A; Cotoras, Darko D; O'Connell, Brendan; Lawalata, Shobi Z S; Wang-Claypool, Cynthia Y; Stubbs, Alexander; Huang, Xiaoting; Wogan, Guinevere O U; Hykin, Sarah M; Reilly, Sean B; Bi, Ke; Riyanto, Awal; Arida, Evy; Smith, Lydia L; Milne, Heather; Streicher, Jeffrey W; Iskandar, Djoko T

    2018-01-01

    We used Massively Parallel High-Throughput Sequencing to obtain genetic data from a 145-year-old holotype specimen of the flying lizard, Draco cristatellus. Obtaining genetic data from this holotype was necessary to resolve an otherwise intractable taxonomic problem involving the status of this species relative to closely related sympatric Draco species that cannot otherwise be distinguished from one another on the basis of museum specimens. Initial analyses suggested that the DNA present in the holotype sample was so degraded as to be unusable for sequencing. However, we used a specialized extraction procedure developed for highly degraded ancient DNA samples and MiSeq shotgun sequencing to obtain just enough low-coverage mitochondrial DNA (721 base pairs) to conclusively resolve the species status of the holotype as well as a second known specimen of this species. The holotype was prepared before the advent of formalin-fixation and therefore was most likely originally fixed with ethanol and never exposed to formalin. Whereas conventional wisdom suggests that formalin-fixed samples should be the most challenging for DNA sequencing, we propose that evaporation during long-term alcohol storage and consequent water-exposure may subject older ethanol-fixed museum specimens to hydrolytic damage. If so, this may pose an even greater challenge for sequencing efforts involving historical samples.

  2. High-Throughput Screening and Quantitative Chemical Ranking for Sodium-Iodide Symporter Inhibitors in ToxCast Phase I Chemical Library.

    PubMed

    Wang, Jun; Hallinger, Daniel R; Murr, Ashley S; Buckalew, Angela R; Simmons, Steven O; Laws, Susan C; Stoker, Tammy E

    2018-05-01

    Thyroid uptake of iodide via the sodium-iodide symporter (NIS) is the first step in the biosynthesis of thyroid hormones that are critical for health and development in humans and wildlife. Although NIS has long been a known target of endocrine-disrupting chemicals such as perchlorate, information regarding NIS inhibition activity is still unavailable for the vast majority of environmental chemicals. This study applied a previously validated high-throughput approach to screen for NIS inhibitors in the ToxCast phase I library, representing 293 important environmental chemicals. Here, 310 blinded samples were screened in a tiered approach using an initial single-concentration (100 μM) radioactive-iodide uptake (RAIU) assay, followed by 169 samples further evaluated in multi-concentration (0.001 μM-100 μM) testing in parallel RAIU and cell viability assays. A novel chemical ranking system that incorporates multi-concentration RAIU and cytotoxicity responses was also developed as a standardized method for chemical prioritization in current and future screenings. Representative chemical responses and thyroid effects of high-ranking chemicals are further discussed. This study significantly expands current knowledge of NIS inhibition potential in environmental chemicals and provides critical support to U.S. EPA's Endocrine Disruptor Screening Program (EDSP) initiative to expand coverage of thyroid molecular targets, as well as the development of thyroid adverse outcome pathways (AOPs).
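
    The abstract does not give the ranking formula, so the following Python sketch shows only one plausible scheme for a quantitative chemical ranking that rewards RAIU potency and separation from cytotoxicity; the AC50 values and the scoring function itself are assumptions for illustration.

      import math

      # hypothetical fitted AC50 values (in μM) from multi-concentration testing
      chems = {
          "chem_A": {"raiu_ac50": 0.8, "cytotox_ac50": 90.0},
          "chem_B": {"raiu_ac50": 5.0, "cytotox_ac50": 7.0},
      }

      def rank_score(raiu_ac50, cytotox_ac50):
          # more potent NIS inhibition (low RAIU AC50) and wider separation from
          # cytotoxicity (high ratio) both increase the score
          potency = -math.log10(raiu_ac50 * 1e-6)
          selectivity = math.log10(cytotox_ac50 / raiu_ac50)
          return potency + selectivity

      for name, d in sorted(chems.items(), key=lambda kv: -rank_score(**kv[1])):
          print(name, round(rank_score(**d), 2))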

  3. Translating Computational Toxicology Data Through ...

    EPA Pesticide Factsheets

    US EPA has been using in vitro testing methods in an effort to accelerate the pace of chemical evaluations and address the significant lack of health and environmental data on the thousands of chemicals found in commonly used products. Since 2005, EPA's researchers have generated hazard data using in vitro methods for thousands of chemicals, designed innovative chemical exposure prediction models, and created a repository of thousands of high-quality chemical structure data. Recently, EPA's ToxCast research effort released high-throughput screening data on thousands of chemicals. These chemicals were screened for potential health effects in over 700 high-throughput screening assay endpoints. As part of EPA's commitment to transparency, all data is accessible through the Chemical Safety for Sustainability Dashboard (iCSS). Policy makers and stakeholders can analyze and use this data to help inform decisions they make about chemicals. Use of these new datasets in risk decisions will require changing a regulatory paradigm that has been used for decades. EPA recognized early in the ToxCast effort that a communications and outreach strategy was needed to parallel the research and aid with the development and use of these new data sources. The goal is to use communications and outreach to increase awareness, interest, and use of these new chemical evaluation methods. To accomplish this, EPA employs numerous communication and outreach including t

  4. Tempest: GPU-CPU computing for high-throughput database spectral matching.

    PubMed

    Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A

    2012-07-06

    Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
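
    The "Accelerated Score" is described as a computationally inexpensive dot product. A minimal numpy sketch of dot-product spectral scoring, assuming hypothetical peak lists and a unit-resolution binning scheme (the exact binning Tempest uses is not specified in the abstract):

      import numpy as np

      def binned(spectrum, n_bins=2000, bin_width=1.0005):
          """Vectorize an MS/MS peak list as intensities on a fixed m/z grid."""
          v = np.zeros(n_bins)
          for mz, intensity in spectrum:
              idx = int(mz / bin_width)
              if idx < n_bins:
                  v[idx] = max(v[idx], intensity)
          n = np.linalg.norm(v)
          return v / n if n else v

      # hypothetical experimental and theoretical peak lists: (m/z, intensity)
      exp = [(175.1, 40.0), (276.2, 100.0), (389.3, 65.0)]
      theo = [(175.1, 50.0), (276.2, 50.0), (503.3, 50.0)]

      score = float(np.dot(binned(exp), binned(theo)))  # dot-product similarity
      print(f"score = {score:.3f}")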

  5. Corrugated metal-coated tapered tip for scanning near-field optical microscope.

    PubMed

    Antosiewicz, Tomasz J; Szoplik, Tomasz

    2007-08-20

    This paper addresses an important issue of light throughput of a metal-coated tapered tip for a scanning near-field optical microscope (SNOM). Corrugations of the interface between the fiber core and metal coating, in the form of parallel grooves of different profiles etched in the core, considerably increase the energy throughput. In 2D FDTD simulations in Cartesian coordinates we calculate the near-field light emitted from such tips. For a certain wavelength range, the total intensity of forward emission from the corrugated tip is 10 times stronger than that from a classical tapered tip. When realized in practice, the idea of a corrugated tip may lead to up to twice better SNOM resolution.

  6. Impact of media and antifoam selection on monoclonal antibody production and quality using a high throughput micro‐bioreactor system

    PubMed Central

    Velugula‐Yellela, Sai Rashmika; Williams, Abasha; Trunfio, Nicholas; Hsu, Chih‐Jung; Chavez, Brittany; Yoon, Seongkyu

    2017-01-01

    Monoclonal antibody production in commercial scale cell culture bioprocessing requires a thorough understanding of the engineering process and components used throughout manufacturing. It is important to identify high impact components early on during the lifecycle of a biotechnology‐derived product. While cell culture media selection is of obvious importance to the health and productivity of mammalian bioreactor operations, other components such as antifoam selection can also play an important role in bioreactor cell culture. Silicone polymer‐based antifoams were known to have negative impacts on cell health, production, and downstream filtration and purification operations. High throughput screening in micro‐scale bioreactors provides an efficient strategy to identify initial operating parameters. Here, we utilized a micro‐scale parallel bioreactor system to study an IgG1 producing CHO cell line, to screen Dynamis, ProCHO5, PowerCHO2, EX‐Cell Advanced, and OptiCHO media, and 204, C, EX‐Cell, SE‐15, and Y‐30 antifoams and their impacts on IgG1 production, cell growth, aggregation, and process control. This study found ProCHO5, EX‐Cell Advanced, and PowerCHO2 media supported strong cellular growth profiles, with an IVCD of 25‐35 × 10⁶ cells‐d/mL, while maintaining specific antibody production (Qp > 2 pg/cell‐d) for our model cell line and a monomer percentage above 94%. Antifoams C, EX‐Cell, and SE‐15 were capable of providing adequate control of foaming while antifoam 204 and Y‐30 noticeably stunted cellular growth. This work highlights the utility of high throughput micro bioreactors and the importance of identifying both positive and negative impacts of media and antifoam selection on a model IgG1 producing CHO cell line. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 34:262–270, 2018 PMID:29086492

  7. High-throughput process development: I. Process chromatography.

    PubMed

    Rathore, Anurag S; Bhambure, Rahul

    2014-01-01

    Chromatographic separation serves as "a workhorse" for downstream process development and plays a key role in removal of product-related, host cell-related, and process-related impurities. Complex and poorly characterized raw materials and feed material, low feed concentration, product instability, and poor mechanistic understanding of the processes are some of the critical challenges that are faced during development of a chromatographic step. Traditional process development is performed as trial-and-error-based evaluation and often leads to a suboptimal process. A high-throughput process development (HTPD) platform involves an integration of miniaturization, automation, and parallelization and provides a systematic approach for time- and resource-efficient chromatography process development. Creation of such platforms requires integration of mechanistic knowledge of the process with various statistical tools for data analysis. The relevance of such a platform is high in view of the constraints with respect to time and resources that the biopharma industry faces today. This protocol describes the steps involved in performing HTPD of a process chromatography step. It describes the operation of a commercially available device (PreDictor™ plates from GE Healthcare). This device is available in 96-well format with 2 or 6 μL well size. We also discuss the challenges that one faces when performing such experiments as well as possible solutions to alleviate them. Besides describing the operation of the device, the protocol also presents an approach for statistical analysis of the data that is gathered from such a platform. A case study involving use of the protocol for examining ion-exchange chromatography of granulocyte colony-stimulating factor (GCSF), a therapeutic product, is briefly discussed. This is intended to demonstrate the usefulness of this protocol in generating data that is representative of the data obtained at the traditional lab scale. The agreement in the data is indeed very significant (regression coefficient 0.93). We think that this protocol will be of significant value to those involved in performing high-throughput process development of process chromatography.

  8. Emerging patterns of somatic mutations in cancer

    PubMed Central

    Watson, Ian R.; Takahashi, Koichi; Futreal, P. Andrew; Chin, Lynda

    2014-01-01

    The advance in technological tools for massively parallel, high-throughput sequencing of DNA has enabled the comprehensive characterization of somatic mutations in large number of tumor samples. Here, we review recent cancer genomic studies that have assembled emerging views of the landscapes of somatic mutations through deep sequencing analyses of the coding exomes and whole genomes in various cancer types. We discuss the comparative genomics of different cancers, including mutation rates, spectrums, and roles of environmental insults that influence these processes. We highlight the developing statistical approaches used to identify significantly mutated genes, and discuss the emerging biological and clinical insights from such analyses as well as the challenges ahead translating these genomic data into clinical impacts. PMID:24022702

  9. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.

    PubMed

    Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I

    2015-01-01

    This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of the current commodity graphics processors to accelerate the generation of high resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce 970 GTX device.

  10. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

    This paper describes a flexible parallel processing architecture designed for use in real time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed; results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed, and results from the application of one such modification are described.
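
    Decomposing a transform-based coder for parallel processing can be illustrated by farming out 8×8 DCT blocks to a pool of workers. A Python sketch under assumed parameters (the original system used DSP processors and serial links, not Python processes):

      import numpy as np
      from multiprocessing import Pool

      def dct_matrix(n=8):
          """Orthonormal DCT-II basis matrix."""
          k = np.arange(n)
          c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
          c[0, :] = np.sqrt(1.0 / n)
          return c

      C = dct_matrix()

      def transform_block(block):
          return C @ block @ C.T   # 2-D DCT of one 8x8 block

      if __name__ == "__main__":
          image = np.random.rand(64, 64)  # stand-in for a video frame
          blocks = [image[i:i+8, j:j+8] for i in range(0, 64, 8) for j in range(0, 64, 8)]
          with Pool(4) as pool:           # blocks farmed out to parallel workers
              coeffs = pool.map(transform_block, blocks)
          print(len(coeffs), "blocks transformed")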

  11. Parallel selection of antibody libraries on phage and yeast surfaces via a cross-species display.

    PubMed

    Patel, Chirag A; Wang, Jinqing; Wang, Xinwei; Dong, Feng; Zhong, Pingyu; Luo, Peter P; Wang, Kevin C

    2011-09-01

    We created a cross-species display system that allows the display of the same antibody libraries on both prokaryotic phage and eukaryotic yeast without the need for molecular cloning. Using this cross-display system, a large, diverse library can be constructed once and subsequently used for display and selection in both phage and yeast systems. In this article, we performed the parallel phage and yeast selection of an antibody maturation library using this cross-display platform. This parallel selection allowed us to isolate more unique hits than single-species selection, with 162 unique clones from phage and 107 unique clones from yeast. In addition, we were able to shuttle yeast hits back to Escherichia coli cells for affinity characterization at a higher throughput.

  12. Non-CAR resists and advanced materials for Massively Parallel E-Beam Direct Write process integration

    NASA Astrophysics Data System (ADS)

    Pourteau, Marie-Line; Servin, Isabelle; Lepinay, Kévin; Essomba, Cyrille; Dal'Zotto, Bernard; Pradelles, Jonathan; Lattard, Ludovic; Brandt, Pieter; Wieland, Marco

    2016-03-01

    The emerging Massively Parallel-Electron Beam Direct Write (MP-EBDW) is an attractive high resolution high throughput lithography technology. As previously shown, Chemically Amplified Resists (CARs) meet process/integration specifications in terms of dose-to-size, resolution, contrast, and energy latitude. However, they are still limited by their line width roughness. To overcome this issue, we tested an alternative advanced non-CAR and showed it brings a substantial gain in sensitivity compared to CAR. We also implemented and assessed in-line post-lithographic treatments for roughness mitigation. For outgassing-reduction purpose, a top-coat layer is added to the total process stack. A new generation top-coat was tested and showed improved printing performances compared to the previous product, especially avoiding dark erosion: SEM cross-section showed a straight pattern profile. A spin-coatable charge dissipation layer based on conductive polyaniline has also been tested for conductivity and lithographic performances, and compatibility experiments revealed that the underlying resist type has to be carefully chosen when using this product. Finally, the Process Of Reference (POR) trilayer stack defined for 5 kV multi-e-beam lithography was successfully etched with well opened and straight patterns, and no lithography-etch bias.

  13. Comparison of capacitive and radio frequency resonator sensors for monitoring parallelized droplet microfluidic production.

    PubMed

    Conchouso, David; McKerricher, Garret; Arevalo, Arpys; Castro, David; Shamim, Atif; Foulds, Ian G

    2016-08-16

    Scaled-up production of microfluidic droplets, through the parallelization of hundreds of droplet generators, has received a lot of attention to bring novel multiphase microfluidics research to industrial applications. However, apart from droplet generation, other significant challenges relevant to this goal have never been discussed. Examples include monitoring systems, high-throughput processing of droplets and quality control procedures among others. In this paper, we present and compare capacitive and radio frequency (RF) resonator sensors as two candidates that can measure the dielectric properties of emulsions in microfluidic channels. By placing several of these sensors in a parallelization device, the stability of the droplet generation at different locations can be compared, and potential malfunctions can be detected. This strategy enables for the first time the monitoring of scaled-up microfluidic droplet production. Both sensors were prototyped and characterized using emulsions with droplets of 100-150 μm in diameter, which were generated in parallelization devices at water-in-oil volume fractions (φ) between 11.1% and 33.3%. Using these sensors, we were able to accurately measure increments as small as 2.4% in the water volume fraction of the emulsions. Although both methods rely on the dielectric properties of the emulsions, the main advantage of the RF resonator sensors is the fact that they can be designed to resonate at multiple frequencies of the broadband transmission line. Consequently, with careful design, two or more sensors can be parallelized and read out by a single signal. Finally, a comparison between these sensors based on their sensitivity, readout cost and simplicity, and design flexibility is also discussed.

  14. Automated assessment of pain in rats using a voluntarily accessed static weight-bearing test.

    PubMed

    Kim, Hung Tae; Uchimoto, Kazuhiro; Duellman, Tyler; Yang, Jay

    2015-11-01

    The weight-bearing test is one method to assess pain in rodent animal models; however, the acceptance of this convenient method is limited by the low throughput data acquisition and necessity of confining the rodents to a small chamber. We developed novel data acquisition hardware and software, data analysis software, and a conditioning protocol for an automated high throughput static weight-bearing assessment of pain. With this device, the rats voluntarily enter the weighing chamber, precluding the necessity to restrain the animals and thereby removing the potential stress-induced confounds as well as operator selection bias during data collection. We name this device the Voluntarily Accessed Static Incapacitance Chamber (VASIC). Control rats subjected to the VASIC device provided hundreds of weight-bearing data points in a single behavioral assay. Chronic constriction injury (CCI) surgery and paw pad injection of complete Freund's adjuvant (CFA) or carrageenan in rats generated hundreds of weight-bearing data during a 30 minute recording session. Rats subjected to CCI, CFA, or carrageenan demonstrated the expected bias in weight distribution favoring the un-operated leg, and the analgesic effect of i.p. morphine was demonstrated. In comparison with existing methods, brief water restriction encouraged the rats to enter the weighing chamber to access water, and an infrared detector confirmed the rat position with feet properly positioned on the footplates, triggering data collection. This allowed hands-off measurement of weight distribution data reducing operator selection bias. The VASIC device should enhance the hands-free parallel collection of unbiased weight-bearing data in a high throughput manner, allowing further testing of this behavioral measure as an effective assessment of pain in rodents. Copyright © 2015. Published by Elsevier Inc.

  15. Automated assessment of pain in rats using a voluntarily accessed static weight-bearing test

    PubMed Central

    Kim, Hung Tae; Uchimoto, Kazuhiro; Duellman, Tyler; Yang, Jay

    2015-01-01

    The weight-bearing test is one method to assess pain in rodent animal models; however, the acceptance of this convenient method is limited by the low throughput data acquisition and necessity of confining the rodents to a small chamber. New methods: We developed novel data acquisition hardware and software, data analysis software, and a conditioning protocol for an automated high throughput static weight-bearing assessment of pain. With this device, the rats voluntarily enter the weighing chamber, precluding the necessity to restrain the animals and thereby removing the potential stress-induced confounds as well as operator selection bias during data collection. We name this device the Voluntarily Accessed Static Incapacitance Chamber (VASIC). Results: Control rats subjected to the VASIC device provided hundreds of weight-bearing data points in a single behavioral assay. Chronic constriction injury (CCI) surgery and paw pad injection of complete Freund's adjuvant (CFA) or carrageenan in rats generated hundreds of weight-bearing data during a 30 minute recording session. Rats subjected to CCI, CFA, or carrageenan demonstrated the expected bias in weight distribution favoring the un-operated leg, and the analgesic effect of i.p. morphine was demonstrated. In comparison with existing methods, brief water restriction encouraged the rats to enter the weighing chamber to access water, and an infrared detector confirmed the rat position with feet properly positioned on the footplates, triggering data collection. This allowed hands-off measurement of weight distribution data reducing operator selection bias. Conclusion: The VASIC device should enhance the hands-free parallel collection of unbiased weight-bearing data in a high throughput manner, allowing further testing of this behavioral measure as an effective assessment of pain in rodents. PMID:26143745

  16. Mapping quantum yield for (Fe-Zn-Sn-Ti)Ox photoabsorbers using a high throughput photoelectrochemical screening system.

    PubMed

    Xiang, Chengxiang; Haber, Joel; Marcin, Martin; Mitrovic, Slobodan; Jin, Jian; Gregoire, John M

    2014-03-10

    Combinatorial synthesis and screening of light absorbers are critical to material discoveries for photovoltaic and photoelectrochemical applications. One of the most effective ways to evaluate the energy-conversion properties of a semiconducting light absorber is to form an asymmetric junction and investigate the photogeneration, transport and recombination processes at the semiconductor interface. This standard photoelectrochemical measurement is readily made on a semiconductor sample with a back-side metallic contact (working electrode) and front-side solution contact. In a typical combinatorial material library, each sample shares a common back contact, requiring novel instrumentation to provide spatially resolved and thus sample-resolved measurements. We developed a multiplexing counter electrode with a thin layer assembly, in which a rectifying semiconductor/liquid junction was formed and the short-circuit photocurrent was measured under chopped illumination for each sample in a material library. The multiplexing counter electrode assembly demonstrated a photocurrent sensitivity of sub-10 μA cm⁻² with an external quantum yield sensitivity of 0.5% for each semiconductor sample under a monochromatic ultraviolet illumination source. The combination of cell architecture and multiplexing allows high-throughput modes of operation, including both fast-serial and parallel measurements. To demonstrate the performance of the instrument, the external quantum yields of 1819 different compositions from a pseudoquaternary metal oxide library, (Fe-Zn-Sn-Ti)Ox, at 385 nm were collected in scanning serial mode with a throughput of as fast as 1 s per sample. Preliminary screening results identified a promising ternary composition region centered at Fe0.894Sn0.103Ti0.0034Ox, with an external quantum yield of 6.7% at 385 nm.
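
    The external quantum yield reported here relates the measured short-circuit photocurrent to the incident photon flux. A small worked example in Python, with hypothetical illumination and photocurrent values chosen to land near the 6.7% figure quoted above:

      # external quantum yield from short-circuit photocurrent under monochromatic light
      H = 6.626e-34   # Planck constant, J s
      C = 2.998e8     # speed of light, m/s
      Q = 1.602e-19   # elementary charge, C

      wavelength = 385e-9          # m
      power_density = 1.0e1        # W/m^2, hypothetical illumination at the sample
      photocurrent = 0.21          # A/m^2, hypothetical measured short-circuit value

      photon_flux = power_density * wavelength / (H * C)   # photons / m^2 s
      electron_flux = photocurrent / Q                     # electrons / m^2 s
      print(f"EQE = {100 * electron_flux / photon_flux:.1f} %")   # ~6.8 %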

  17. Comparative Transcriptomic Analysis in Paddy Rice under Storage and Identification of Differentially Regulated Genes in Response to High Temperature and Humidity.

    PubMed

    Zhao, Chanjuan; Xie, Junqi; Li, Li; Cao, Chongjiang

    2017-09-20

    The transcriptomes of paddy rice in response to high temperature and humidity were studied using a high-throughput RNA sequencing approach. Effects of high temperature and humidity on the sucrose and starch contents and α/β-amylase activity were also investigated. Results showed that 6876 differentially expressed genes (DEGs) were identified in paddy rice under high temperature and humidity storage. Importantly, 12 DEGs that were downregulated fell into the "starch and sucrose pathway". The quantitative real-time polymerase chain reaction assays indicated that expression of these 12 DEGs was significantly decreased, which was in parallel with the reduced level of enzyme activities and the contents of sucrose and starch in paddy rice stored at high temperature and humidity conditions compared to the control group. Taken together, high temperature and humidity influence the quality of paddy rice at least partially by downregulating the expression of genes encoding sucrose transferases and hydrolases, which might result in the decrease of starch and sucrose contents.

  18. Photonic crystal biosensor microplates with integrated fluid networks for high throughput applications in drug discovery

    NASA Astrophysics Data System (ADS)

    Choi, Charles J.; Chan, Leo L.; Pineda, Maria F.; Cunningham, Brian T.

    2007-09-01

    Assays used in pharmaceutical research require a system that can not only detect biochemical interactions with high sensitivity, but that can also perform many measurements in parallel while consuming low volumes of reagents. While nearly all label-free biosensor transducers to date have been interfaced with a flow channel, the liquid handling system is typically aligned and bonded to the transducer for supplying analytes to only a few sensors in parallel. In this presentation, we describe a fabrication approach for photonic crystal biosensors that utilizes nanoreplica molding to produce a network of sensors that are automatically self-aligned with a microfluidic network in a single process step. The sensor/fluid network is inexpensively produced on large surface areas upon flexible plastic substrates, allowing the device to be incorporated into standard format 96-well microplates. A simple flow scheme using hydrostatic pressure applied through a single control point enables immobilization of capture ligands upon a large number of sensors with 220 nL of reagent, and subsequent exposure of the sensors to test samples. A high resolution imaging detection instrument is capable of monitoring the binding within parallel channels at rates compatible with determining kinetic binding constants between the immobilized ligands and the analytes. The first implementation of this system is capable of monitoring the kinetic interactions of 11 flow channels at once, and a total of 88 channels within an integrated biosensor microplate in rapid succession. The system was initially tested to characterize the interaction between sets of proteins with known binding behavior.

  19. An in-line spectrophotometer on a centrifugal microfluidic platform for real-time protein determination and calibration.

    PubMed

    Ding, Zhaoxiong; Zhang, Dongying; Wang, Guanghui; Tang, Minghui; Dong, Yumin; Zhang, Yixin; Ho, Ho-Pui; Zhang, Xuping

    2016-09-21

    In this paper, an in-line, low-cost, miniature and portable spectrophotometric detection system is presented and used for fast protein determination and calibration in centrifugal microfluidics. Our portable detection system is configured with paired emitter and detector diodes (PEDD), where the light beam between both LEDs is collimated with enhanced system tolerance. It is the first time that a physical model of PEDD is clearly presented, which could be modelled as a photosensitive RC oscillator. A portable centrifugal microfluidic system that contains a wireless port in real-time communication with a smartphone has been built to show that PEDD is an effective strategy for conducting rapid protein bioassays with detection performance comparable to that of a UV-vis spectrophotometer. The choice of centrifugal microfluidics offers the unique benefits of highly parallel fluidic actuation at high accuracy while there is no need for a pump, as inertial forces are present within the entire spinning disc and accurately controlled by varying the spinning speed. As a demonstration experiment, we have conducted the Bradford assay for bovine serum albumin (BSA) concentration calibration from 0 to 2 mg mL⁻¹. Moreover, a novel centrifugal disc with a spiral microchannel is proposed for automatic distribution and metering of the sample to all the parallel reactions at one time. The reported lab-on-a-disc scheme with PEDD detection may offer a solution for high-throughput assays, such as protein density calibration, drug screening and drug solubility measurement that require the handling of a large number of reactions in parallel.
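
    The Bradford calibration described above amounts to a linear fit of absorbance against known BSA concentrations, inverted for unknowns. A minimal numpy sketch with hypothetical calibration points:

      import numpy as np

      # hypothetical Bradford calibration: absorbance at 595 nm vs BSA (mg/mL)
      conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
      absorbance = np.array([0.05, 0.17, 0.29, 0.52, 0.74, 0.95])

      slope, intercept = np.polyfit(conc, absorbance, 1)   # linear fit A = m*c + b

      def protein_conc(a):
          """Invert the calibration curve for an unknown sample."""
          return (a - intercept) / slope

      print(f"A = {slope:.3f}*c + {intercept:.3f}")
      print(f"unknown with A=0.40 -> {protein_conc(0.40):.2f} mg/mL")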

  20. High Throughput PBTK: Open-Source Data and Tools for ...

    EPA Pesticide Factsheets

    Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy

  1. Accelerating list management for MPI.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemmert, K. Scott; Rodrigues, Arun F.; Underwood, Keith Douglas

    2005-07-01

    The latency and throughput of MPI messages are critically important to a range of parallel scientific applications. In many modern networks, both of these performance characteristics are largely driven by the performance of a processor on the network interface. Because of the semantics of MPI, this embedded processor is forced to traverse a linked list of posted receives each time a message is received. As this list grows long, the latency of message reception grows and the throughput of MPI messages decreases. This paper presents a novel hardware feature to handle list management functions on a network interface. By moving functions such as list insertion, list traversal, and list deletion to the hardware unit, latencies are decreased by up to 20% in the zero length queue case with dramatic improvements in the presence of long queues. Similarly, the throughput is increased by up to 10% in the zero length queue case and by nearly 100% in the presence of queues of 30 messages.
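
    The list traversal that motivates this hardware offload is straightforward to sketch. The Python below models a posted-receive queue matched by (source, tag) with wildcards; every incoming message pays for a linear scan, which is why long queues hurt latency. The queue contents are hypothetical.

      # each posted receive is matched by (source, tag); MPI_ANY_* act as wildcards
      ANY = None
      posted = [(0, 5), (ANY, 7), (3, ANY)]   # hypothetical posted-receive queue, in post order

      def match(queue, source, tag):
          """Linear traversal the NIC processor must do for every incoming message."""
          for i, (src, tg) in enumerate(queue):
              if (src is ANY or src == source) and (tg is ANY or tg == tag):
                  return queue.pop(i)          # first match wins, per MPI ordering rules
          return None                          # no match: message joins the unexpected queue

      print(match(posted, source=3, tag=7))    # matches the (ANY, 7) entry
      print(match(posted, source=3, tag=9))    # matches the (3, ANY) entry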

  2. CUDAMPF: a multi-tiered parallel framework for accelerating protein sequence search in HMMER on CUDA-enabled GPU.

    PubMed

    Jiang, Hanyu; Ganesan, Narayan

    2016-02-27

    The HMMER software suite is widely used for analysis of homologous protein and nucleotide sequences with high sensitivity. The latest version of hmmsearch in HMMER 3.x utilizes a heuristic pipeline which consists of the MSV/SSV (Multiple/Single ungapped Segment Viterbi) stage, the P7Viterbi stage and the Forward scoring stage to accelerate homology detection. Since the latest version is highly optimized for performance on modern multi-core CPUs with SSE capabilities, only a few acceleration attempts report speedup. However, the most compute-intensive tasks within the pipeline (viz., the MSV/SSV and P7Viterbi stages) still stand to benefit from the computational capabilities of massively parallel processors. A Multi-Tiered Parallel Framework (CUDAMPF), implemented on CUDA-enabled GPUs and presented here, offers finer-grained parallelism for the MSV/SSV and Viterbi algorithms. We couple the SIMT (Single Instruction Multiple Threads) mechanism with SIMD (Single Instruction Multiple Data) video instructions and warp-synchronism to achieve high-throughput processing and eliminate thread idling. We also propose a hardware-aware optimal allocation scheme of scarce resources like on-chip memory and caches in order to boost performance and scalability of CUDAMPF. In addition, runtime compilation via NVRTC, available with CUDA 7.0, is incorporated into the presented framework; it not only helps unroll the innermost loop to yield up to 2- to 3-fold speedup over static compilation but also enables dynamic loading and switching of kernels depending on the query model size, in order to achieve optimal performance. CUDAMPF is designed as a hardware-aware parallel framework for accelerating computational hotspots within the hmmsearch pipeline as well as other sequence alignment applications. It achieves significant speedup by exploiting hierarchical parallelism on a single GPU and takes full advantage of limited resources based on their own performance features. In addition to exceeding the performance of other acceleration attempts, comprehensive evaluations against high-end CPUs (Intel i5, i7 and Xeon) show that CUDAMPF yields up to 440 GCUPS for SSV, 277 GCUPS for MSV and 14.3 GCUPS for P7Viterbi, all with 100% accuracy, which translates to a maximum speedup of 37.5, 23.1 and 11.6-fold for MSV, SSV and P7Viterbi, respectively. The source code is available at https://github.com/Super-Hippo/CUDAMPF.
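
    As a scalar sketch of the ungapped-segment scoring idea that the MSV/SSV stages vectorize (not the HMMER implementation itself, and with toy match/mismatch scores standing in for profile log-odds):

      def ssv_like(query, target, match=2, mismatch=-3):
          """Best ungapped segment score over all diagonals (the SSV/MSV idea)."""
          best = 0
          for d in range(-len(query) + 1, len(target)):       # every diagonal
              run = 0
              for i in range(len(query)):
                  j = i + d
                  if 0 <= j < len(target):
                      run = max(0, run + (match if query[i] == target[j] else mismatch))
                      best = max(best, run)
          return best

      print(ssv_like("HMMER", "XXHMMAERXX"))   # prints 6 with these toy scores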

  3. Real science at the petascale.

    PubMed

    Saksena, Radhika S; Boghosian, Bruce; Fazendeiro, Luis; Kenway, Owain A; Manos, Steven; Mazzeo, Marco D; Sadiq, S Kashif; Suter, James L; Wright, David; Coveney, Peter V

    2009-06-28

    We describe computational science research that uses petascale resources to achieve scientific results at unprecedented scales and resolution. The applications span a wide range of domains, from investigation of fundamental problems in turbulence through computational materials science research to biomedical applications at the forefront of HIV/AIDS research and cerebrovascular haemodynamics. This work was mainly performed on the US TeraGrid 'petascale' resource, Ranger, at Texas Advanced Computing Center, in the first half of 2008 when it was the largest computing system in the world available for open scientific research. We have sought to use this petascale supercomputer optimally across application domains and scales, exploiting the excellent parallel scaling performance found on up to at least 32 768 cores for certain of our codes in the so-called 'capability computing' category as well as high-throughput intermediate-scale jobs for ensemble simulations in the 32-512 core range. Furthermore, this activity provides evidence that conventional parallel programming with MPI should be successful at the petascale in the short to medium term. We also report on the parallel performance of some of our codes on up to 65 636 cores on the IBM Blue Gene/P system at the Argonne Leadership Computing Facility, which has recently been named the fastest supercomputer in the world for open science.

  4. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  5. Flexbar 3.0 - SIMD and multicore parallelization.

    PubMed

    Roehr, Johannes T; Dieterich, Christoph; Reinert, Knut

    2017-09-15

    High-throughput sequencing machines can process many samples in a single run. For Illumina systems, sequencing reads are barcoded with an additional DNA tag that is contained in the respective sequencing adapters. The recognition of barcode and adapter sequences is hence commonly needed for the analysis of next-generation sequencing data. Flexbar performs demultiplexing based on barcodes and adapter trimming for such data. The massive amounts of data generated on modern sequencing machines demand that this preprocessing is done as efficiently as possible. We present Flexbar 3.0, the successor of the popular program Flexbar. It employs now twofold parallelism: multi-threading and additionally SIMD vectorization. Both types of parallelism are used to speed-up the computation of pair-wise sequence alignments, which are used for the detection of barcodes and adapters. Furthermore, new features were included to cover a wide range of applications. We evaluated the performance of Flexbar based on a simulated sequencing dataset. Our program outcompetes other tools in terms of speed and is among the best tools in the presented quality benchmark. https://github.com/seqan/flexbar. johannes.roehr@fu-berlin.de or knut.reinert@fu-berlin.de. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
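
    Adapter trimming of the kind Flexbar parallelizes can be sketched as locating the leftmost 3' overlap with the adapter and cutting there. The exact-match Python below is a simplification; the real tool scores pair-wise alignments that tolerate mismatches.

      def trim_adapter(read, adapter, min_overlap=3):
          """Remove the best adapter prefix found at the 3' end of the read."""
          for start in range(len(read) - min_overlap + 1):     # leftmost candidate first
              overlap = read[start:start + len(adapter)]
              if adapter.startswith(overlap):                  # exact-match sketch; real
                  return read[:start]                          # tools allow mismatches
          return read

      print(trim_adapter("ACGTACGTAGATCGGAAG", "AGATCGGAAGAGC"))  # -> ACGTACGT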

  6. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  7. Parallel Mitogenome Sequencing Alleviates Random Rooting Effect in Phylogeography.

    PubMed

    Hirase, Shotaro; Takeshima, Hirohiko; Nishida, Mutsumi; Iwasaki, Wataru

    2016-04-28

    Reliably rooted phylogenetic trees play irreplaceable roles in clarifying the diversification patterns of species and populations. However, such trees are often unavailable in phylogeographic studies, particularly when the focus is on rapidly expanded populations that exhibit star-like trees. A fundamental bottleneck is known as the random rooting effect, where a distant outgroup tends to root an unrooted tree "randomly." We investigated whether parallel mitochondrial genome (mitogenome) sequencing alleviates this effect in phylogeography using a case study on the Sea of Japan lineage of the intertidal goby Chaenogobius annularis. Eighty-three C. annularis individuals were collected and their mitogenomes were determined by high-throughput and low-cost parallel sequencing. Phylogenetic analysis of these mitogenome sequences was conducted to root the Sea of Japan lineage, which has a star-like phylogeny and had not been reliably rooted. The topologies of the bootstrap trees were investigated to determine whether the use of mitogenomes alleviated the random rooting effect. The mitogenome data successfully rooted the Sea of Japan lineage by alleviating the effect, which hindered phylogenetic analysis that used specific gene sequences. The reliable rooting of the lineage led to the discovery of a novel, northern lineage that expanded during an interglacial period with high bootstrap support. Furthermore, the finding of this lineage suggested the existence of additional glacial refugia and provided a new recent calibration point that revised the divergence time estimation between the Sea of Japan and Pacific Ocean lineages. This study illustrates the effectiveness of parallel mitogenome sequencing for solving the random rooting problem in phylogeographic studies. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  8. Highly specific detection of genetic modification events using an enzyme-linked probe hybridization chip.

    PubMed

    Zhang, M Z; Zhang, X F; Chen, X M; Chen, X; Wu, S; Xu, L L

    2015-08-10

    The enzyme-linked probe hybridization chip utilizes a method based on ligase-hybridizing probe chip technology, with the principle of using thio-primers for protection against enzyme digestion, and using lambda DNA exonuclease to cut multiple PCR products obtained from the sample being tested into single-strand chains for hybridization. The 5'-end amino-labeled probe was fixed onto the aldehyde chip, and hybridized with the single-stranded PCR product, followed by addition of a fluorescent-modified probe that was then enzymatically linked with the adjacent, substrate-bound probe in order to achieve highly specific, parallel, and high-throughput detection. Specificity and sensitivity testing demonstrated that enzyme-linked probe hybridization technology could be applied to the specific detection of eight genetic modification events at the same time, with a sensitivity reaching 0.1% and the achievement of accurate, efficient, and stable results.

  9. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.

  10. Exploiting Parallel R in the Cloud with SPRINT

    PubMed Central

    Piotrowski, M.; McGilvary, G.A.; Sloan, T. M.; Mewissen, M.; Lloyd, A.D.; Forster, T.; Mitchell, L.; Ghazal, P.; Hill, J.

    2012-01-01

    Background: Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability, but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Objectives: Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language, but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon's Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world, and whether resource underutilization can improve application performance. Methods: The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various sizes on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. Results: It is possible to obtain good, scalable performance, but the level of improvement is dependent upon the nature of the algorithm. Resource underutilization can further improve the time to result. The end-user's location impacts costs due to factors such as local taxation. Conclusions: Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provide an interesting alternative and new possibilities for smaller organisations with limited funds. PMID:23223611
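
    Although SPRINT itself is an R package, the parallel permutation testing it provides can be illustrated in a language-neutral way. A Python sketch that splits permutations across a process pool, with synthetic two-group data (group sizes, seeds and permutation count are arbitrary):

      import numpy as np
      from multiprocessing import Pool

      rng = np.random.default_rng(0)
      group_a = rng.normal(0.0, 1.0, 50)    # hypothetical expression values
      group_b = rng.normal(0.5, 1.0, 50)
      observed = group_b.mean() - group_a.mean()
      pooled = np.concatenate([group_a, group_b])

      def perm_stat(seed):
          r = np.random.default_rng(seed)
          shuffled = r.permutation(pooled)
          return shuffled[50:].mean() - shuffled[:50].mean()

      if __name__ == "__main__":
          with Pool(4) as pool:                       # permutations split across workers
              null = pool.map(perm_stat, range(10000))
          p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
          print(f"p = {p:.4f}")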

  11. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
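
    The replicated-reconstruction-object idea can be sketched as follows (Python with NumPy; the backprojection operator is a stand-in and the sizes are toy values): each worker accumulates into its own replica, so no synchronization is needed during the parallel phase, and a final reduction merges the replicas.

      import numpy as np
      from multiprocessing import Pool

      def backproject_chunk(chunk):
          # each worker writes only to its private replica, so no locks are needed
          replica = np.zeros((64, 64))
          for projection in chunk:
              replica += projection  # stand-in for the real backprojection operator
          return replica

      if __name__ == "__main__":
          projections = [np.random.rand(64, 64) for _ in range(128)]
          chunks = [projections[i::4] for i in range(4)]  # one chunk per worker
          with Pool(4) as pool:
              replicas = pool.map(backproject_chunk, chunks)
          reconstruction = np.sum(replicas, axis=0)  # the reduction merges replicas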

  12. Multi-target parallel processing approach for gene-to-structure determination of the influenza polymerase PB2 subunit.

    PubMed

    Armour, Brianna L; Barnes, Steve R; Moen, Spencer O; Smith, Eric; Raymond, Amy C; Fairman, James W; Stewart, Lance J; Staker, Bart L; Begley, Darren W; Edwards, Thomas E; Lorimer, Donald D

    2013-06-28

    Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year (1). Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans (2). Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains.

  13. Exploiting parallel R in the cloud with SPRINT.

    PubMed

    Piotrowski, M; McGilvary, G A; Sloan, T M; Mewissen, M; Lloyd, A D; Forster, T; Mitchell, L; Ghazal, P; Hill, J

    2013-01-01

    Advances in DNA Microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the Cloud offers an affordable way of meeting this need. Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates: setting up and running SPRINT-enabled genomic analyses on Amazon's Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world and, if resource underutilization can improve application performance. The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various size on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. It is possible to obtain good, scalable performance but the level of improvement is dependent upon the nature of the algorithm. Resource underutilization can further improve the time to result. End-user's location impacts on costs due to factors such as local taxation. Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provides an interesting alternative and provides new possibilities for smaller organisations with limited funds.

  14. High-Throughput Parallel Sequencing to Measure Fitness of Leptospira interrogans Transposon Insertion Mutants during Acute Infection

    PubMed Central

    Matsunaga, James; Haake, David A.

    2016-01-01

    Pathogenic species of Leptospira are the causative agents of leptospirosis, a zoonotic disease that causes mortality and morbidity worldwide. The understanding of the virulence mechanisms of Leptospira spp. is still at an early stage due to the limited number of genetic tools available for this microorganism. The development of random transposon mutagenesis in pathogenic strains a decade ago has contributed to the identification of several virulence factors. In this study, we used the transposon sequencing (Tn-Seq) technique, which combines transposon mutagenesis with massively parallel sequencing, to study the in vivo fitness of a pool of Leptospira interrogans mutants. We infected hamsters with a pool of 42 mutants (input pool), which included control mutants with insertions in four genes previously analyzed by virulence testing (loa22, ligB, flaA1, and lic20111) and 23 mutants with disrupted signal transduction genes. We quantified the mutants in different tissues (blood, kidney and liver) at 4 days post-challenge by high-throughput sequencing and compared the frequencies of mutants recovered from tissues to their frequencies in the input pool. Control mutants that were less fit in the Tn-Seq experiment were attenuated for virulence when tested separately in the hamster model of lethal leptospirosis. Control mutants with unaltered fitness were as virulent as the wild-type strain. We identified two mutants with the transposon inserted in the same putative adenylate/guanylate cyclase gene (lic12327) that had reduced in vivo fitness in blood, kidney and liver. Both lic12327 mutants were attenuated for virulence when tested individually in hamsters. Growth of the control mutants and lic12327 mutants in culture medium was similar to that of the wild-type strain. These results demonstrate the feasibility of screening large pools of L. interrogans transposon mutants for those with altered fitness, and potentially attenuated virulence, by transposon sequencing. PMID:27824878
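
    The underlying Tn-Seq comparison, each mutant's frequency in an output (tissue) pool versus the input pool, reduces to a log-ratio over read counts; the Python sketch below is a hypothetical minimal version, with the pseudocount and the mutant names chosen purely for illustration.

      import math

      def tn_seq_fitness(input_counts, output_counts, pseudo=1e-9):
          """Fitness per mutant: log2 of its relative frequency in the output
          pool over its relative frequency in the input pool."""
          n_in, n_out = sum(input_counts.values()), sum(output_counts.values())
          fitness = {}
          for mutant, c_in in input_counts.items():
              f_in = c_in / n_in
              f_out = output_counts.get(mutant, 0) / n_out  # mutant may be absent
              fitness[mutant] = math.log2((f_out + pseudo) / (f_in + pseudo))
          return fitness

      # an attenuated mutant drops in frequency between input and output pools
      print(tn_seq_fitness({"loa22": 500, "neutral": 500},
                           {"loa22": 50, "neutral": 950}))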

  15. nextPARS: parallel probing of RNA structures in Illumina

    PubMed Central

    Saus, Ester; Willis, Jesse R.; Pryszcz, Leszek P.; Hafez, Ahmed; Llorens, Carlos; Himmelbauer, Heinz

    2018-01-01

    RNA molecules play important roles in virtually every cellular process. These functions are often mediated through the adoption of specific structures that enable RNAs to interact with other molecules. Thus, determining the secondary structures of RNAs is central to understanding their function and evolution. In recent years several sequencing-based approaches have been developed that allow probing structural features of thousands of RNA molecules present in a sample. Here, we describe nextPARS, a novel Illumina-based implementation of in vitro parallel probing of RNA structures. Our approach achieves comparable accuracy to previous implementations, while enabling higher throughput and sample multiplexing. PMID:29358234

  16. A High-Speed Design of Montgomery Multiplier

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi

    With the increase of key lengths used in public-key cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication becomes a bottleneck. This paper proposes a high-speed design of a Montgomery multiplier. Firstly, a modified scalable high-radix Montgomery algorithm is proposed to reduce the critical path. Secondly, a high-radix clock-saving dataflow is proposed to support high-radix operation and one clock cycle of delay in the dataflow. Finally, a hardware-reused architecture is proposed to reduce the hardware cost, and a parallel radix-16 design of the data path is proposed to accelerate the speed. Using the HHNEC 0.25 μm standard cell library, the implementation results show that the total cost of the Montgomery multiplier is 130 KGates, the clock frequency is 180 MHz and the throughput of 1024-bit RSA encryption is 352 kbps. This design is suitable for use in high-speed RSA or ECC encryption/decryption. As a scalable design, it supports encryption/decryption of any key length up to the size of the on-chip memory.
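
    For reference, the arithmetic at the heart of any Montgomery multiplier is the REDC reduction; the Python sketch below shows a plain big-integer version (radix 2^r), not the paper's scalable high-radix, pipelined hardware design. It assumes an odd modulus and Python 3.8+ for pow(n, -1, R).

      def montgomery_multiply(a, b, n, r_bits):
          """Return a*b*R^-1 mod n for R = 2**r_bits, with n odd and a, b < n."""
          R = 1 << r_bits
          n_prime = (-pow(n, -1, R)) % R            # n * n_prime = -1 (mod R)
          t = a * b
          m = ((t & (R - 1)) * n_prime) & (R - 1)   # m = (t mod R) * n' mod R
          u = (t + m * n) >> r_bits                 # exact division by R
          return u - n if u >= n else u

      # multiply in Montgomery form, then convert back by multiplying with 1
      n, r_bits = 101, 8
      a_bar, b_bar = (57 << r_bits) % n, (23 << r_bits) % n
      prod_bar = montgomery_multiply(a_bar, b_bar, n, r_bits)
      assert montgomery_multiply(prod_bar, 1, n, r_bits) == (57 * 23) % n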

  17. Application of ToxCast High-Throughput Screening and ...

    EPA Pesticide Factsheets

    Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.

  18. IONAC-Lite

    NASA Technical Reports Server (NTRS)

    Torgerson, Jordan L.; Clare, Loren P.; Pang, Jackson

    2011-01-01

    The Interplanetary Overlay Networking Protocol Accelerator (IONAC) described previously in The Interplanetary Overlay Networking Protocol Accelerator (NPO-45584), NASA Tech Briefs, Vol. 32, No. 10, (October 2008) p. 106 (http://www.techbriefs.com/component/content/article/3317) provides functions that implement the Delay Tolerant Networking (DTN) bundle protocol. New missions that require high-speed downlink-only use of DTN can now be accommodated by the unidirectional IONAC-Lite to support high data rate downlink mission applications. Due to constrained energy resources, a conventional software implementation of the DTN protocol can provide only limited throughput for any given reasonable energy consumption rate. The IONAC-Lite DTN Protocol Accelerator is able to reduce this energy consumption by an order of magnitude and increase the throughput capability by two orders of magnitude. In addition, a conventional DTN implementation requires a bundle database with a considerable storage requirement. In very high downlink data-rate missions such as near-Earth radar science missions, storage space utilization needs to be maximized for science data and minimized for communications protocol-related storage needs. The IONAC-Lite DTN Protocol Accelerator is implemented in a reconfigurable hardware device to accomplish exactly what's needed for high-throughput DTN downlink-only scenarios. The following are salient features of the IONAC-Lite implementation: an implementation of the Bundle Protocol for an environment that requires a very high bundle egress data rate; minimized interaction with the C&DH (command and data handling) processor and temporary storage, since the C&DH subsystem is also expected to be very constrained; a fully pipelined design so that a bundle-processing database is not required; a lookup-table-based approach that eliminates the multi-pass processing requirement imposed by the Bundle Protocol header's length field structure and the SDNV (self-delimiting numeric value) data field formatting; an 8-bit parallel datapath to support high data-rate missions; and a reduced-resource-utilization implementation for missions that do not require custody transfer features. There was no known implementation of the DTN protocol in a field programmable gate array (FPGA) device prior to the current implementation. The combination of energy and performance optimization that embodies this design makes the work novel.
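
    The SDNV format mentioned above is what forces multi-pass parsing in a naive implementation: a field's length is unknown until its terminating byte is seen. A minimal Python decoder (the function name is ours) illustrates the 7-bits-per-byte, continuation-bit scheme used by the Bundle Protocol.

      def decode_sdnv(buf, offset=0):
          """Decode one self-delimiting numeric value: the high bit is set on
          every byte except the last. Returns (value, next_offset)."""
          value = 0
          while True:
              byte = buf[offset]
              offset += 1
              value = (value << 7) | (byte & 0x7F)
              if not byte & 0x80:      # continuation bit clear: final byte
                  return value, offset

      assert decode_sdnv(bytes([0x81, 0x00])) == (128, 2)  # 128 encodes as 81 00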

  19. High-rate serial interconnections for embedded and distributed systems with power and resource constraints

    NASA Astrophysics Data System (ADS)

    Sheynin, Yuriy; Shutenko, Felix; Suvorova, Elena; Yablokov, Evgenej

    2008-04-01

    High rate interconnections are important subsystems in modern data processing and control systems of many classes. They are especially important in prospective embedded and on-board systems, which are typically multicomponent systems with parallel or distributed architectures [1]. Modular architecture systems of previous generations were based on parallel busses that were widely used and standardised: VME, PCI, CompactPCI, etc. Bus evolution proceeded by improving bus protocol efficiency (burst transactions, split transactions, etc.) and increasing operating frequencies. However, due to the multi-drop nature of busses and multi-wire skew problems, parallel bus speedup became more and more limited. For embedded and on-board systems, an additional reason for this trend was the weight, size and power constraints on an interconnection and its components. Parallel interfaces have become technologically more challenging as their respective clock frequencies have increased to keep pace with the bandwidth requirements of their attached storage devices. Since each interface uses a data clock to gate and validate the parallel data (which is normally 8 bits or 16 bits wide), the clock frequency need only be equivalent to the byte rate or word rate being transmitted. In other words, for a given transmission frequency, the wider the data bus, the slower the clock. As the clock frequency increases, more high frequency energy is available in each of the data lines, and a portion of this energy is dissipated in radiation. Each data line not only transmits this energy but also receives some from its neighbours. This form of mutual interference is commonly called "cross-talk," and the signal distortion it produces can become another major contributor to loss of data integrity unless compensated by appropriate cable designs. Other transmission problems such as frequency-dependent attenuation and signal reflections, while also applicable to serial interfaces, are more troublesome in parallel interfaces due to the number of additional cable conductors involved. In order to compensate for these drawbacks, higher quality cables, shorter cable runs and fewer devices on the bus have been the norm. Finally, the physical bulk of the parallel cables makes them more difficult to route inside an enclosure, hinders cooling airflow and is incompatible with the trend toward smaller form-factor devices. Parallel busses worked in systems during the past 20 years, but the accumulated problems dictate the need for change, and the technology is available to spur the transition. The general trend in high-rate interconnections turned from parallel bussing to scalable interconnections with a network architecture and high-rate point-to-point links. Analysis showed that data links with serial information transfer could achieve higher throughput and efficiency, and this was confirmed in various research and practical designs. Serial interfaces offer an improvement over older parallel interfaces: better performance, better scalability, and also better reliability, as parallel interfaces are at the limits of the speeds at which they can transfer data reliably. The trend was implemented in the evolution of major standards families: e.g. from PCI/PCI-X parallel bussing to the PCIExpress interconnection architecture with serial lines, and from the CompactPCI parallel bus to the ATCA (Advanced Telecommunications Computing Architecture) specification with serial links and network topologies, etc.
In this article we consider a general set of characteristics and features of serial interconnections and give a brief overview of serial interconnection specifications. In more detail, we present the SpaceWire interconnection technology. Having been developed for space on-board applications, SpaceWire has important features and characteristics that make it a prospective interconnection for a wide range of embedded systems.

  20. The high throughput biomedicine unit at the institute for molecular medicine Finland: high throughput screening meets precision medicine.

    PubMed

    Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister

    2014-05-01

    The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland FIMM was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high-throughput screening. The initial focus of the unit was multiwell-plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell-plate-based assays with a wide range of readout possibilities and a focus on ultraminiaturization to allow affordable screening for academic users. In addition to high-throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high-throughput applications for other types of research, such as genomics, sequencing and biobanking operations. Importantly, given the translational research goals at FIMM, an increasing part of the operations at the HTB unit is focused on high-throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.

  1. High Throughput Screening For Hazard and Risk of Environmental Contaminants

    EPA Science Inventory

    High throughput toxicity testing provides detailed mechanistic information on the concentration response of environmental contaminants in numerous potential toxicity pathways. High throughput screening (HTS) has several key advantages: (1) expense orders of magnitude less than an...

  2. Increasing the reach of forensic genetics with massively parallel sequencing.

    PubMed

    Budowle, Bruce; Schmedes, Sarah E; Wendt, Frank R

    2017-09-01

    The field of forensic genetics has made great strides in the analysis of biological evidence related to criminal and civil matters. Moreover, the discipline has set a standard of performance and quality in the forensic sciences. The advent of massively parallel sequencing will allow the field to expand its capabilities substantially. This review describes the salient features of massively parallel sequencing and how it can impact forensic genetics. The features of this technology offer an increased number and greater variety of genetic markers that can be analyzed, higher sample throughput, and the capability of targeting different organisms, all with one unifying methodology. While there are many applications, three are described where massively parallel sequencing will have immediate impact: molecular autopsy, microbial forensics and differentiation of monozygotic twins. The intent of this review is to expose the forensic science community to the potential enhancements that have arrived or will arrive soon, and to demonstrate the continued expansion of the field of forensic genetics and its service in the investigation of legal matters.

  3. Parallel compression/decompression-based datapath architecture for multibeam mask writers

    NASA Astrophysics Data System (ADS)

    Chaudhary, Narendra; Savari, Serap A.

    2017-06-01

    Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of the multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's Law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at the write time, we propose an alternate datapath architecture partly motivated by multibeam direct-write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.
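
    The abstract does not reproduce the proposed algorithm, so purely as a hypothetical illustration of why sparse grayscale mask rows compress well, here is a run-length scheme in Python; the paper's parallel compression/decompression design is more sophisticated. Because rows are encoded independently, many rows can be decoded in parallel, which is the property that saves clock cycles at write time.

      def rle_encode(row):
          """Encode a row of pixel values as (run length, value) pairs; long
          constant runs in sparse mask data collapse to single pairs."""
          runs, i = [], 0
          while i < len(row):
              j = i
              while j < len(row) and row[j] == row[i]:
                  j += 1
              runs.append((j - i, row[i]))
              i = j
          return runs

      def rle_decode(runs):
          out = []
          for length, value in runs:
              out.extend([value] * length)
          return out

      row = [0] * 60 + [3, 3, 7] + [0] * 37
      assert rle_decode(rle_encode(row)) == row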

  4. Parallel compression/decompression-based datapath architecture for multibeam mask writers

    NASA Astrophysics Data System (ADS)

    Chaudhary, Narendra; Savari, Serap A.

    2017-10-01

    Multibeam electron beam systems will be used in the future for mask writing and for complementary lithography. The major challenges of the multibeam systems are in meeting throughput requirements and in handling the large data volumes associated with writing grayscale data on the wafer. In terms of future communications and computational requirements, Amdahl's law suggests that a simple increase of computation power and parallelism may not be a sustainable solution. We propose a parallel data compression algorithm to exploit the sparsity of mask data and a grayscale video-like representation of data. To improve the communication and computational efficiency of these systems at the write time, we propose an alternate datapath architecture partly motivated by multibeam direct-write lithography and partly motivated by the circuit testing literature, where parallel decompression reduces clock cycles. We explain a deflection plate architecture inspired by NuFlare Technology's multibeam mask writing system and how our datapath architecture can be easily added to it to improve performance.

  5. An automated workflow for enhancing microbial bioprocess optimization on a novel microbioreactor platform

    PubMed Central

    2012-01-01

    Background High-throughput methods are widely used for strain screening, effectively resulting in binary information regarding high or low productivity. Nevertheless, achieving quantitative and scalable parameters for fast bioprocess development is much more challenging, especially for heterologous protein production. Here, the nature of the foreign protein makes it impossible to predict, for example, the best expression construct, secretion signal peptide, inducer concentration, induction time, temperature and substrate feed rate in fed-batch operation, to name only a few. Therefore, a large number of systematic experiments is necessary to elucidate the best conditions for heterologous expression of each new protein of interest. Results To increase the throughput in bioprocess development, we used a microtiter-plate-based cultivation system (Biolector) which was fully integrated into a liquid-handling platform enclosed in laminar airflow housing. This automated cultivation platform was used for optimization of the secretory production of a cutinase from Fusarium solani pisi with Corynebacterium glutamicum. The online monitoring of biomass, dissolved oxygen and pH in each of the microtiter plate wells makes it possible to trigger sampling or dosing events with the pipetting robot, enabling reliable selection of the best-performing cutinase producers. In addition, further automated methods such as media optimization and induction profiling were developed and validated. All biological and bioprocess parameters were optimized exclusively at microtiter plate scale, and the results scaled perfectly to 1 L and 20 L stirred-tank bioreactor scale. Conclusions The optimization of heterologous protein expression in microbial systems currently requires extensive testing of biological and bioprocess engineering parameters. This can be efficiently boosted by using a microtiter plate cultivation setup embedded into a liquid-handling system, providing more throughput through parallelization and automation. Due to improved statistics from replicate cultivations, automated downstream analysis, and scalable process information, this setup has superior performance compared with standard microtiter plate cultivation. PMID:23113930

  6. High Throughput Transcriptomics: From screening to pathways

    EPA Science Inventory

    The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...

  7. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and a 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirements on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707

  8. Quantitative description on structure-property relationships of Li-ion battery materials for high-throughput computations

    NASA Astrophysics Data System (ADS)

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-12-01

    Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surfaces and interfaces, doping and metal mixture, and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed for obtaining high-performance battery materials.

  9. Quantitative description on structure-property relationships of Li-ion battery materials for high-throughput computations.

    PubMed

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-01-01

    Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surfaces and interfaces, doping and metal mixture, and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed for obtaining high-performance battery materials.

  10. Detection system of capillary array electrophoresis microchip based on optical fiber

    NASA Astrophysics Data System (ADS)

    Yang, Xiaobo; Bai, Haiming; Yan, Weiping

    2009-11-01

    To meet the demands of post-genomic-era studies and large-scale parallel detection for epidemic disease diagnosis and drug screening, high-throughput microfluidic detection systems are urgently needed. A scanning laser-induced fluorescence detection system based on optical fiber was established, using a green laser-diode double-pumped solid-state laser as the excitation source. It includes a laser-induced fluorescence detection subsystem, a capillary array electrophoresis microchip, a channel identification unit and a fluorescent signal processing subsystem. A V-shaped detection probe composed of two optical fibers, one transmitting the excitation light and the other collecting the induced fluorescence, was constructed. Parallel four-channel signal analysis of capillary electrophoresis was performed on this system using Rhodamine B as the sample. Discrimination between different samples and their separation were achieved with the constructed detection system. The lowest detected concentration is 1×10⁻⁵ mol/L for Rhodamine B. The results show that the detection system possesses advantages such as compact structure, good stability and high sensitivity, which are beneficial to the microminiaturization and integration of capillary array electrophoresis chips.

  11. Fine grained event processing on HPCs with the ATLAS Yoda system

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre

    2015-12-01

    High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
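
    A master-client event dispatch loop of the kind Yoda implements can be sketched with mpi4py; everything below (the tags, the event-range granularity, the placeholder processing step) is illustrative rather than taken from the ATLAS code. Run with, e.g., mpiexec -n 8 python sketch.py.

      from mpi4py import MPI

      TAG_WORK, TAG_DONE = 1, 2
      comm = MPI.COMM_WORLD

      if comm.Get_rank() == 0:
          # master: hand out event ranges one at a time as workers free up
          events = [(i, i + 100) for i in range(0, 10000, 100)]
          status = MPI.Status()
          active = comm.Get_size() - 1
          while active:
              comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
              if events:
                  comm.send(events.pop(), dest=status.Get_source(), tag=TAG_WORK)
              else:
                  comm.send(None, dest=status.Get_source(), tag=TAG_WORK)
                  active -= 1
      else:
          # worker: announce readiness, then process assigned ranges until told to stop
          comm.send(None, dest=0, tag=TAG_DONE)
          while True:
              task = comm.recv(source=0, tag=TAG_WORK)
              if task is None:
                  break
              first, last = task
              # ... process events [first, last) and stream outputs off-node ...
              comm.send(None, dest=0, tag=TAG_DONE)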

  12. Design and evaluation of an architecture for a digital signal processor for instrumentation applications

    NASA Astrophysics Data System (ADS)

    Fellman, Ronald D.; Kaneshiro, Ronald T.; Konstantinides, Konstantinos

    1990-03-01

    The authors present the design and evaluation of an architecture for a monolithic, programmable, floating-point digital signal processor (DSP) for instrumentation applications. An investigation of the most commonly used algorithms in instrumentation led to a design that satisfies the requirements for high computational and I/O (input/output) throughput. In the arithmetic unit, a 16 × 16-bit multiplier and a 32-bit accumulator provide the capability for single-cycle multiply/accumulate operations, and three format adjusters automatically adjust the data format for increased accuracy and dynamic range. An on-chip I/O unit is capable of handling data block transfers through a direct memory access port and real-time data streams through a pair of parallel I/O ports. I/O operations and program execution are performed in parallel. In addition, the processor includes two data memories with independent addressing units, a microsequencer with instruction RAM, and multiplexers for internal data redirection. The authors also present the structure and implementation of a design environment suitable for the algorithmic, behavioral, and timing simulation of a complete DSP system. Various benchmarking results are reported.

  13. High Throughput Experimental Materials Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakutayev, Andriy; Perkins, John; Schwarting, Marcus

    The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).

  14. Constructing DNA Barcode Sets Based on Particle Swarm Optimization.

    PubMed

    Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi

    2018-01-01

    Following the completion of the Human Genome Project, a large amount of high-throughput bio-data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes attached at the beginning or end of sequencing reads are used to assign each read to the sample it came from. Constructing DNA barcode sets provides the candidate barcodes for this application. To increase the accuracy of DNA barcode sets, this paper modifies a particle swarm optimization (PSO) algorithm and uses it to construct the sets. Compared with extant results, some lower bounds of DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.
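
    The feasibility constraint any such construction must maintain is a minimum pairwise Hamming distance between barcodes (commonly combined with GC-content and run-length constraints, omitted here); the short Python check below, with names of our choosing, shows the property against which a candidate set can be scored.

      from itertools import combinations

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def is_valid_barcode_set(barcodes, d_min):
          """Every pair of equal-length barcodes must differ in at least d_min
          positions, so up to (d_min - 1) // 2 sequencing errors per barcode
          can still be corrected when demultiplexing reads."""
          return all(hamming(a, b) >= d_min for a, b in combinations(barcodes, 2))

      assert is_valid_barcode_set(["ACGT", "TGCA", "CATG"], d_min=3)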

  15. A Parallel Spectroscopic Method for Examining Dynamic Phenomena on the Millisecond Time Scale

    PubMed Central

    Snively, Christopher M.; Chase, D. Bruce; Rabolt, John F.

    2009-01-01

    An infrared spectroscopic technique based on planar array infrared (PAIR) spectroscopy has been developed that allows the acquisition of spectra from multiple samples simultaneously. Using this technique, it is possible to acquire spectra over a spectral range of 950–1900 cm−1 with a temporal resolution of 2.2 ms. The performance of this system was demonstrated by determining the shear-induced orientational response of several low molecular weight liquid crystals. Five different liquid crystals were examined in combination with five different alignment layers, and both primary and secondary screens were demonstrated. Implementation of this high throughput PAIR technique resulted in a reduction in acquisition time as compared to both step-scan and ultra-rapid-scanning FTIR spectroscopy. PMID:19239197

  16. Survey of MapReduce frame operation in bioinformatics.

    PubMed

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics.
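
    As a minimal single-process illustration of the map/reduce decomposition such tools apply to sequence data, consider k-mer counting in Python; a Hadoop deployment would run the map step once per input split and merge counts in the shuffle/reduce phase. The function names and toy reads are ours.

      from collections import Counter
      from functools import reduce

      def map_kmers(read, k=4):
          # map step: emit (k-mer, count) pairs for one sequencing read
          return Counter(read[i:i + k] for i in range(len(read) - k + 1))

      def reduce_counts(acc, partial):
          # reduce step: merge partial counts from independent mappers
          acc.update(partial)  # Counter.update adds counts rather than replacing
          return acc

      reads = ["ACGTACGT", "CGTACGTA"]
      totals = reduce(reduce_counts, (map_kmers(r) for r in reads), Counter())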

  17. Deep sequencing methods for protein engineering and design.

    PubMed

    Wrenbeck, Emily E; Faber, Matthew S; Whitehead, Timothy A

    2017-08-01

    The advent of next-generation sequencing (NGS) has revolutionized protein science, and the development of complementary methods enabling NGS-driven protein engineering has followed. In general, these experiments address the functional consequences of thousands of protein variants in a massively parallel manner using genotype-phenotype linked high-throughput functional screens followed by DNA counting via deep sequencing. We highlight the use of information-rich datasets to engineer protein molecular recognition. Examples include the creation of multiple dual-affinity Fabs targeting structurally dissimilar epitopes and engineering of a broad germline-targeted anti-HIV-1 immunogen. Additionally, we highlight the generation of enzyme fitness landscapes for conducting fundamental studies of protein behavior and evolution. We conclude with discussion of technological advances.

  18. A novel PMT test system based on waveform sampling

    NASA Astrophysics Data System (ADS)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, TDC, and scaler, a test system based on waveform sampling was constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states, from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed that runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are compared and discussed in detail.

  19. Parallel and automated library synthesis of 2-long alkyl chain benzoazoles and azole[4,5-b]pyridines under microwave irradiation.

    PubMed

    Martínez-Palou, Rafael; Zepeda, L Gerardo; Höpfl, Herbert; Montoya, Ascensión; Guzmán-Lucero, Diego J; Guzmán, Javier

    2005-01-01

    A versatile route to a 40-membered library of 2-long alkyl chain substituted benzoazoles (1 and 2) and azole[4,5-b]pyridines (3 and 4) via microwave-assisted combinatorial synthesis was developed. The reactions were carried out in both monomode and multimode microwave ovens. With the latter, all reactions were performed in a high-throughput experimental setting consisting of an 8 x 5 combinatorial library designed to synthesize 40 compounds. Each step, from the addition of reagents to the recovery of the final products, was automated. The microwave-assisted N-long chain alkylation reactions of 2-alkyl-1H-benzimidazole (1) and 2-alkyl-1H-benzimidazole[4,5-b]pyridines (3) were also studied.

  20. Machine learning for Big Data analytics in plants.

    PubMed

    Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng

    2014-12-01

    Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences.

  1. 20180311 - High Throughput Transcriptomics: From screening to pathways (SOT 2018)

    EPA Science Inventory

    The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...

  2. Evaluation of Sequencing Approaches for High-Throughput Transcriptomics - (BOSC)

    EPA Science Inventory

    Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. The generation of high-throughput global gene expression...

  3. A complementary role of multiparameter flow cytometry and high-throughput sequencing for minimal residual disease detection in chronic lymphocytic leukemia: an European Research Initiative on CLL study.

    PubMed

    Rawstron, A C; Fazi, C; Agathangelidis, A; Villamor, N; Letestu, R; Nomdedeu, J; Palacio, C; Stehlikova, O; Kreuzer, K-A; Liptrot, S; O'Brien, D; de Tute, R M; Marinov, I; Hauwel, M; Spacek, M; Dobber, J; Kater, A P; Gambell, P; Soosapilla, A; Lozanski, G; Brachtl, G; Lin, K; Boysen, J; Hanson, C; Jorgensen, J L; Stetler-Stevenson, M; Yuan, C; Broome, H E; Rassenti, L; Craig, F; Delgado, J; Moreno, C; Bosch, F; Egle, A; Doubek, M; Pospisilova, S; Mulligan, S; Westerman, D; Sanders, C M; Emerson, R; Robins, H S; Kirsch, I; Shanafelt, T; Pettitt, A; Kipps, T J; Wierda, W G; Cymbalista, F; Hallek, M; Hillmen, P; Montserrat, E; Ghia, P

    2016-04-01

    In chronic lymphocytic leukemia (CLL) the level of minimal residual disease (MRD) after therapy is an independent predictor of outcome. Given the increasing number of new agents being explored for CLL therapy, using MRD as a surrogate could greatly reduce the time necessary to assess their efficacy. In this European Research Initiative on CLL (ERIC) project we have identified and validated a flow-cytometric approach to reliably quantitate CLL cells to the level of 0.0010% (10⁻⁵). The assay comprises a core panel of six markers (i.e. CD19, CD20, CD5, CD43, CD79b and CD81) with a component specification independent of instrument and reagents, which can be locally re-validated using normal peripheral blood. This method is directly comparable to previous ERIC-designed assays and also provides a backbone for investigation of new markers. A parallel analysis of high-throughput sequencing using the ClonoSEQ assay showed good concordance with flow cytometry results at the 0.010% (10⁻⁴) level, the MRD threshold defined in the 2008 International Workshop on CLL guidelines, but it also provides good linearity to a detection limit of 1 in a million (10⁻⁶). The combination of both technologies would permit a highly sensitive approach to MRD detection while providing a reproducible and broadly accessible method to quantify residual disease and optimize treatment in CLL.

  4. Transcriptome-based differentiation of closely-related Miscanthus lines.

    PubMed

    Chouvarine, Philippe; Cooksey, Amanda M; McCarthy, Fiona M; Ray, David A; Baldwin, Brian S; Burgess, Shane C; Peterson, Daniel G

    2012-01-01

    Distinguishing between individuals is critical to those conducting animal/plant breeding, food safety/quality research, diagnostic and clinical testing, and evolutionary biology studies. Classical genetic identification studies are based on marker polymorphisms, but polymorphism-based techniques are time- and labor-intensive and often cannot distinguish between closely related individuals. Illumina sequencing technologies provide the detailed sequence data required for rapid and efficient differentiation of related species, lines/cultivars, and individuals in a cost-effective manner. Here we describe the use of Illumina high-throughput exome sequencing, coupled with SNP mapping, as a rapid means of distinguishing between related cultivars of the lignocellulosic bioenergy crop giant miscanthus (Miscanthus × giganteus). We provide the first exome sequence database for Miscanthus species, complete with Gene Ontology (GO) functional annotations. A SNP comparative analysis of rhizome-derived cDNA sequences was successfully utilized to distinguish three Miscanthus × giganteus cultivars from each other and from other Miscanthus species. Moreover, the resulting phylogenetic tree generated from SNP frequency data parallels the known breeding history of the plants examined. Some of the giant miscanthus plants exhibit considerable sequence divergence. Here we describe an analysis of Miscanthus in which high-throughput exome sequencing was utilized to differentiate between closely related genotypes despite the current lack of a reference genome sequence. We functionally annotated the exome sequences and provide resources to support Miscanthus systems biology. In addition, we demonstrate the use of commercial high-performance cloud computing for computational GO annotation.

  5. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    PubMed

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  6. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent CyberShake study, executed on Blue Waters. We will compare the performance of CPU and GPU versions of our large-scale parallel wave propagation code, AWP-ODC-SGT. Finally, we will discuss how these enhancements have enabled SCEC to move forward with plans to increase the CyberShake simulation frequency to 1.0 Hz.

  7. PChopper: high throughput peptide prediction for MRM/SRM transition design.

    PubMed

    Afzal, Vackar; Huang, Jeffrey T-J; Atrih, Abdel; Crowther, Daniel J

    2011-08-15

    The use of selective reaction monitoring (SRM)-based LC-MS/MS analysis for the quantification of phosphorylation stoichiometry has been increasing rapidly. At the same time, the number of sites that can be monitored in a single LC-MS/MS experiment is also increasing. The manual processes associated with running these experiments have highlighted the need for computational assistance to quickly design MRM/SRM candidates. PChopper has been developed to predict peptides that can be produced via enzymatic protein digest; this includes single-enzyme digests and combinations of enzymes. It also allows digests to be simulated in 'batch' mode and can combine information from these simulated digests to suggest the most appropriate enzyme(s) to use. PChopper also allows users to define the characteristics of their target peptides, and can automatically identify phosphorylation sites that may be of interest. Two application endpoints are available for interacting with the system; the first is a web-based graphical tool, and the second is an API endpoint based on HTTP REST. A service-oriented architecture was used to rapidly develop a system that can consume and expose several services. A graphical tool was built to provide an easy-to-follow workflow that allows scientists to quickly and easily identify the enzymes required to produce multiple peptides in parallel via enzymatic digests in a high-throughput manner.
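
    The core of any in-silico digest is a cleavage-rule simulation. As a hedged sketch (not PChopper's actual engine or API), the Python below applies the classic trypsin rule, cleaving C-terminal to K or R except before proline, with optional missed cleavages.

      import re

      def trypsin_digest(protein, missed_cleavages=0):
          """Peptides from a simulated tryptic digest: cut after K/R unless the
          next residue is P; optionally rejoin neighbouring fragments to model
          missed cleavages."""
          fragments = [f for f in re.split(r'(?<=[KR])(?!P)', protein) if f]
          peptides = set(fragments)
          for n in range(1, missed_cleavages + 1):
              for i in range(len(fragments) - n):
                  peptides.add(''.join(fragments[i:i + n + 1]))
          return peptides

      print(sorted(trypsin_digest("MKWVTFRNDPLK", missed_cleavages=1)))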

  8. Extraction of drainage networks from large terrain datasets using high throughput computing

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produce large terrain datasets (LTD). How to process and use these LTD has become a major challenge for GIS users. Extracting drainage networks, which are fundamental to hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the gigabyte scale. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage network extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. An HTC environment is employed to test the proposed methods with real datasets.
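
    The decompose-and-merge pattern can be sketched in Python; a real HTC deployment would scatter the units across a cluster (e.g. via HTCondor), and the per-unit extraction function and toy units below are placeholders of our own.

      from concurrent.futures import ProcessPoolExecutor

      def extract_drainage(unit):
          """Stand-in for serial drainage extraction on one watershed; units
          are independent because they follow natural watershed boundaries."""
          watershed_id, dem_tile = unit
          return watershed_id, sorted(dem_tile)  # placeholder per-unit network

      if __name__ == "__main__":
          units = [(1, [5, 3, 4]), (2, [9, 7, 8]), (3, [2, 1, 6])]
          with ProcessPoolExecutor() as pool:
              partial = list(pool.map(extract_drainage, units))
          # merge step: stitch per-watershed networks along shared boundaries
          drainage_network = dict(partial)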

  9. Electronic hardware design of electrical capacitance tomography systems.

    PubMed

    Saied, I; Meribout, M

    2016-06-28

    Electrical tomography techniques for process imaging are very prominent for industrial applications, such as the oil and gas industry and chemical refineries, owing to their ability to provide the flow regime of a flowing fluid at relatively high throughput. Among the various techniques, electrical capacitance tomography (ECT) is gaining popularity due to its non-invasive nature and its capability to differentiate between different phases based on their permittivity distribution. In recent years, several hardware designs have been proposed for ECT systems that improve measurement resolution, to around an attofarad (aF, 10^-18 F), or increase the number of channels, which must be large for applications that require a significant amount of data. In terms of image acquisition time, some recent systems achieve a throughput of a few hundred frames per second, while data processing takes only a few milliseconds per frame. This paper outlines the concept and main features of the most recent front-end and back-end electronic circuits dedicated to ECT systems. Multiple-excitation capacitance polling, a front-end electronic technique, shows promising results for achieving fast data acquisition in ECT systems. A highly parallel field-programmable gate array (FPGA) based architecture for a fast reconstruction algorithm is also described. This article is part of the themed issue 'Supersensing through industrial process tomography'. © 2016 The Author(s).
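
    For orientation, linear back-projection (LBP) is the classic single-pass ECT reconstruction that such FPGA architectures typically parallelize. The sketch below uses a random stand-in sensitivity matrix for a hypothetical 12-electrode sensor (66 electrode pairs), not data from the systems reviewed.

```python
# A minimal sketch of linear back-projection (LBP) for ECT; the sensitivity
# matrix and capacitance vector are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 66, 32 * 32          # 12-electrode sensor: 66 pairs
S = rng.random((n_meas, n_pix))      # sensitivity maps (one row per pair)
c = rng.random(n_meas)               # normalized capacitances in [0, 1]

# LBP: back-project each measurement through its sensitivity map, normalize.
g = (S.T @ c) / S.sum(axis=0)        # grey level per pixel
image = g.reshape(32, 32)
print(image.shape, float(image.min()), float(image.max()))
```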

  10. TARGETED CAPTURE IN EVOLUTIONARY AND ECOLOGICAL GENOMICS

    PubMed Central

    Jones, Matthew R.; Good, Jeffrey M.

    2016-01-01

    The rapid expansion of next-generation sequencing has yielded a powerful array of tools to address fundamental biological questions at a scale that was inconceivable just a few years ago. Various genome partitioning strategies to sequence select subsets of the genome have emerged as powerful alternatives to whole genome sequencing in ecological and evolutionary genomic studies. High throughput targeted capture is one such strategy that involves the parallel enrichment of pre-selected genomic regions of interest. The growing use of targeted capture demonstrates its potential power to address a range of research questions, although these approaches have yet to expand broadly across labs focused on evolutionary and ecological genomics. In part, the use of targeted capture has been hindered by the logistics of capture design and implementation in species without established reference genomes. Here we aim to 1) increase the accessibility of targeted capture to researchers working in non-model taxa by discussing capture methods that circumvent the need for a reference genome, 2) highlight the evolutionary and ecological applications where this approach is emerging as a powerful sequencing strategy, and 3) discuss the future of targeted capture and other genome partitioning approaches in light of the increasing accessibility of whole genome sequencing. Given the practical advantages and increasing feasibility of high-throughput targeted capture, we anticipate an ongoing expansion of capture-based approaches in evolutionary and ecological research, synergistic with an expansion of whole genome sequencing. PMID:26137993

  11. Design of a real-time wind turbine simulator using a custom parallel architecture

    NASA Technical Reports Server (NTRS)

    Hoffman, John A.; Gluck, R.; Sridhar, S.

    1995-01-01

    The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CUs) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CUs are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU allows several tasks to be done in each cycle, including an I/O operation and a combined multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CUs are interfaced to each other and to other portions of the simulation using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CUs can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of rigid bus architecture.

  12. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
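
    A minimal sketch of the actor-plus-connector pattern described above (illustrative only; these are not the NPSS classes): two components run concurrently and exchange per-timestep values through a connector created at execution time, whose blocking receive provides the synchronization point.

```python
# A minimal sketch of actors communicating through run-time connectors.
import threading, queue

class Connector:
    def __init__(self):
        self._q = queue.Queue()
    def send(self, value): self._q.put(value)
    def receive(self): return self._q.get()      # blocks: synchronization point

def compressor(out: Connector):
    for step in range(3):
        out.send(("flow", step))                  # produce per-timestep data

def turbine(inp: Connector):
    for _ in range(3):
        print("turbine got", inp.receive())       # waits on upstream actor

link = Connector()                                # wired up at execution time
threads = [threading.Thread(target=compressor, args=(link,)),
           threading.Thread(target=turbine, args=(link,))]
for t in threads: t.start()
for t in threads: t.join()
```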

  14. High Throughput Determination of Critical Human Dosing Parameters (SOT)

    EPA Science Inventory

    High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data int...

  15. High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)

    EPA Science Inventory

    High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e., reverse toxicokinetics or RTK) is used in order to convert high throughput in vitro toxicity screening (HTS) da...

  16. Optimization of high-throughput nanomaterial developmental toxicity testing in zebrafish embryos

    EPA Science Inventory

    Nanomaterial (NM) developmental toxicities are largely unknown. With an extensive variety of NMs available, high-throughput screening methods may be of value for initial characterization of potential hazard. We optimized a zebrafish embryo test as an in vivo high-throughput assay...

  17. Demonstration of lithography patterns using reflective e-beam direct write

    NASA Astrophysics Data System (ADS)

    Freed, Regina; Sun, Jeff; Brodie, Alan; Petric, Paul; McCord, Mark; Ronse, Kurt; Haspeslagh, Luc; Vereecke, Bart

    2011-04-01

    Traditionally, e-beam direct write lithography has been too slow for most lithography applications. E-beam direct write lithography has been used for mask writing rather than wafer processing, since maximum blur requirements limit column beam current, which drives e-beam throughput. Printing small features at a fine pitch with an e-beam tool requires a sacrifice in processing time unless one significantly increases the total number of beams on a single writing tool. Because of the uncertainty with regard to the optical lithography roadmap beyond the 22 nm technology node, the semiconductor equipment industry is in the process of designing and testing e-beam lithography tools with the potential for high volume wafer processing. For this work, we report on the development and current status of a new maskless, direct write e-beam lithography tool which has the potential for high volume lithography at and below the 22 nm technology node. A Reflective Electron Beam Lithography (REBL) tool is being developed for high throughput electron beam direct write maskless lithography. The system is targeting critical patterning steps at the 22 nm node and beyond at a capital cost equivalent to conventional lithography. Reflective Electron Beam Lithography incorporates a number of novel technologies to generate and expose lithographic patterns with a throughput and footprint comparable to current 193 nm immersion lithography systems. A patented reflective electron optic, or Digital Pattern Generator (DPG), enables the unique approach. The Digital Pattern Generator is a CMOS ASIC chip with an array of small, independently controllable lens elements (lenslets), which act as an array of electron mirrors. In this way, the REBL system is capable of generating the pattern to be written using massively parallel exposure by ~1 million beams at extremely high data rates (~1 Tbps). A rotary stage concept using a rotating platen carrying multiple wafers optimizes the writing strategy of the DPG to achieve high throughput for sparse pattern wafer levels. The lens elements of the DPG are fabricated at IMEC (Leuven, Belgium) under IMEC's CMORE program. The CMOS-fabricated DPG contains ~1,000,000 lens elements, allowing for 1,000,000 individually controllable beamlets. A single lens element consists of 5 electrodes, each of which can be set at controlled voltage levels to either absorb or reflect the electron beam. A system using a linearly movable stage and the DPG integrated into the electron optics module was used to expose patterns on device-representative wafers. Results of these exposure tests are discussed.

  18. A Barcoding Strategy Enabling Higher-Throughput Library Screening by Microscopy.

    PubMed

    Chen, Robert; Rishi, Harneet S; Potapov, Vladimir; Yamada, Masaki R; Yeh, Vincent J; Chow, Thomas; Cheung, Celia L; Jones, Austin T; Johnson, Terry D; Keating, Amy E; DeLoache, William C; Dueber, John E

    2015-11-20

    Dramatic progress has been made in the design and build phases of the design-build-test cycle for engineering cells. However, the test phase usually limits throughput, as many outputs of interest are not amenable to rapid analytical measurements. For example, phenotypes such as motility, morphology, and subcellular localization can be readily measured by microscopy, but analysis of these phenotypes is notoriously slow. To increase throughput, we developed microscopy-readable barcodes (MiCodes) composed of fluorescent proteins targeted to discernible organelles. In this system, a unique barcode can be genetically linked to each library member, making possible the parallel analysis of phenotypes of interest via microscopy. As a first demonstration, we MiCoded a set of synthetic coiled-coil leucine zipper proteins to allow an 8 × 8 matrix to be tested for specific interactions in micrographs consisting of mixed populations of cells. A novel microscopy-readable two-hybrid fluorescence localization assay for probing candidate interactions in the cytosol was also developed using a bait protein targeted to the peroxisome and a prey protein tagged with a fluorescent protein. This work introduces a generalizable, scalable platform for making microscopy amenable to higher-throughput library screening experiments, thereby coupling the power of imaging with the utility of combinatorial search paradigms.

  19. Using space and time to encode vibrotactile information: toward an estimate of the skin's achievable throughput.

    PubMed

    Novich, Scott D; Eagleman, David M

    2015-10-01

    Touch receptors in the skin can relay various forms of abstract information, such as words (Braille), haptic feedback (cell phones, game controllers, feedback for prosthetic control), and basic visual information such as edges and shape (sensory substitution devices). The skin can support such applications with ease: they are all low bandwidth and do not require fine temporal acuity. But what of high-throughput applications? We use sound-to-touch conversion as a motivating example, though others abound (e.g., vision, stock market data). In the past, vibrotactile hearing aids have demonstrated improvements in speech perception in the deaf. However, a sound-to-touch sensory substitution device that works with high efficacy and without the aid of lipreading has yet to be developed. Is this because skin simply does not have the capacity to effectively relay high-throughput streams such as sound? Or is this because the spatial and temporal properties of skin have not been leveraged to full advantage? Here, we begin to address these questions with two experiments. First, we seek to determine the best method of relaying information through the skin using an identification task on the lower back. We find that vibrotactile patterns encoding information in both space and time yield the best overall information transfer estimate. Patterns encoded in space and time or "intensity" (the coupled coding of vibration frequency and force) both far exceed the performance of purely spatially encoded patterns. Next, we determine the vibrotactile two-tacton resolution on the lower back: the distance necessary for resolving two vibrotactile patterns. We find that our vibratory motors conservatively require at least 6 cm of separation to resolve two independent tactile patterns (>80% correct), regardless of stimulus type (e.g., spatiotemporal "sweeps" versus single vibratory pulses). Six centimeters is a greater distance than the inter-motor distance used in Experiment 1 (2.5 cm), which explains the poor identification performance of spatially encoded patterns. Hence, when using an array of vibration motors, spatiotemporal sweeps can overcome the limitations of vibrotactile two-tacton resolution. The results provide the first steps toward obtaining a realistic estimate of the skin's achievable throughput, illustrating the best ways to encode data to the skin (using as many dimensions as possible) and how far such interfaces would need to be separated if using multiple arrays in parallel.
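
    The "information transfer estimate" mentioned above is conventionally the mutual information of the stimulus-response confusion matrix. The sketch below computes it for a made-up 3-alternative identification experiment.

```python
# A minimal sketch of the information-transfer (IT) estimate used in tactile
# identification studies: mutual information of the confusion matrix.
# The 3x3 matrix here is made-up data.
import numpy as np

conf = np.array([[18, 1, 1],
                 [2, 16, 2],
                 [1, 3, 16]], dtype=float)   # rows: stimuli, cols: responses

p = conf / conf.sum()
px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
nz = p > 0
it_bits = float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())
print(f"IT ≈ {it_bits:.2f} bits per stimulus")
```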

  20. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution, which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a Server-to-Server and Server-to-Client access network, giving the supercomputing center the following advantages: highest-performance Transport Level connections (up to 40 MBytes/sec effective rates); throughput that matches emerging high-performance disk technologies such as RAID, parallel head transfer devices, and software striping; support for standard network and file system applications through a sockets-based application program interface (FTP, rcp, rdump, etc.); access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  1. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, two orders of magnitude more than state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  2. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large-scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software, which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups, and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329
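
    A minimal sketch of the general idea of cross-run retention-time alignment (not TRIC's actual graph-based algorithm): fit a smooth low-order map from one run's retention times onto another's using shared anchor peptides, then transfer peak coordinates through it.

```python
# A minimal sketch of retention-time alignment between two LC-MS/MS runs.
# Anchor values are invented.
import numpy as np

anchors_a = np.array([12.1, 25.4, 40.2, 55.9, 71.3])   # RT in run A (min)
anchors_b = np.array([13.0, 27.1, 42.8, 59.0, 75.2])   # same peptides, run B

coeff = np.polyfit(anchors_b, anchors_a, deg=2)         # low-order drift model
to_run_a = np.poly1d(coeff)

rt_b = 50.0                                             # a peak seen in run B
print(f"run-B RT {rt_b} min maps to ~{to_run_a(rt_b):.1f} min in run A")
```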

  3. Spatial tuning of acoustofluidic pressure nodes by altering net sonic velocity enables high-throughput, efficient cell sorting

    DOE PAGES

    Jung, Seung-Yong; Notton, Timothy; Fong, Erika; ...

    2015-01-07

    Particle sorting using acoustofluidics has enormous potential, but widespread adoption has been limited by complex device designs and low throughput. Here, we report high-throughput separation of particles and T lymphocytes (600 μL min⁻¹) by altering the net sonic velocity to reposition acoustic pressure nodes in a simple two-channel device. Moreover, the approach is generalizable to other microfluidic platforms for rapid, high-throughput analysis.

  4. De novo assembly of human genomes with massively parallel short read sequencing.

    PubMed

    Li, Ruiqiang; Zhu, Hongmei; Ruan, Jue; Qian, Wubin; Fang, Xiaodong; Shi, Zhongbin; Li, Yingrui; Li, Shengting; Shan, Gao; Kristiansen, Karsten; Li, Songgang; Yang, Huanming; Wang, Jian; Wang, Jun

    2010-02-01

    Next-generation massively parallel DNA sequencing technologies provide ultrahigh throughput at a substantially lower unit data cost; however, the reads are very short, making de novo assembly extremely challenging. Here, we describe a novel method for de novo assembly of large genomes from short read sequences. We successfully assembled both the Asian and African human genome sequences, achieving N50 contig sizes of 7.4 and 5.9 kilobases (kb) and N50 scaffold sizes of 446.3 and 61.9 kb, respectively. The development of this de novo short read assembly method creates new opportunities for building reference sequences and carrying out accurate analyses of unexplored genomes in a cost-effective way.
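
    The N50 statistic quoted above is the contig length L such that contigs of length at least L cover half of the total assembly. A minimal sketch with toy contig lengths:

```python
# A minimal sketch of the N50 statistic; contig lengths are toy values.
def n50(lengths):
    total, running = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

contigs = [9000, 7400, 5000, 3000, 2000, 1200, 800]   # toy lengths (bp)
print(n50(contigs))  # -> 7400: half of the 28400 bp total is reached here
```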

  5. Quantitative description on structure–property relationships of Li-ion battery materials for high-throughput computations

    PubMed Central

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-01-01

    Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage, and volume change of the bulk were considered. It is important to include more structure–property relationships, such as point defects, surfaces and interfaces, doping and metal mixing, and nanosize effects, in high-throughput calculations. In this review, we establish quantitative descriptions of structure–property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure–property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials. PMID:28458737

  6. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain is comprised of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of the processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as well as rapid upgrades of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants thanks to the simplicity of modification. This paper presents a reference design using the new approach that utilizes an Altivec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
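
    As an illustration of the front-end video processing stage mentioned above, the sketch below implements textbook two-point non-uniformity correction (NUC) with synthetic per-pixel gains and offsets; it is not the reference design's code.

```python
# A minimal sketch of two-point non-uniformity correction (NUC); the
# per-pixel gains and offsets are synthetic.
import numpy as np

rng = np.random.default_rng(1)
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))   # per-pixel responsivity
offset = 2.0 * rng.standard_normal((4, 4))        # per-pixel dark level

scene = np.full((4, 4), 100.0)                    # uniform true irradiance
raw = gain * scene + offset                       # what the FPA reports

corrected = (raw - offset) / gain                 # two-point NUC
print(np.allclose(corrected, scene))              # True
```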

  7. Targeted next-generation sequencing in steroid-resistant nephrotic syndrome: mutations in multiple glomerular genes may influence disease severity.

    PubMed

    Bullich, Gemma; Trujillano, Daniel; Santín, Sheila; Ossowski, Stephan; Mendizábal, Santiago; Fraga, Gloria; Madrid, Álvaro; Ariceta, Gema; Ballarín, José; Torra, Roser; Estivill, Xavier; Ars, Elisabet

    2015-09-01

    Genetic diagnosis of steroid-resistant nephrotic syndrome (SRNS) using Sanger sequencing is complicated by the high genetic heterogeneity and phenotypic variability of this disease. We aimed to improve the genetic diagnosis of SRNS by simultaneously sequencing 26 glomerular genes using massively parallel sequencing and to study whether mutations in multiple genes increase disease severity. High-throughput mutation analysis was performed in 50 SRNS and/or focal segmental glomerulosclerosis (FSGS) patients, a validation cohort of 25 patients with known pathogenic mutations, and a discovery cohort of 25 uncharacterized patients with probable genetic etiology. In the validation cohort, we identified the 42 previously known pathogenic mutations across the NPHS1, NPHS2, WT1, TRPC6, and INF2 genes. In the discovery cohort, disease-causing mutations in SRNS/FSGS genes were found in nine patients. We detected three patients with mutations in an SRNS/FSGS gene and COL4A3. Two of them were familial cases and presented a more severe phenotype than family members with a mutation in only one gene. In conclusion, our results show that massively parallel sequencing is feasible and robust for genetic diagnosis of SRNS/FSGS. Our results indicate that patients carrying mutations in an SRNS/FSGS gene and also in the COL4A3 gene have increased disease severity.

  8. Silicon photon-counting avalanche diodes for single-molecule fluorescence spectroscopy

    PubMed Central

    Michalet, Xavier; Ingargiola, Antonino; Colyer, Ryan A.; Scalia, Giuseppe; Weiss, Shimon; Maccagnani, Piera; Gulinatti, Angelo; Rech, Ivan; Ghioni, Massimo

    2014-01-01

    Solution-based single-molecule fluorescence spectroscopy is a powerful experimental tool with applications in cell biology, biochemistry and biophysics. The basic feature of this technique is to excite and collect light from a very small volume and work in a low concentration regime resulting in rare burst-like events corresponding to the transit of a single molecule. Detecting photon bursts is a challenging task: the small number of emitted photons in each burst calls for high detector sensitivity. Bursts are very brief, requiring detectors with fast response time and capable of sustaining high count rates. Finally, many bursts need to be accumulated to achieve proper statistical accuracy, resulting in long measurement time unless parallelization strategies are implemented to speed up data acquisition. In this paper we will show that silicon single-photon avalanche diodes (SPADs) best meet the needs of single-molecule detection. We will review the key SPAD parameters and highlight the issues to be addressed in their design, fabrication and operation. After surveying the state-of-the-art SPAD technologies, we will describe our recent progress towards increasing the throughput of single-molecule fluorescence spectroscopy in solution using parallel arrays of SPADs. The potential of this approach is illustrated with single-molecule Förster resonance energy transfer measurements. PMID:25309114

  9. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability, and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities, and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  10. High-throughput screening (HTS) and modeling of the retinoid ...

    EPA Pesticide Factsheets

    Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium, on the application of high throughput screening and modeling to the retinoid system.

  11. Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)

    EPA Science Inventory

    High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real-world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...

  12. High-throughput RAD-SNP genotyping for characterization of sugar beet genotypes

    USDA-ARS?s Scientific Manuscript database

    High-throughput SNP genotyping provides a rapid way of developing resourceful set of markers for delineating the genetic architecture and for effective species discrimination. In the presented research, we demonstrate a set of 192 SNPs for effective genotyping in sugar beet using high-throughput mar...

  13. Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)

    EPA Science Inventory

    Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays. DE DeGroot, RS Thomas, and SO Simmons, National Center for Computational Toxicology, US EPA, Research Triangle Park, NC, USA. The EPA’s ToxCast program utilizes a wide variety of high-throughput s...

  14. A quantitative literature-curated gold standard for kinase-substrate pairs

    PubMed Central

    2011-01-01

    We describe the Yeast Kinase Interaction Database (KID, http://www.moseslab.csb.utoronto.ca/KID/), which contains high- and low-throughput data relevant to phosphorylation events. KID includes 6,225 low-throughput and 21,990 high-throughput interactions, from more than 35,000 experiments. By quantitatively integrating these data, we identified 517 high-confidence kinase-substrate pairs that we consider a gold standard. We show that this gold standard can be used to assess published high-throughput datasets, suggesting that it will enable similar rigorous assessments in the future. PMID:21492431
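
    A minimal sketch of how such a gold standard can assess a high-throughput dataset, using precision and recall over interaction pairs (the pairs below are placeholders, not KID contents):

```python
# A minimal sketch of gold-standard assessment via precision and recall.
gold = {("CDC28", "SIC1"), ("PHO85", "PHO4"), ("SNF1", "MIG1")}  # placeholders
hts = {("CDC28", "SIC1"), ("SNF1", "MIG1"), ("TPK1", "XYZ9")}    # placeholders

tp = len(gold & hts)
precision, recall = tp / len(hts), tp / len(gold)
print(f"precision={precision:.2f} recall={recall:.2f}")
```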

  15. Outlook for Development of High-throughput Cryopreservation for Small-bodied Biomedical Model Fishes★

    PubMed Central

    Tiersch, Terrence R.; Yang, Huiping; Hu, E.

    2011-01-01

    With the development of genomic research technologies, comparative genome studies among vertebrate species are becoming commonplace for human biomedical research. Fish offer unlimited versatility for biomedical research. Extensive studies are done using these fish models, yielding tens of thousands of specific strains and lines, and the number is increasing every day. Thus, high-throughput sperm cryopreservation is urgently needed to preserve these genetic resources. Although high-throughput processing has been widely applied for sperm cryopreservation in livestock for decades, application in biomedical model fishes is still in the concept-development stage because of the limited sample volumes and the biological characteristics of fish sperm. High-throughput processing in livestock was developed based on advances made in the laboratory and was scaled up for increased processing speed, capability for mass production, and uniformity and quality assurance. Cryopreserved germplasm combined with high-throughput processing constitutes an independent industry encompassing animal breeding, preservation of genetic diversity, and medical research. Currently, there is no specifically engineered system available for high-throughput processing of cryopreserved germplasm for aquatic species. This review discusses the concepts and needs for high-throughput technology for model fishes, proposes approaches for technical development, and surveys future directions of this approach. PMID:21440666

  16. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    NASA Astrophysics Data System (ADS)

    Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.

    2011-12-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with zero-overhead context switching, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level, and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.
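
    A minimal sketch of event-based parallelism in the spirit described above (illustrative Python, not AthenaMP): independent events fan out to worker processes and a single writer merges the results, mirroring the final-stage I/O synchronization.

```python
# A minimal sketch of event-based parallelism with a single merging writer.
from multiprocessing import Pool

def reconstruct(event_id: int) -> dict:
    # Stand-in for per-event simulation/reconstruction work.
    return {"event": event_id, "tracks": (event_id * 7) % 5}

if __name__ == "__main__":
    with Pool(processes=4) as pool:                 # one worker per core
        results = pool.map(reconstruct, range(100)) # events are independent
    # Final-stage I/O synchronization: a single writer merges the outputs.
    print(len(results), sum(r["tracks"] for r in results))
```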

  17. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better color subsampling patterns such as YUV 4:2:2 or 4:4:4. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times to cater to different performance needs; the DFM serves the data required by the configured number of DFUs and also manages all the neighboring data required for subsequent DFU processing. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
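
    One of the parallel techniques mentioned above, independent-macroblock processing, is commonly organized as a wavefront: macroblock (x, y) depends on its left and top neighbors, so all macroblocks on the same anti-diagonal can be filtered in parallel. A minimal scheduling sketch (our illustration, not the DFU/DFM design):

```python
# A minimal sketch of wavefront macroblock scheduling for deblocking:
# MB (x, y) depends on its left and top neighbors, so each anti-diagonal
# (constant x + y) forms one parallel batch.
W, H = 6, 4                                     # picture size in macroblocks

waves = {}
for y in range(H):
    for x in range(W):
        waves.setdefault(x + y, []).append((x, y))

for wave in sorted(waves):                      # each wave: one parallel batch
    print(f"wave {wave}: {waves[wave]}")
```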

  18. Accurate high-throughput structure mapping and prediction with transition metal ion FRET

    PubMed Central

    Yu, Xiaozhen; Wu, Xiongwu; Bermejo, Guillermo A.; Brooks, Bernard R.; Taraska, Justin W.

    2013-01-01

    Mapping the landscape of a protein’s conformational space is essential to understanding its functions and regulation. The limitations of many structural methods have made this process challenging for most proteins. Here, we report that transition metal ion FRET (tmFRET) can be used in a rapid, highly parallel screen to determine distances from multiple locations within a protein at extremely low concentrations. The distances generated through this screen for maltose-binding protein (MBP) match distances from the crystal structure to within a few angstroms. Furthermore, energy transfer accurately detects structural changes during ligand binding. Finally, fluorescence-derived distances can be used to guide molecular simulations to find low energy states. Our results open the door to rapid, accurate mapping and prediction of protein structures at low concentrations, in large complex systems, and in living cells. PMID:23273426
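
    The distance readout behind FRET methods such as tmFRET follows the Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch inverting it, with an illustrative short R0 typical of transition metal acceptors:

```python
# A minimal sketch of the Förster relation E = 1 / (1 + (r/R0)^6), inverted
# to get distance from measured efficiency. R0 and E values are illustrative.
def distance_from_fret(E: float, R0: float) -> float:
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

R0 = 12.0                       # Angstroms; tmFRET pairs have short R0
for E in (0.2, 0.5, 0.8):
    print(f"E={E:.1f} -> r ≈ {distance_from_fret(E, R0):.1f} Å")
```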

  19. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE PAGES

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  20. Integration of next-generation sequencing in clinical diagnostic molecular pathology laboratories for analysis of solid tumours; an expert opinion on behalf of IQN Path ASBL.

    PubMed

    Deans, Zandra C; Costa, Jose Luis; Cree, Ian; Dequeker, Els; Edsjö, Anders; Henderson, Shirley; Hummel, Michael; Ligtenberg, Marjolijn Jl; Loddo, Marco; Machado, Jose Carlos; Marchetti, Antonio; Marquis, Katherine; Mason, Joanne; Normanno, Nicola; Rouleau, Etienne; Schuuring, Ed; Snelson, Keeda-Marie; Thunnissen, Erik; Tops, Bastiaan; Williams, Gareth; van Krieken, Han; Hall, Jacqueline A

    2017-01-01

    The clinical demand for mutation detection within multiple genes from a single tumour sample requires molecular diagnostic laboratories to develop rapid, high-throughput, highly sensitive, accurate and parallel testing within tight budget constraints. To meet this demand, many laboratories employ next-generation sequencing (NGS) based on small amplicons. Building on existing publications, general guidance for the clinical use of NGS, and lessons learned from germline testing, the following guidelines establish consensus standards for somatic diagnostic testing, specifically for identifying and reporting mutations in solid tumours. These guidelines cover the testing strategy, implementation of testing within clinical service, sample requirements, data analysis and reporting of results. In conjunction with appropriate staff training and international standards for laboratory testing, these consensus standards for the use of NGS in molecular pathology of solid tumours will assist laboratories in implementing NGS in clinical services.

  1. Titanium(IV) isopropoxide mediated solution phase reductive amination on an automated platform: application in the generation of urea and amide libraries.

    PubMed

    Bhattacharyya, S; Fan, L; Vo, L; Labadie, J

    2000-04-01

    Amine libraries and their derivatives are important targets for high throughput synthesis because of their versatility as medicinal agents and agrochemicals. As a part of our efforts towards automated chemical library synthesis, a titanium(IV) isopropoxide mediated solution phase reductive amination protocol was successfully translated to automation on the Trident(TM) library synthesizer of Argonaut Technologies. An array of 24 secondary amines was prepared in high yield and purity from 4 primary amines and 6 carbonyl compounds. These secondary amines were further utilized in a split synthesis to generate libraries of ureas, amides and sulfonamides in solution phase on the Trident(TM). The automated runs included 192 reactions to synthesize 96 ureas in duplicate and 96 reactions to synthesize 48 amides and 48 sulfonamides. A number of polymer-assisted solution phase protocols were employed for parallel work-up and purification of the products in each step.

  2. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  4. A Distributed Amplifier System for Bilayer Lipid Membrane (BLM) Arrays With Noise and Individual Offset Cancellation.

    PubMed

    Crescentini, Marco; Thei, Frederico; Bennati, Marco; Saha, Shimul; de Planque, Maurits R R; Morgan, Hywel; Tartagni, Marco

    2015-06-01

    Lipid bilayer membrane (BLM) arrays are required for high throughput analysis, for example drug screening or advanced DNA sequencing. Complex microfluidic devices are being developed but these are restricted in terms of array size and structure or have integrated electronic sensing with limited noise performance. We present a compact and scalable multichannel electrophysiology platform based on a hybrid approach that combines integrated state-of-the-art microelectronics with low-cost disposable fluidics providing a platform for high-quality parallel single ion channel recording. Specifically, we have developed a new integrated circuit amplifier based on a novel noise cancellation scheme that eliminates flicker noise derived from devices under test and amplifiers. The system is demonstrated through the simultaneous recording of ion channel activity from eight bilayer membranes. The platform is scalable and could be extended to much larger array sizes, limited only by electronic data decimation and communication capabilities.

  6. Optimization of three- and four-component reactions for polysubstituted piperidines: application to the synthesis and preliminary biological screening of a prototype library.

    PubMed

    Ulaczyk-Lesanko, Agnieszka; Pelletier, Eric; Lee, Maria; Prinz, Heino; Waldmann, Herbert; Hall, Dennis G

    2007-01-01

    Several solid- and solution-phase strategies were evaluated for the preparation of libraries of polysubstituted piperidines of type 7 using the tandem aza[4+2]cycloaddition/allylboration multicomponent reaction between 1-aza-4-boronobutadienes, maleimides, and aldehydes. A novel four-component variant of this chemistry was developed in solution phase, and it circumvents the need for pre-forming the azabutadiene component. A parallel synthesis coupled with compound purification by HPLC with mass-based fraction collection allowed the preparation of a library of 944 polysubstituted piperidines in a high degree of purity suitable for biological screening. A representative subset of 244 compounds was screened against a panel of phosphatase enzymes, and despite the modest levels of activity obtained, this study demonstrated that piperidines of type 7 display the right physical properties (e.g., solubility) to be assayed effectively in high-throughput enzymatic tests.

  7. Functional annotation of chemical libraries across diverse biological processes.

    PubMed

    Piotrowski, Jeff S; Li, Sheena C; Deshpande, Raamesh; Simpkins, Scott W; Nelson, Justin; Yashiroda, Yoko; Barber, Jacqueline M; Safizadeh, Hamid; Wilson, Erin; Okada, Hiroki; Gebre, Abraham A; Kubo, Karen; Torres, Nikko P; LeBlanc, Marissa A; Andrusiak, Kerry; Okamoto, Reika; Yoshimura, Mami; DeRango-Adem, Eva; van Leeuwen, Jolanda; Shirahige, Katsuhiko; Baryshnikova, Anastasia; Brown, Grant W; Hirano, Hiroyuki; Costanzo, Michael; Andrews, Brenda; Ohya, Yoshikazu; Osada, Hiroyuki; Yoshida, Minoru; Myers, Chad L; Boone, Charles

    2017-09-01

    Chemical-genetic approaches offer the potential for unbiased functional annotation of chemical libraries. Mutations can alter the response of cells in the presence of a compound, revealing chemical-genetic interactions that can elucidate a compound's mode of action. We developed a highly parallel, unbiased yeast chemical-genetic screening system involving three key components. First, in a drug-sensitive genetic background, we constructed an optimized diagnostic mutant collection that is predictive for all major yeast biological processes. Second, we implemented a multiplexed (768-plex) barcode-sequencing protocol, enabling the assembly of thousands of chemical-genetic profiles. Finally, based on comparison of the chemical-genetic profiles with a compendium of genome-wide genetic interaction profiles, we predicted compound functionality. Applying this high-throughput approach, we screened seven different compound libraries and annotated their functional diversity. We further validated biological process predictions, prioritized a diverse set of compounds, and identified compounds that appear to have dual modes of action.
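
    A minimal sketch of the profile-comparison step described above (toy vectors, not the published data): score a compound's chemical-genetic profile against a compendium of genetic interaction profiles and report the most similar query gene.

```python
# A minimal sketch of mode-of-action prediction by profile similarity;
# gene names and values are toy stand-ins.
import numpy as np

genes = ["ERG11", "TUB1", "SEC14"]                     # mutant panel
compound_profile = np.array([-2.1, 0.3, -0.2])         # fitness z-scores

compendium = {                                         # per-query-gene profiles
    "erg3-del": np.array([-1.8, 0.1, -0.3]),
    "tub3-del": np.array([0.2, -2.4, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(compendium, key=lambda g: cosine(compound_profile, compendium[g]))
print("compound profile most resembles", best)
```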

  8. Aviation System Capacity Program Terminal Area Productivity Project: Ground and Airborne Technologies

    NASA Technical Reports Server (NTRS)

    Giulianetti, Demo J.

    2001-01-01

    Ground and airborne technologies were developed in the Terminal Area Productivity (TAP) project for increasing throughput at major airports by safely maintaining good-weather operating capacity during bad weather. Methods were demonstrated for accurately predicting vortices to prevent wake-turbulence encounters and to reduce in-trail separation requirements for aircraft approaching the same runway for landing. Technology was demonstrated that safely enabled independent simultaneous approaches in poor weather conditions to parallel runways spaced less than 3,400 ft apart. Guidance, control, and situation-awareness systems were developed to reduce congestion in airport surface operations resulting from the increased throughput, particularly during night and instrument meteorological conditions (IMC). These systems decreased runway occupancy time by safely and smoothly decelerating the aircraft, increasing taxi speed, and safely steering the aircraft off the runway. Simulations were performed in which optimal trajectories were determined by air traffic control (ATC) and communicated to flight crews by means of Center TRACON Automation System/Flight Management System (CTAS/FMS) automation to reduce flight delays, increase throughput, and ensure flight safety.

  9. Multi-target Parallel Processing Approach for Gene-to-structure Determination of the Influenza Polymerase PB2 Subunit

    PubMed Central

    Moen, Spencer O.; Smith, Eric; Raymond, Amy C.; Fairman, James W.; Stewart, Lance J.; Staker, Bart L.; Begley, Darren W.; Edwards, Thomas E.; Lorimer, Donald D.

    2013-01-01

    Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year [1]. Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans [2]. Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Diseases (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A–C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357

  10. Kmerind: A Flexible Parallel Library for K-mer Indexing of Biological Sequences on Distributed Memory Systems.

    PubMed

    Pan, Tony; Flick, Patrick; Jain, Chirag; Liu, Yongchao; Aluru, Srinivas

    2017-10-09

    Counting and indexing fixed length substrings, or k-mers, in biological sequences is a key step in many bioinformatics tasks including genome alignment and mapping, genome assembly, and error correction. While advances in next generation sequencing technologies have dramatically reduced the cost and improved latency and throughput, few bioinformatics tools can efficiently process the datasets at the current generation rate of 1.8 terabases every 3 days. We present Kmerind, a high performance parallel k-mer indexing library for distributed memory environments. The Kmerind library provides a set of simple and consistent APIs with sequential semantics and parallel implementations that are designed to be flexible and extensible. Kmerind's k-mer counter performs similarly or better than the best existing k-mer counting tools even on shared memory systems. In a distributed memory environment, Kmerind counts k-mers in a 120 GB sequence read dataset in less than 13 seconds on 1024 Xeon CPU cores, and fully indexes their positions in approximately 17 seconds. Querying for 1% of the k-mers in these indices can be completed in 0.23 seconds and 28 seconds, respectively. Kmerind is the first k-mer indexing library for distributed memory environments, and the first extensible library for general k-mer indexing and counting. Kmerind is available at https://github.com/ParBLiSS/kmerind.
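
    The core operation that Kmerind parallelizes, extracting and tallying fixed-length substrings, is easy to state in a few lines. The sketch below is a single-node Python illustration, not Kmerind's C++/MPI API; in the distributed setting, each k-mer is hashed to an owner rank, which holds that k-mer's count and position list.

        from collections import Counter

        def count_kmers(seq: str, k: int) -> Counter:
            # Slide a window of length k across the sequence and tally each
            # substring; a distributed version would hash each k-mer to the
            # process responsible for it before aggregating counts.
            return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

        reads = ["ACGTACGTAC", "GTACGTAACG"]
        counts = Counter()
        for read in reads:
            counts.update(count_kmers(read, k=4))
        print(counts.most_common(3))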

  11. Air-cathode microbial fuel cell array: a device for identifying and characterizing electrochemically active microbes.

    PubMed

    Hou, Huijie; Li, Lei; de Figueiredo, Paul; Han, Arum

    2011-01-15

    Microbial fuel cells (MFCs) have generated excitement in environmental and bioenergy communities due to their potential for coupling wastewater treatment with energy generation and powering diverse devices. The pursuit of strategies such as improving microbial cultivation practices and optimizing MFC devices has increased power generating capacities of MFCs. However, surprisingly few microbial species with electrochemical activity in MFCs have been identified because current devices do not support parallel analyses or high throughput screening. We have recently demonstrated the feasibility of using advanced microfabrication methods to fabricate an MFC microarray. Here, we extend these studies by demonstrating a microfabricated air-cathode MFC array system. The system contains 24 individual air-cathode MFCs integrated onto a single chip. The device enables the direct and parallel comparison of different microbes loaded onto the array. Environmental samples were used to validate the utility of the air-cathode MFC array system and two previously identified isolates, 7Ca (Shewanella sp.) and 3C (Arthrobacter sp.), were shown to display enhanced electrochemical activities of 2.69 mW/m² and 1.86 mW/m², respectively. Experiments using a large scale conventional air-cathode MFC validated these findings. The parallel air-cathode MFC array system demonstrated here is expected to promote and accelerate the discovery and characterization of electrochemically active microbes. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Evaluation of parallel milliliter-scale stirred-tank bioreactors for the study of biphasic whole-cell biocatalysis with ionic liquids.

    PubMed

    Dennewald, Danielle; Hortsch, Ralf; Weuster-Botz, Dirk

    2012-01-01

    As clear structure–activity relationships are still rare for ionic liquids, preliminary experiments are necessary for the process development of biphasic whole-cell processes involving these solvents. To reduce the time investment and material costs, the process development of such biphasic reaction systems would profit from a small-scale high-throughput platform. As an example, the reduction of 2-octanone to (R)-2-octanol by a recombinant Escherichia coli in a biphasic ionic liquid/water system was studied in a miniaturized stirred-tank bioreactor system allowing the parallel operation of up to 48 reactors at the mL scale. The results were compared to those obtained in a 20-fold larger stirred-tank reactor. The maximum local energy dissipation was evaluated at the larger scale and compared to the data available for the small-scale reactors to verify whether similar mass transfer could be obtained at both scales. Thereafter, the reaction kinetics and final conversions reached in different reaction setups were analysed. The results were in good agreement between both scales for varying ionic liquids and for ionic liquid volume fractions up to 40%. The parallel bioreactor system can thus be used for the process development of the majority of biphasic reaction systems involving ionic liquids, reducing the time and resource investment during the development of applications of this type. Copyright © 2011. Published by Elsevier B.V.
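
    The scale comparison above hinges on matching the hydrodynamic stress between scales. As a rough single-number proxy, the mean specific power input of a stirred tank can be estimated from the standard turbulent-regime correlation P = Np·ρ·N³·D⁵; all geometry and speed values in this sketch are assumed for illustration and are not taken from the study.

        # Illustrative scale comparison via mean specific power input.
        # Turbulent stirred-tank correlation: P = Np * rho * N**3 * D**5.
        # Every number below is an assumption, not data from the paper.

        def mean_specific_power(Np, rho, N, D, V):
            """Mean specific power input (W/kg) for a stirred vessel."""
            P = Np * rho * N**3 * D**5      # impeller power draw, W
            return P / (rho * V)            # normalized by liquid mass

        # Hypothetical mL-scale reactor vs. a 20-fold larger vessel.
        eps_small = mean_specific_power(Np=5.0, rho=1000.0, N=2300 / 60, D=0.02, V=10e-6)
        eps_large = mean_specific_power(Np=5.0, rho=1000.0, N=900 / 60, D=0.05, V=200e-6)
        print(f"small: {eps_small:.0f} W/kg, large: {eps_large:.0f} W/kg")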

  13. Enhancing high throughput toxicology - development of putative adverse outcome pathways linking US EPA ToxCast screening targets to relevant apical hazards.

    EPA Science Inventory

    High throughput toxicology programs, such as ToxCast and Tox21, have provided biological effects data for thousands of chemicals at multiple concentrations. Compared to traditional, whole-organism approaches, high throughput assays are rapid and cost-effective, yet they generall...

  14. Evaluation of High-Throughput Chemical Exposure Models via Analysis of Matched Environmental and Biological Media Measurements

    EPA Science Inventory

    The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer ...

  15. The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD

    NASA Astrophysics Data System (ADS)

    Cox, M. A.; Reed, R.; Mellado, B.

    2015-01-01

    After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption, and high performance. It is proposed that a cost-effective, high-data-throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration, allowing aggregated processing performance and data throughput while keeping software design simple for the end user. This PU could be used for a variety of high-level functions on the high-throughput raw data, such as spectral analysis and histogramming, to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM systems-on-chip, but high data throughput is feasible through the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given, and results of performance and throughput testing of four different ARM Cortex systems-on-chip are presented.

  16. [Current applications of high-throughput DNA sequencing technology in antibody drug research].

    PubMed

    Yu, Xin; Liu, Qi-Gang; Wang, Ming-Rong

    2012-03-01

    Since the 2005 publication of a high-throughput DNA sequencing technology based on PCR carried out in oil emulsions, high-throughput DNA sequencing platforms have evolved into a robust technology for sequencing genomes and diverse DNA libraries. Antibody libraries with vast numbers of members currently serve as a foundation for discovering novel antibody drugs, and high-throughput DNA sequencing technology makes it possible to rapidly identify functional antibody variants with desired properties. Herein we present a review of current applications of high-throughput DNA sequencing technology in the analysis of antibody library diversity, sequencing of CDR3 regions, identification of potent antibodies based on sequence frequency, discovery of functional genes, and combination with various display technologies, so as to provide an alternative approach to the discovery and development of antibody drugs.
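
    One of the reviewed applications, identifying potent antibodies from sequence frequency, amounts to tallying clone (e.g., CDR3) reads across selection rounds and ranking clones by enrichment. The sequences and counts below are invented for illustration only.

        from collections import Counter

        # Toy CDR3 read sets before and after one round of selection
        # (sequences are made up for illustration).
        round0 = ["CARDYW", "CARGGW", "CARDYW", "CAKTSW"] * 5
        round1 = ["CARDYW"] * 30 + ["CAKTSW"] * 3 + ["CARGGW"] * 2

        def enrichment(before, after):
            f0, f1 = Counter(before), Counter(after)
            n0, n1 = len(before), len(after)
            # Ratio of relative read frequencies across rounds; clones that
            # grow in frequency are candidates for potent binders.
            return {s: (f1[s] / n1) / (f0[s] / n0) for s in f1 if s in f0}

        for seq, ratio in sorted(enrichment(round0, round1).items(),
                                 key=lambda kv: -kv[1]):
            print(seq, round(ratio, 2))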

  17. High power parallel ultrashort pulse laser processing

    NASA Astrophysics Data System (ADS)

    Gillner, Arnold; Gretzki, Patrick; Büsing, Lasse

    2016-03-01

    Ultra-short-pulse (USP) laser sources are used whenever high-precision, high-quality material processing is demanded. These sources deliver pulse durations in the range of ps to fs and are characterized by high peak intensities, leading to direct vaporization of the material with minimal thermal damage. With the availability of industrial laser sources with an average power of up to 1000 W, the main challenge consists of distributing and depositing the pulse energy effectively. Using lasers with high repetition rates in the MHz region can cause thermal issues such as overheating, melt formation, and low ablation quality. In this paper, we will discuss different approaches to multibeam processing for the utilization of high pulse energies. The combination of diffractive optics and a conventional galvanometer scanner can be used for high-throughput laser ablation but is limited in optical quality. We will show which applications can benefit from this hybrid optic and which improvements in productivity can be expected. In addition, the optical limitations of the system will be compiled in order to evaluate the suitability of this approach for any given application.

  18. A massive parallel sequencing workflow for diagnostic genetic testing of mismatch repair genes

    PubMed Central

    Hansen, Maren F; Neckmann, Ulrike; Lavik, Liss A S; Vold, Trine; Gilde, Bodil; Toft, Ragnhild K; Sjursen, Wenche

    2014-01-01

    The purpose of this study was to develop a massive parallel sequencing (MPS) workflow for diagnostic analysis of mismatch repair (MMR) genes using the GS Junior system (Roche). A pathogenic variant in one of four MMR genes (MLH1, PMS2, MSH6, and MSH2) is the cause of Lynch syndrome (LS), which predisposes mainly to colorectal cancer. We used an amplicon-based sequencing method allowing specific and preferential amplification of the MMR genes, including PMS2, of which several pseudogenes exist. The amplicons were pooled at different ratios to obtain coverage uniformity and maximize the throughput of a single GS Junior run. In total, 60 previously identified and distinct variants (substitutions and indels) were sequenced by MPS and successfully detected. The heterozygote detection range was from 19% to 63% and was dependent on sequence context and coverage. We were able to distinguish between false-positive and true-positive calls in homopolymeric regions by cross-sample comparison and evaluation of flow-signal distributions. In addition, we filtered variants according to a predefined status, which facilitated variant annotation. Our study shows that implementation of MPS in routine diagnostics of LS can accelerate sample throughput and reduce costs without compromising sensitivity, compared to Sanger sequencing. PMID:24689082
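
    The cross-sample comparison used to reject homopolymer artifacts can be sketched simply: a true heterozygous variant shows an elevated allele fraction in one sample, while a systematic sequencing artifact appears at a similar low fraction in every sample. The variant labels, fractions, and threshold below are illustrative assumptions, not the study's actual criteria.

        # Flag likely false-positive homopolymer calls by cross-sample comparison.
        calls = {  # variant label -> allele fraction observed per sample
            "variant_A": {"s1": 0.42, "s2": 0.02, "s3": 0.03},  # one clear carrier
            "variant_B": {"s1": 0.11, "s2": 0.12, "s3": 0.10},  # uniform low signal
        }

        def classify(fractions, noise_ceiling=0.15):
            # Any sample above the assumed noise ceiling counts as real signal;
            # the 19% lower bound reported above sits safely over this ceiling.
            # Uniformly low fractions across samples suggest a systematic artifact.
            outliers = [v for v in fractions.values() if v > noise_ceiling]
            return "candidate heterozygote" if outliers else "likely artifact"

        for variant, fractions in calls.items():
            print(variant, "->", classify(fractions))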

  19. A paralleled readout system for an electrical DNA-hybridization assay based on a microstructured electrode array

    NASA Astrophysics Data System (ADS)

    Urban, Matthias; Möller, Robert; Fritzsche, Wolfgang

    2003-02-01

    DNA analytics is a growing field, based on increasing knowledge about the genome, with special implications for understanding the molecular bases of disease. Driven by the need for cost-effective and high-throughput methods for molecular detection, DNA chips are an interesting alternative to more traditional analytical methods in this field. The standard readout principle for DNA chips is fluorescence-based. Fluorescence is highly sensitive and broadly established, but it shows limitations regarding quantification (due to signal and/or dye instability) and requires sophisticated (and therefore high-cost) equipment. This article introduces a readout system for an alternative detection scheme based on electrical detection of nanoparticle-labeled DNA. If labeled DNA is present in the analyte solution, it binds to complementary capture DNA immobilized in a microelectrode gap. A subsequent metal-enhancement step deposits conductive material on the nanoparticles, finally creating an electrical contact between the electrodes. This detection scheme offers the potential for a simple (low-cost as well as robust) and highly miniaturizable method, well suited for point-of-care applications in the context of lab-on-a-chip technologies. The demonstrated apparatus allows parallel readout of an entire array of microstructured measurement sites. The readout is combined with data processing by an embedded personal computer, resulting in an autonomous instrument that measures and presents the results. The design and realization of such a system are described, and first measurements are presented.
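
    The parallel readout logic reduces to measuring the resistance across each microelectrode gap and calling hybridization where metal enhancement has made the gap conductive. The function read_resistance and the threshold below are hypothetical stand-ins for the instrument's multiplexer and measurement electronics.

        import random

        THRESHOLD_OHM = 1e6  # assumed: below this, the gap counts as conductive

        def read_resistance(site: int) -> float:
            # Placeholder for the multiplexed hardware measurement; here we
            # simulate sites as either metallized (kOhm) or open (GOhm).
            return random.choice([1e3, 1e9])

        # One pass over a hypothetical 16-site electrode array.
        results = {site: ("hybridized" if read_resistance(site) < THRESHOLD_OHM
                          else "no binding")
                   for site in range(16)}
        print(results)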

  20. High-throughput screening approaches and combinatorial development of biomaterials using microfluidics.

    PubMed

    Barata, David; van Blitterswijk, Clemens; Habibovic, Pamela

    2016-04-01

    From the first microfluidic devices used for analysis of single metabolic by-products to highly complex multicompartmental co-culture organ-on-chip platforms, efforts of many multidisciplinary teams around the world have been invested in overcoming the limitations of conventional research methods in the biomedical field. Close spatial and temporal control over fluids and physical parameters, integration of sensors for direct read-out, and the possibility to increase screening throughput through parallelization, multiplexing, and automation are some of the advantages of microfluidic systems over conventional 2D tissue-culture in vitro systems. Moreover, the small volumes and relatively small cell numbers used in experimental set-ups involving microfluidics can potentially decrease research costs. On the other hand, these small volumes and cell numbers also mean that many conventional molecular biology and biochemistry assays cannot be directly applied to experiments performed in microfluidic platforms. Development of different types of assays, and evidence that such assays are indeed a suitable alternative to conventional ones, is a step that needs to be taken in order to have microfluidics-based platforms fully adopted in biomedical research. In this review, rather than providing a comprehensive overview of the literature on microfluidics, we aim to discuss developments in the field of microfluidics that can aid the advancement of biomedical research, with emphasis on the field of biomaterials. Three important topics will be discussed, namely: screening, in particular high-throughput and combinatorial screening; mimicking of the natural microenvironment, ranging from 3D hydrogel-based cellular niches to organ-on-chip devices; and production of biomaterials with closely controlled properties. While important technical aspects of the various platforms will be discussed, the focus is mainly on their applications, including the state of the art, future perspectives, and challenges. Microfluidics, a technology characterized by the engineered manipulation of fluids at the submillimeter scale, offers some interesting tools that can advance biomedical research and development. Screening platforms based on microfluidic technologies that allow high-throughput and combinatorial screening may lead to breakthrough discoveries not only in basic research but also in clinical application. This is further strengthened by the fact that the reliability of such screens may improve, since microfluidic systems allow close mimicking of physiological conditions. Finally, microfluidic systems are also very promising as micro-factories for a new generation of natural or synthetic biomaterials and constructs with finely controlled properties. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
