Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of its centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute for a scalable, centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and the computation complexity of the routing table update procedure in a simulation study.
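The abstract above does not spell out the heuristic; the Python sketch below is one plausible reading, not the authors' implementation: requests are ordered hottest-first (demand intensity weighted by hop distance), routed on shortest paths, and assigned wavelengths first-fit under the wavelength-continuity constraint. The toy topology and request list are invented for illustration.

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path on an unweighted topology; returns a list of links."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if dst not in prev:
        return None
    path, node = [], dst
    while prev[node] is not None:
        path.append(frozenset((prev[node], node)))
        node = prev[node]
    return path[::-1]

def hottest_first_rwa(adj, requests, num_wavelengths):
    """Serve the 'hottest' requests first (demand intensity weighted by
    end-to-end hop distance), route on shortest paths, and assign
    wavelengths first-fit under the wavelength-continuity constraint."""
    paths = {(s, d): shortest_path(adj, s, d) for s, d, _ in requests}
    used = set()            # (link, wavelength) pairs already occupied
    assignments = {}
    def hotness(req):
        src, dst, demand = req
        return demand * len(paths[(src, dst)] or [])
    for src, dst, demand in sorted(requests, key=hotness, reverse=True):
        path = paths[(src, dst)]
        if not path:
            continue
        for w in range(num_wavelengths):
            if all((link, w) not in used for link in path):
                used.update((link, w) for link in path)
                assignments[(src, dst)] = (path, w)
                break
    return assignments

# Toy 4-node ring and three demands (src, dst, demand intensity).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
requests = [(0, 2, 5), (1, 3, 2), (0, 1, 1)]
print(hottest_first_rwa(adj, requests, num_wavelengths=2))
```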
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
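The abstract says data are distributed to cluster nodes by partitioning a spatial index, but does not name the index. A common choice for 3-D image stacks is a Morton (Z-order) key; the sketch below only illustrates that idea and is not the openconnecto.me implementation. The cuboid size and node count are made-up parameters.

```python
def interleave_bits(x, y, z, bits=21):
    """Interleave the bits of x, y, z into a single Morton (Z-order) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for_voxel(x, y, z, cube=128, num_nodes=8):
    """Map a voxel to a storage node: quantize to a cuboid, then map the
    cuboid's Morton key onto the cluster. Nearby cuboids tend to share
    nodes, which preserves spatial locality for range reads."""
    key = interleave_bits(x // cube, y // cube, z // cube)
    return key % num_nodes

print(node_for_voxel(1000, 2000, 30))   # 3
print(node_for_voxel(1001, 2001, 31))   # same cuboid -> same node
```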
Air-stable ink for scalable, high-throughput layer deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weil, Benjamin D; Connor, Stephen T; Cui, Yi
A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
Peterson, Kevin J.; Pathak, Jyotishman
2014-01-01
Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
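The HQMF/QDM-to-Drools transformation itself is not reproduced here. The sketch below only illustrates why measure execution scales linearly, as reported above: each patient can be scored independently (map) and the counts summed (reduce). The toy measure logic and patient records are invented for illustration.

```python
from multiprocessing import Pool

# A toy "measure": denominator = diabetic patients, numerator = those with
# an HbA1c result below 8.0. Real eCQMs are compiled from HQMF/QDM; this
# predicate is purely illustrative.
def score_patient(patient):
    in_denominator = "diabetes" in patient["conditions"]
    in_numerator = in_denominator and patient.get("hba1c", 99) < 8.0
    return (int(in_denominator), int(in_numerator))

def reduce_counts(results):
    denom = sum(d for d, _ in results)
    num = sum(n for _, n in results)
    return num, denom

if __name__ == "__main__":
    patients = [
        {"conditions": ["diabetes"], "hba1c": 7.1},
        {"conditions": ["diabetes"], "hba1c": 9.4},
        {"conditions": ["hypertension"]},
    ]
    with Pool() as pool:                      # scales out over cores/nodes
        results = pool.map(score_patient, patients)
    num, denom = reduce_counts(results)
    print(f"measure rate: {num}/{denom}")     # 1/2
```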
Shir, Daniel; Ballard, Zachary S.; Ozcan, Aydogan
2016-01-01
Mechanical flexibility and the advent of scalable, low-cost, and high-throughput fabrication techniques have enabled numerous potential applications for plasmonic sensors. Sensitive and sophisticated biochemical measurements can now be performed through the use of flexible plasmonic sensors integrated into existing medical and industrial devices or sample collection units. More robust sensing schemes and practical techniques must be further investigated to fully realize the potentials of flexible plasmonics as a framework for designing low-cost, embedded and integrated sensors for medical, environmental, and industrial applications. PMID:27547023
Screening for endocrine-disrupting chemicals (EDCs) requires sensitive, scalable assays. Current high-throughput screening (HTPS) approaches for estrogenic and androgenic activity yield rapid results, but many are not sensitive to physiological hormone concentrations, suggesting ...
Juluri, Bala Krishna; Chaturvedi, Neetu; Hao, Qingzhen; Lu, Mengqian; Velegol, Darrell; Jensen, Lasse; Huang, Tony Jun
2014-01-01
Localization of large electric fields in plasmonic nanostructures enables various processes such as single molecule detection, higher harmonic light generation, and control of molecular fluorescence and absorption. High-throughput, simple nanofabrication techniques are essential for implementing plasmonic nanostructures with large electric fields for practical applications. In this article we demonstrate a scalable, rapid, and inexpensive fabrication method based on the salting-out quenching technique and colloidal lithography for the fabrication of two types of nanostructures with large electric field: nanodisk dimers and cusp nanostructures. Our technique relies on fabricating polystyrene doublets from single beads by controlled aggregation and later using them as soft masks to fabricate metal nanodisk dimers and nanocusp structures. Both of these structures have a well-defined geometry for the localization of large electric fields comparable to structures fabricated by conventional nanofabrication techniques. We also show that various parameters in the fabrication process can be adjusted to tune the geometry of the final structures and control their plasmonic properties. With advantages in throughput, cost, and geometric tunability, our fabrication method can be valuable in many applications that require plasmonic nanostructures with large electric fields. PMID:21692473
Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications
NASA Astrophysics Data System (ADS)
Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei
2007-04-01
In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. By using the PE rings efficiently together with an intelligent memory-interleaving organization, the efficiency of the architecture is increased. Moreover, efficient on-chip memories and a data management technique effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core for multimedia system-on-chip applications.
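The PE-ring datapath is not reproduced here; the sketch below shows only the full-search block-matching kernel (sum of absolute differences) that such PE arrays parallelize, with NumPy standing in for the hardware. Block size and search range, the parameters the abstract calls scalable, appear as explicit arguments.

```python
import numpy as np

def full_search(cur_block, ref_frame, top, left, search_range=8):
    """Full-search block matching: return the motion vector minimizing the
    sum of absolute differences (SAD). Each candidate SAD is independent,
    which is what a scalable PE array exploits."""
    n = cur_block.shape[0]
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            sad = np.abs(cur_block.astype(int) -
                         ref_frame[y:y + n, x:x + n].astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))             # synthetic shift of (2, -3)
mv, sad = full_search(cur[16:32, 16:32], ref, 16, 16)
print(mv, sad)                                        # expect (-2, 3), SAD 0
```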
High Throughput Transcriptomics @ USEPA (Toxicology ...
The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose-dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant’s platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses. PMID:26752627
NASA Astrophysics Data System (ADS)
Fang, Sheng-Po; Jao, PitFee; Senior, David E.; Kim, Kyoung-Tae; Yoon, Yong-Kyu
2017-12-01
High-throughput nanomanufacturing of photopatternable nanofibers and subsequent photopatterning is reported. For the production of high-density nanofibers, the tube nozzle electrospinning (TNE) process has been used, in which an array of micronozzles on the sidewall of a plastic tube serves as the spinnerets. By increasing the density of nozzles, the electric fields of adjacent nozzles confine the cone of electrospinning and give a higher density of nanofibers. With TNE, higher nozzle densities are more easily achievable than with metallic nozzles; e.g., an inter-nozzle distance as small as 0.5 cm and an average semi-vertical repulsion angle of 12.28° were achieved for 8 nozzles. Nanofiber diameter distribution, mass throughput rate, and growth rate of nanofiber stacks in different operating conditions and with different numbers of nozzles (2, 4, and 8), as well as scalability with single and double tube configurations, are discussed. Nanofibers made of SU-8, a photopatternable epoxy, have been collected to a thickness of over 80 μm in 240 s of electrospinning, and a production rate of 0.75 g/h is achieved using the 2-tube, 8-nozzle system, followed by photolithographic micropatterning. TNE is scalable to a large number of nozzles and offers high-throughput production, plug-and-play capability with standard electrospinning equipment, and little waste of polymer.
High throughput system for magnetic manipulation of cells, polymers, and biomaterials
Spero, Richard Chasen; Vicci, Leandra; Cribb, Jeremy; Bober, David; Swaminathan, Vinay; O’Brien, E. Timothy; Rogers, Stephen L.; Superfine, R.
2008-01-01
In the past decade, high throughput screening (HTS) has changed the way biochemical assays are performed, but manipulation and mechanical measurement of micro- and nanoscale systems have not benefited from this trend. Techniques using microbeads (particles ∼0.1–10 μm) show promise for enabling high throughput mechanical measurements of microscopic systems. We demonstrate instrumentation to magnetically drive microbeads in a biocompatible, multiwell magnetic force system. It is based on commercial HTS standards and is scalable to 96 wells. Cells can be cultured in this magnetic high throughput system (MHTS). The MHTS can apply independently controlled forces to 16 specimen wells. Force calibrations demonstrate forces in excess of 1 nN, predicted force saturation as a function of pole material, and power-law dependence of F ∼ r^(−2.7±0.1). We employ this system to measure the stiffness of SR2+ Drosophila cells. MHTS technology is a key step toward a high throughput screening system for micro- and nanoscale biophysical experiments. PMID:19044357
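As a brief numerical aside (not from the paper): given force-versus-distance calibration points, the exponent in a power law such as F ∼ r^(−2.7) is simply the slope of a straight-line fit in log-log space. The calibration data below are synthetic.

```python
import numpy as np

# Synthetic calibration data following F = F0 * r**-2.7 with 5% noise.
rng = np.random.default_rng(1)
r = np.linspace(5.0, 50.0, 20)                                    # pole-bead distance, um
F = 80.0 * r**-2.7 * (1 + 0.05 * rng.standard_normal(r.size))     # force, nN

# Power-law exponent = slope of log(F) versus log(r).
slope, intercept = np.polyfit(np.log(r), np.log(F), 1)
print(f"fitted exponent: {slope:.2f}")                            # close to -2.7
```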
Schlecht, Ulrich; Liu, Zhimin; Blundell, Jamie R; St Onge, Robert P; Levy, Sasha F
2017-05-25
Several large-scale efforts have systematically catalogued protein-protein interactions (PPIs) of a cell in a single environment. However, little is known about how the protein interactome changes across environmental perturbations. Current technologies, which assay one PPI at a time, are too low throughput to make it practical to study protein interactome dynamics. Here, we develop a highly parallel protein-protein interaction sequencing (PPiSeq) platform that uses a novel double barcoding system in conjunction with the dihydrofolate reductase protein-fragment complementation assay in Saccharomyces cerevisiae. PPiSeq detects PPIs at a rate that is on par with current assays and, in contrast with current methods, quantitatively scores PPIs with enough accuracy and sensitivity to detect changes across environments. Both PPI scoring and the bulk of strain construction can be performed with cell pools, making the assay scalable and easily reproduced across environments. PPiSeq is therefore a powerful new tool for large-scale investigations of dynamic PPIs.
A high-throughput label-free nanoparticle analyser.
Fraikin, Jean-Luc; Teesalu, Tambet; McKenney, Christopher M; Ruoslahti, Erkki; Cleland, Andrew N
2011-05-01
Synthetic nanoparticles and genetically modified viruses are used in a range of applications, but high-throughput analytical tools for the physical characterization of these objects are needed. Here we present a microfluidic analyser that detects individual nanoparticles and characterizes complex, unlabelled nanoparticle suspensions. We demonstrate the detection, concentration analysis and sizing of individual synthetic nanoparticles in a multicomponent mixture with sufficient throughput to analyse 500,000 particles per second. We also report the rapid size and titre analysis of unlabelled bacteriophage T7 in both salt solution and mouse blood plasma, using just ~1 × 10⁻⁶ l of analyte. Unexpectedly, in the native blood plasma we discover a large background of naturally occurring nanoparticles with a power-law size distribution. The high-throughput detection capability, scalable fabrication and simple electronics of this instrument make it well suited for diverse applications.
2011-01-01
Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol. PMID:21205303
Nanosurveyor: a framework for real-time data processing
Daurer, Benedikt J.; Krishnan, Hari; Perciano, Talita; ...
2017-01-31
Background: The ever-improving brightness of accelerator-based sources is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality. Results: Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments through efficient kernels and load-balanced workflows that are scalable by design. We describe the streamlined processing pipeline for ptychography data analysis. Conclusions: The pipeline provides throughput, compression, and resolution, as well as rapid feedback to the microscope operators.
Kosa, Gergely; Vuoristo, Kiira S; Horn, Svein Jarle; Zimmermann, Boris; Afseth, Nils Kristian; Kohler, Achim; Shapaval, Volha
2018-06-01
Recent developments in molecular biology and metabolic engineering have resulted in a large increase in the number of strains that need to be tested, positioning high-throughput screening of microorganisms as an important step in bioprocess development. Scalability is crucial for performing reliable screening of microorganisms. Most of the scalability studies from microplate screening systems to controlled stirred-tank bioreactors have so far been performed with unicellular microorganisms. We have compared cultivation of industrially relevant oleaginous filamentous fungi and a microalga in a Duetz-microtiter plate system to benchtop and pre-pilot bioreactors. Maximal glucose consumption rate, biomass concentration, lipid content of the biomass, and biomass and lipid yields showed good scalability for the Mucor circinelloides (less than 20% differences) and Mortierella alpina (less than 30% differences) filamentous fungi. Maximal glucose consumption and biomass production rates were identical for Crypthecodinium cohnii in the microtiter plate and the benchtop bioreactor. Most likely due to the shear stress sensitivity of this microalga in a stirred bioreactor, biomass concentration and lipid content of the biomass were significantly higher in the microtiter plate system than in the benchtop bioreactor. Still, fermentation results obtained in the Duetz-microtiter plate system for Crypthecodinium cohnii are encouraging compared to what has been reported in the literature. Good reproducibility (coefficient of variation less than 15% for biomass growth, glucose consumption, lipid content, and pH) was achieved in the Duetz-microtiter plate system for Mucor circinelloides and Crypthecodinium cohnii. Mortierella alpina cultivation reproducibility might be improved with inoculation optimization. In conclusion, we have demonstrated the suitability of the Duetz-microtiter plate system for the reproducible, scalable, and cost-efficient high-throughput screening of oleaginous microorganisms.
D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonkar, Vishwajit L
2014-01-01
Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or not scalable. We developed a protocol for large-scale generation of neuronal stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins, and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.
Ethoscopes: An open platform for high-throughput ethomics.
Geissmann, Quentin; Garcia Rodriguez, Luis; Beckwith, Esteban J; French, Alice S; Jamasb, Arian R; Gilestro, Giorgio F
2017-10-01
Here, we present the use of ethoscopes, which are machines for high-throughput analysis of behavior in Drosophila and other animals. Ethoscopes provide a software and hardware solution that is reproducible and easily scalable. They perform, in real-time, tracking and profiling of behavior by using a supervised machine learning algorithm, are able to deliver behaviorally triggered stimuli to flies in a feedback-loop mode, and are highly customizable and open source. Ethoscopes can be built easily by using 3D printing technology and rely on Raspberry Pi microcomputers and Arduino boards to provide affordable and flexible hardware. All software and construction specifications are available at http://lab.gilest.ro/ethoscope.
Opportunistic data locality for end user data analysis
NASA Astrophysics Data System (ADS)
Fischer, M.; Heidecker, C.; Kuehn, E.; Quast, G.; Giffels, M.; Schnepf, M.; Heiss, A.; Petzold, A.
2017-10-01
With the increasing data volume of LHC Run2, user analyses are evolving towards increasing data throughput. This evolution translates into higher requirements for efficiency and scalability of the underlying analysis infrastructure. We approach this issue with a new middleware to optimise data access: a layer of coordinated caches transparently provides data locality for high-throughput analyses. We demonstrated the feasibility of this approach with a prototype used for analyses of the CMS working groups at KIT. In this paper, we present our experience both with the approach in general and with our prototype in particular.
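The abstract does not describe the cache coordination mechanism; the sketch below only illustrates one generic way a cache layer can provide data locality, by mapping each file to a fixed cache node with consistent hashing so repeated reads land on the same node's local storage. It is not the KIT middleware; node names and parameters are invented.

```python
import hashlib
from bisect import bisect

class CacheRing:
    """Minimal consistent-hash ring: files map to cache nodes so that
    repeated reads of the same file hit the same node's local storage,
    and adding a node only remaps a small fraction of files."""
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, filename):
        i = bisect(self.keys, self._h(filename)) % len(self.ring)
        return self.ring[i][1]

ring = CacheRing(["cache01", "cache02", "cache03"])
print(ring.node_for("/store/cms/run2/file_0007.root"))
```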
NASA Astrophysics Data System (ADS)
Lei, Yuguo; Schaffer, David V.
2013-12-01
Human pluripotent stem cells (hPSCs), including human embryonic stem cells and induced pluripotent stem cells, are promising for numerous biomedical applications, such as cell replacement therapies, tissue and whole-organ engineering, and high-throughput pharmacology and toxicology screening. Each of these applications requires large numbers of cells of high quality; however, the scalable expansion and differentiation of hPSCs, especially for clinical utilization, remains a challenge. We report a simple, defined, efficient, scalable, and good manufacturing practice-compatible 3D culture system for hPSC expansion and differentiation. It employs a thermoresponsive hydrogel that combines easy manipulation and completely defined conditions, free of any human- or animal-derived factors, and entailing only recombinant protein factors. Under an optimized protocol, the 3D system enables long-term, serial expansion of multiple hPSC lines with a high expansion rate (∼20-fold per 5-d passage, for a 10⁷²-fold expansion over 280 d), yield (∼2.0 × 10⁷ cells per mL of hydrogel), and purity (∼95% Oct4+), even with single-cell inoculation, all of which offer considerable advantages relative to current approaches. Moreover, the system enabled 3D directed differentiation of hPSCs into multiple lineages, including dopaminergic neuron progenitors with a yield of ∼8 × 10⁷ dopaminergic progenitors per mL of hydrogel and ∼80-fold expansion by the end of a 15-d derivation. This versatile system may be useful at numerous scales, from basic biological investigation to clinical development.
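As a quick consistency check on the reported figures (simple arithmetic, not from the paper's methods): roughly 20-fold expansion per 5-day passage, carried serially for 280 days, compounds to on the order of 10⁷².

```python
# ~20-fold expansion per 5-day passage, carried serially for 280 days.
passages = 280 / 5                      # 56 passages
fold = 20 ** passages                   # total theoretical expansion
print(f"{fold:.1e}")                    # ~7.2e72, i.e. on the order of 10^72
```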
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Long; Xiong, Yuan; Zhang, Qianqian
The commercialization of nonfullerene organic solar cells (OSCs) relies critically on the response under typical operating conditions (for instance, temperature, humidity) and the ability to scale up. Despite the rapid increase in power conversion efficiency (PCE) of spin-coated devices fabricated in a protective atmosphere, the device efficiencies of printed nonfullerene OSC devices by blade-coating are still lower than 6%. This slow progress significantly limits the practical printing of high-performance nonfullerene OSCs. Here, a new and stable nonfullerene combination was introduced by pairing a commercially available nonfluorinated acceptor IT-M with the polymeric donor FTAZ. Over 12% efficiency can be achieved in spin-coated FTAZ:IT-M devices using a single halogen-free solvent. More importantly, chlorine-free, in-air blade-coating of FTAZ:IT-M is able to yield a PCE of nearly 11%, despite a humidity of ~50%. X-ray scattering results reveal that large π-π coherence lengths, a high degree of face-on orientation with respect to the substrate, and small domain spacings of ~20 nm are closely correlated with such high device performance. Our material system and approach yields the highest reported performance for nonfullerene OSC devices made by a coating technique approximating scalable fabrication methods and holds great promise for the development of low-cost, low-toxicity, and high-efficiency OSCs by high-throughput production.
High-speed zero-copy data transfer for DAQ applications
NASA Astrophysics Data System (ADS)
Pisani, Flavio; Cámpora Pérez, Daniel Hugo; Neufeld, Niko
2015-05-01
The LHCb Data Acquisition (DAQ) will be upgraded in 2020 to a trigger-free readout. In order to achieve this goal we will need to connect around 500 nodes with a total network capacity of 32 Tb/s. To reach such a high network capacity we are testing zero-copy technology in order to maximize the theoretical link throughput without adding excessive CPU and memory bandwidth overhead, leaving resources free for data processing and thus using less power, space, and money for the same result. We develop a modular test application which can be used with different transport layers. For the zero-copy implementation we choose the OFED IBVerbs API because it can provide low-level access and high throughput. We present throughput and CPU usage measurements of 40 GbE solutions using Remote Direct Memory Access (RDMA) for several network configurations to test the scalability of the system.
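For orientation, the stated aggregate capacity works out to roughly 64 Gb/s sustained per node, i.e. more than one 40 GbE link per node. The arithmetic below uses only the 500-node and 32 Tb/s figures from the abstract; everything else is simple division.

```python
total_tbps = 32.0            # aggregate readout capacity, Tb/s
nodes = 500
per_node_gbps = total_tbps * 1000 / nodes
links_40gbe = per_node_gbps / 40
print(f"{per_node_gbps:.0f} Gb/s per node, ~{links_40gbe:.1f}x 40 GbE links")
# -> 64 Gb/s per node, ~1.6x 40 GbE links
```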
2012-01-01
Background: High-throughput methods are widely used for strain screening, effectively resulting in binary information regarding high or low productivity. Nevertheless, achieving quantitative and scalable parameters for fast bioprocess development is much more challenging, especially for heterologous protein production. Here, the nature of the foreign protein makes it impossible to predict, for example, the best expression construct, secretion signal peptide, inducer concentration, induction time, temperature, and substrate feed rate in fed-batch operation, to name only a few. Therefore, a high number of systematic experiments are necessary to elucidate the best conditions for heterologous expression of each new protein of interest. Results: To increase the throughput in bioprocess development, we used a microtiter plate based cultivation system (Biolector) which was fully integrated into a liquid-handling platform enclosed in laminar airflow housing. This automated cultivation platform was used for optimization of the secretory production of a cutinase from Fusarium solani pisi with Corynebacterium glutamicum. The online monitoring of biomass, dissolved oxygen, and pH in each of the microtiter plate wells enables sampling or dosing events to be triggered with the pipetting robot, allowing reliable selection of the best-performing cutinase producers. In addition, further automated methods such as media optimization and induction profiling were developed and validated. All biological and bioprocess parameters were optimized exclusively at microtiter plate scale and showed perfectly scalable results to the 1 L and 20 L stirred tank bioreactor scale. Conclusions: The optimization of heterologous protein expression in microbial systems currently requires extensive testing of biological and bioprocess engineering parameters. This can be efficiently boosted by using a microtiter plate cultivation setup embedded into a liquid-handling system, providing more throughput by parallelization and automation. Due to improved statistics from replicate cultivations, automated downstream analysis, and scalable process information, this setup has superior performance compared to standard microtiter plate cultivation. PMID:23113930
Adenylylation of small RNA sequencing adapters using the TS2126 RNA ligase I.
Lama, Lodoe; Ryan, Kevin
2016-01-01
Many high-throughput small RNA next-generation sequencing protocols use 5' preadenylylated DNA oligonucleotide adapters during cDNA library preparation. Preadenylylation of the DNA adapter's 5' end frees from ATP-dependence the ligation of the adapter to RNA collections, thereby avoiding ATP-dependent side reactions. However, preadenylylation of the DNA adapters can be costly and difficult. The currently available method for chemical adenylylation of DNA adapters is inefficient and uses techniques not typically practiced in laboratories profiling cellular RNA expression. An alternative enzymatic method using a commercial RNA ligase was recently introduced, but this enzyme works best as a stoichiometric adenylylating reagent rather than a catalyst and can therefore prove costly when several variant adapters are needed or during scale-up or high-throughput adenylylation procedures. Here, we describe a simple, scalable, and highly efficient method for the 5' adenylylation of DNA oligonucleotides using the thermostable RNA ligase 1 from bacteriophage TS2126. Adapters with 3' blocking groups are adenylylated at >95% yield at catalytic enzyme-to-adapter ratios and need not be gel purified before ligation to RNA acceptors. Experimental conditions are also reported that enable DNA adapters with free 3' ends to be 5' adenylylated at >90% efficiency. © 2015 Lama and Ryan; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
FPGA cluster for high-performance AO real-time control system
NASA Astrophysics Data System (ADS)
Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.
2006-06-01
Whilst the high throughput and low latency requirements for the next generation AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution offering high throughput and excellent predictability in latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to a more highly integrated system. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
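The two-stage Hough ellipse detector is not reproduced here; the sketch below only illustrates the performance-scalability argument of the abstract, splitting a volume into independent slabs and classifying them in parallel, with a process pool standing in for the COTS PCI multiprocessor cards. The threshold-count "classifier" is a placeholder for the real per-slab analysis.

```python
import numpy as np
from multiprocessing import Pool

def classify_slab(slab):
    """Stand-in per-slab analysis: count voxels above a density threshold.
    A real pipeline would run slice-wise ellipse detection here."""
    return int((slab > 0.8).sum())

def parallel_classify(volume, num_workers=4):
    """Exploit the inherent data parallelism of volumetric data: slabs are
    independent, so throughput scales with the number of workers."""
    slabs = np.array_split(volume, num_workers, axis=0)   # split along z
    with Pool(num_workers) as pool:
        return sum(pool.map(classify_slab, slabs))

if __name__ == "__main__":
    vol = np.random.default_rng(0).random((128, 256, 256))
    print(parallel_classify(vol))
```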
Pulsed photonic fabrication of nanostructured metal oxide thin films
NASA Astrophysics Data System (ADS)
Bourgeois, Briley B.; Luo, Sijun; Riggs, Brian C.; Adireddy, Shiva; Chrisey, Douglas B.
2017-09-01
Nanostructured metal oxide thin films with a large specific surface area are preferable for practical device applications in energy conversion and storage. Herein, we report instantaneous (milliseconds) photonic synthesis of three-dimensional (3-D) nanostructured metal oxide thin films through the pulsed photoinitiated pyrolysis of organometallic precursor films made by chemical solution deposition. Pulsed photonic irradiation with high wall-plug efficiency (xenon flash lamp, pulse width of 1.93 ms, fluence of 7.7 J/cm², and frequency of 1.2 Hz) is used for scalable photonic processing. The photothermal effect of subsequent pulses rapidly improves the crystalline quality of nanocrystalline metal oxide thin films in minutes. The following paper highlights pulsed photonic fabrication of 3-D nanostructured TiO2, Co3O4, and Fe2O3 thin films, exemplifying a promising new method for the low-cost and high-throughput manufacturing of nanostructured metal oxide thin films for energy applications.
Novel method for high-throughput colony PCR screening in nanoliter-reactors
Walser, Marcel; Pellaux, Rene; Meyer, Andreas; Bechtold, Matthias; Vanderschuren, Herve; Reinhardt, Richard; Magyar, Joseph; Panke, Sven; Held, Martin
2009-01-01
We introduce a technology for the rapid identification and sequencing of conserved DNA elements employing a novel suspension array based on nanoliter (nl)-reactors made from alginate. The reactors have a volume of 35 nl and serve as reaction compartments during monoseptic growth of microbial library clones, colony lysis, thermocycling, and screening for sequence motifs via semi-quantitative fluorescence analyses. nl-Reactors were kept in suspension during all high-throughput steps, which allowed the protocol to be performed in a highly space-effective fashion and at negligible expense for consumables and reagents. As a first application, 11 high-quality microsatellites for polymorphism studies in cassava were isolated and sequenced out of a library of 20 000 clones in 2 days. The technology is widely scalable, and we envision that throughputs for nl-reactor based screenings can be increased up to 100 000 and more samples per day, thereby efficiently complementing protocols based on established deep-sequencing technologies. PMID:19282448
Genome sequencing in microfabricated high-density picolitre reactors.
Margulies, Marcel; Egholm, Michael; Altman, William E; Attiya, Said; Bader, Joel S; Bemben, Lisa A; Berka, Jan; Braverman, Michael S; Chen, Yi-Ju; Chen, Zhoutao; Dewell, Scott B; Du, Lei; Fierro, Joseph M; Gomes, Xavier V; Godwin, Brian C; He, Wen; Helgesen, Scott; Ho, Chun Heen; Ho, Chun He; Irzyk, Gerard P; Jando, Szilveszter C; Alenquer, Maria L I; Jarvie, Thomas P; Jirage, Kshama B; Kim, Jong-Bum; Knight, James R; Lanza, Janna R; Leamon, John H; Lefkowitz, Steven M; Lei, Ming; Li, Jing; Lohman, Kenton L; Lu, Hong; Makhijani, Vinod B; McDade, Keith E; McKenna, Michael P; Myers, Eugene W; Nickerson, Elizabeth; Nobile, John R; Plant, Ramona; Puc, Bernard P; Ronan, Michael T; Roth, George T; Sarkis, Gary J; Simons, Jan Fredrik; Simpson, John W; Srinivasan, Maithreyan; Tartaro, Karrie R; Tomasz, Alexander; Vogt, Kari A; Volkmer, Greg A; Wang, Shally H; Wang, Yong; Weiner, Michael P; Yu, Pengguang; Begley, Richard F; Rothberg, Jonathan M
2005-09-15
The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.
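Back-of-envelope only, using the roughly 580 kb size of the M. genitalium genome (a figure not stated in the abstract): 25 million bases per four-hour run corresponds to about 43-fold average depth per run.

```python
bases_per_run = 25e6          # 25 million bases per run (from the abstract)
run_hours = 4
genome_size = 5.8e5           # M. genitalium, ~580 kb (assumed figure)
print(f"{bases_per_run / run_hours / 1e6:.2f} Mb/h raw throughput")
print(f"~{bases_per_run / genome_size:.0f}x average depth per run")
# -> 6.25 Mb/h, ~43x depth
```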
Ion channel drug discovery and research: the automated Nano-Patch-Clamp technology.
Brueggemann, A; George, M; Klau, M; Beckler, M; Steindl, J; Behrends, J C; Fertig, N
2004-01-01
Unlike the genomics revolution, which was largely enabled by a single technological advance (high throughput sequencing), rapid advancement in proteomics will require a broader effort to increase the throughput of a number of key tools for functional analysis of different types of proteins. In the case of ion channels, a class of (membrane) proteins of great physiological importance and potential as drug targets, the lack of adequate assay technologies is felt particularly strongly. The available, indirect, high throughput screening methods for ion channels clearly generate insufficient information. The best technology to study ion channel function and screen for compound interaction is the patch clamp technique, but patch clamping suffers from low throughput, which is not acceptable for drug screening. A first step towards a solution is presented here. The nano patch clamp technology, which is based on a planar, microstructured glass chip, enables automated whole-cell patch clamp measurements. The Port-a-Patch is an automated electrophysiology workstation, which uses planar patch clamp chips. This approach enables high-quality and high-content ion channel and compound evaluation on a one-cell-at-a-time basis. The presented automation of the patch process and its scalability to an array format are the prerequisites for any higher-throughput electrophysiology instruments.
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-03-17
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander
2015-01-01
Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons", enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high-volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots. It enables scientists to store, manage, and share crop root images with metadata and compute RSA traits from thousands of images in parallel. It makes high-throughput RSA trait computation available to the community with just a few button clicks. As such it enables plant scientists to spend more time on science rather than on technology. All stored and computed data is easily accessible to the public and broader scientific community. We hope that easy data accessibility will attract new tool developers and spur creative data usage that may even be applied to other fields of science.
Highly scalable, closed-loop synthesis of drug-loaded, layer-by-layer nanoparticles.
Correa, Santiago; Choi, Ki Young; Dreaden, Erik C; Renggli, Kasper; Shi, Aria; Gu, Li; Shopsowitz, Kevin E; Quadir, Mohiuddin A; Ben-Akiva, Elana; Hammond, Paula T
2016-02-16
Layer-by-layer (LbL) self-assembly is a versatile technique from which multicomponent and stimuli-responsive nanoscale drug carriers can be constructed. Despite the benefits of LbL assembly, the conventional synthetic approach for fabricating LbL nanoparticles requires numerous purification steps that limit scale, yield, efficiency, and potential for clinical translation. In this report, we describe a generalizable method for increasing throughput with LbL assembly by using highly scalable, closed-loop diafiltration to manage intermediate purification steps. This method facilitates highly controlled fabrication of diverse nanoscale LbL formulations smaller than 150 nm composed from solid-polymer, mesoporous silica, and liposomal vesicles. The technique allows for the deposition of a broad range of polyelectrolytes that included native polysaccharides, linear polypeptides, and synthetic polymers. We also explore the cytotoxicity, shelf life and long-term storage of LbL nanoparticles produced using this approach. We find that LbL coated systems can be reliably and rapidly produced: specifically, LbL-modified liposomes could be lyophilized, stored at room temperature, and reconstituted without compromising drug encapsulation or particle stability, thereby facilitating large scale applications. Overall, this report describes an accessible approach that significantly improves the throughput of nanoscale LbL drug-carriers that show low toxicity and are amenable to clinically relevant storage conditions.
Bláha, Benjamin A F; Morris, Stephen A; Ogonah, Olotu W; Maucourant, Sophie; Crescente, Vincenzo; Rosenberg, William; Mukhopadhyay, Tarit K
2018-01-01
The time and cost benefits of miniaturized fermentation platforms can only be gained by employing complementary techniques facilitating high throughput at small sample volumes. Microbial cell disruption is a major bottleneck in experimental throughput and is often restricted to large processing volumes. Moreover, for rigid yeast species, such as Pichia pastoris, no effective high-throughput disruption methods exist. The development of an automated, miniaturized, high-throughput, noncontact, scalable platform based on adaptive focused acoustics (AFA) to disrupt P. pastoris and recover intracellular heterologous protein is described. Augmented modes of AFA were established by investigating vessel designs and a novel enzymatic pretreatment step. Three different modes of AFA were studied and compared to the performance of high-pressure homogenization. For each of these modes of cell disruption, response models were developed to account for five different performance criteria. Using multiple responses not only demonstrated that different operating parameters are required for different response optima, with the highest product purity requiring suboptimal values for other criteria, but also allowed AFA-based methods to mimic large-scale homogenization processes. These results demonstrate that AFA-mediated cell disruption can be used for a wide range of applications including buffer development, strain selection, fermentation process development, and whole bioprocess integration. © 2017 American Institute of Chemical Engineers. Biotechnol. Prog., 34:130-140, 2018.
The Nano-Patch-Clamp Array: Microfabricated Glass Chips for High-Throughput Electrophysiology
NASA Astrophysics Data System (ADS)
Fertig, Niels
2003-03-01
Electrophysiology (i.e. patch clamping) remains the gold standard for pharmacological testing of putative ion channel active drugs (ICADs), but suffers from low throughput. A new ion channel screening technology based on microfabricated glass chip devices will be presented. The glass chips contain very fine apertures, which are used for whole-cell voltage clamp recordings as well as single channel recordings from mammalian cell lines. Chips containing multiple patch clamp wells will be used in a first bench-top device, which will allow perfusion and electrical readout of each well. This scalable technology will allow for automated, rapid and parallel screening on ion channel drug targets.
2011-06-01
4. Conclusion: The Web-based AGeS system described in this paper is a computationally efficient and scalable system for high-throughput genome... method for protecting web services involves making them more resilient to attack using autonomic computing techniques. This paper presents our initial... (2011 DoD High Performance Computing Modernization Program Users Group Conference, HPCMP UGC 2011)
Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.
Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen
2017-07-04
IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT); it first estimates the packet transmission interval of each station based only on packet transmission information obtained by the access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can greatly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks.
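The abstract above describes two core steps: estimating each station's transmission interval from the packets the AP observed during the last beacon interval, and then assigning stations to RAW slots. The following Python sketch illustrates that general idea only; the function names, the greedy slot-packing rule and all parameters are assumptions for illustration, not the published TAROA procedure.

```python
from collections import defaultdict

def estimate_intervals(rx_log, beacon_interval):
    """Roughly estimate each station's packet transmission interval from the
    timestamps the AP observed during the last beacon interval."""
    per_station = defaultdict(list)
    for timestamp, station_id in rx_log:
        per_station[station_id].append(timestamp)
    intervals = {}
    for station_id, times in per_station.items():
        times.sort()
        if len(times) >= 2:
            gaps = [b - a for a, b in zip(times, times[1:])]
            intervals[station_id] = sum(gaps) / len(gaps)
        else:
            # Only one packet seen: assume one packet per beacon interval.
            intervals[station_id] = beacon_interval
    return intervals

def assign_raw_slots(intervals, num_slots):
    """Greedy assignment: stations expected to transmit more often
    (shorter interval) are spread over the least-loaded slots first."""
    slots = [[] for _ in range(num_slots)]
    load = [0.0] * num_slots          # expected packets per slot
    for station_id in sorted(intervals, key=intervals.get):
        target = load.index(min(load))
        slots[target].append(station_id)
        load[target] += 1.0 / intervals[station_id]
    return slots

# Toy usage: (timestamp in ms, station id) pairs seen during one beacon interval.
log = [(10, "s1"), (60, "s1"), (110, "s1"), (40, "s2"), (90, "s3")]
print(assign_raw_slots(estimate_intervals(log, beacon_interval=100), num_slots=2))
```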
A scalable silicon photonic chip-scale optical switch for high performance computing systems.
Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B
2013-12-30
This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for a scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and the wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy-efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
Bunyak, Filiz; Palaniappan, Kannappan; Chagin, Vadim; Cardoso, M
2009-01-01
Fluorescently tagged proteins such as GFP-PCNA produce rich, dynamically varying textural patterns of foci distributed in the nucleus. This enables the behavioral study of sub-cellular structures during different phases of the cell cycle. The varying punctate patterns of fluorescence, drastic changes in SNR, shape and position during mitosis, and abundance of touching cells, however, require more sophisticated algorithms for reliable automatic cell segmentation and lineage analysis. Since the cell nuclei are non-uniform in appearance, a distribution-based modeling of foreground classes is essential. The recently proposed graph partitioning active contours (GPAC) algorithm supports region descriptors and flexible distance metrics. We extend GPAC for fluorescence-based cell segmentation using regional density functions and dramatically improve its efficiency for segmentation from O(N^4) to O(N^2), for an image with N^2 pixels, making it practical and scalable for high throughput microscopy imaging studies.
Optically enhanced acoustophoresis
NASA Astrophysics Data System (ADS)
McDougall, Craig; O'Mahoney, Paul; McGuinn, Alan; Willoughby, Nicholas A.; Qiu, Yongqiang; Demore, Christine E. M.; MacDonald, Michael P.
2017-08-01
Regenerative medicine has the capability to revolutionise many aspects of medical care, but for it to make the step from small-scale autologous treatments to larger-scale allogeneic approaches, robust and scalable label-free cell sorting technologies are needed as part of a cell therapy bioprocessing pipeline. In this proceedings paper we describe several strategies for addressing the requirement for high throughput without labeling: dimensional scaling, rare-species targeting and sorting from a stable state. These three approaches are demonstrated through a combination of optical and ultrasonic forces. By combining mostly conservative and non-conservative forces from two different modalities it is possible to reduce the influence of flow velocity on sorting efficiency, hence increasing robustness and scalability. One such approach can be termed "optically enhanced acoustophoresis", which combines the ability of acoustics to handle large volumes of analyte with the high specificity of optical sorting.
A High-Speed Design of Montgomery Multiplier
NASA Astrophysics Data System (ADS)
Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi
With the increase of key length used in public cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication becomes a bottleneck. This paper proposes a high speed design of Montgomery multiplier. Firstly, a modified scalable high-radix Montgomery algorithm is proposed to reduce critical path. Secondly, a high-radix clock-saving dataflow is proposed to support high-radix operation and one clock cycle delay in dataflow. Finally, a hardware-reused architecture is proposed to reduce the hardware cost and a parallel radix-16 design of data path is proposed to accelerate the speed. By using HHNEC 0.25μm standard cell library, the implementation results show that the total cost of Montgomery multiplier is 130 KGates, the clock frequency is 180MHz and the throughput of 1024-bit RSA encryption is 352kbps. This design is suitable to be used in high speed RSA or ECC encryption/decryption. As a scalable design, it supports any key-length encryption/decryption up to the size of on-chip memory.
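As background for the operation being accelerated, here is a minimal Python sketch of Montgomery reduction (REDC); the hardware described above performs the same arithmetic in a high-radix, pipelined datapath, so this is only a functional reference, not the proposed architecture. The modulus and operand values are arbitrary choices for the example.

```python
def montgomery_setup(n, word_bits):
    """Precompute R = 2^word_bits > n and n' = -n^{-1} mod R for an odd modulus n."""
    r = 1 << word_bits
    n_prime = (-pow(n, -1, r)) % r    # modular inverse via pow() needs Python 3.8+
    return r, n_prime

def mont_mul(a_bar, b_bar, n, r, n_prime):
    """Compute a_bar * b_bar * R^{-1} mod n (the Montgomery product)."""
    t = a_bar * b_bar
    m = (t * n_prime) % r
    u = (t + m * n) >> (r.bit_length() - 1)   # exact division by R
    return u - n if u >= n else u

# Check against plain modular multiplication for a small odd modulus.
n = 0xF123456789ABCDEF1                       # arbitrary odd modulus (assumption)
r, n_prime = montgomery_setup(n, word_bits=n.bit_length() + 1)
a, b = 1234567890123, 9876543210987
a_bar, b_bar = (a * r) % n, (b * r) % n       # map into the Montgomery domain
res = (mont_mul(a_bar, b_bar, n, r, n_prime) * pow(r, -1, n)) % n  # map back out
assert res == (a * b) % n
print(hex(res))
```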
Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu
2012-06-13
Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.
Fast multiclonal clusterization of V(D)J recombinations from high-throughput sequencing.
Giraud, Mathieu; Salson, Mikaël; Duez, Marc; Villenet, Céline; Quief, Sabine; Caillault, Aurélie; Grardel, Nathalie; Roumier, Christophe; Preudhomme, Claude; Figeac, Martin
2014-05-28
V(D)J recombinations in lymphocytes are essential for immunological diversity. They are also useful markers of pathologies. In leukemia, they are used to quantify the minimal residual disease during patient follow-up. However, the full breadth of lymphocyte diversity is not fully understood. We propose new algorithms that process high-throughput sequencing (HTS) data to extract unnamed V(D)J junctions and gather them into clones for quantification. This analysis is based on a seed heuristic and is fast and scalable because in the first phase, no alignment is performed with germline database sequences. The algorithms were applied to TR γ HTS data from a patient with acute lymphoblastic leukemia, and also on data simulating hypermutations. Our methods identified the main clone, as well as additional clones that were not identified with standard protocols. The proposed algorithms provide new insight into the analysis of high-throughput sequencing data for leukemia, and also to the quantitative assessment of any immunological profile. The methods described here are implemented in a C++ open-source program called Vidjil.
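To make the alignment-free first phase concrete, the toy sketch below clusters reads that share an identical fixed-length "window" around a seed k-mer; the seed, window length and reads are invented for illustration, and this is not the Vidjil implementation itself.

```python
from collections import Counter

def junction_window(read, seed, window=16):
    """Return a fixed-length window around the first occurrence of a seed k-mer
    (a stand-in for a detected junction anchor), or None if the seed is absent."""
    i = read.find(seed)
    if i < 0:
        return None
    start = max(0, i - window // 2)
    return read[start:start + window]

def cluster_reads(reads, seed, window=16):
    """Group reads sharing an identical junction window and count them,
    giving a rough clone abundance table without any germline alignment."""
    clones = Counter()
    for read in reads:
        w = junction_window(read, seed, window)
        if w is not None and len(w) == window:
            clones[w] += 1
    return clones.most_common()

reads = [
    "ACGTTGCACGTGATTACAGGGTTTCCA",
    "ACGTTGCACGTGATTACAGGGTTTCCA",
    "ACGTTGCACGTGAATACAGGGTTTCCA",  # one mismatch inside the window -> separate toy clone
]
print(cluster_reads(reads, seed="CACGTG", window=16))
```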
From Lab to Fab: Developing a Nanoscale Delivery Tool for Scalable Nanomanufacturing
NASA Astrophysics Data System (ADS)
Safi, Asmahan A.
The emergence of nanomaterials with unique properties at the nanoscale over the past two decades carries a capacity to impact society and transform or create new industries ranging from nanoelectronics to nanomedicine. However, a gap in nanomanufacturing technologies has prevented the translation of nanomaterials into real-world commercialized products. Bridging this gap requires a paradigm shift in methods for fabricating structured devices with nanoscale resolution in a repeatable fashion. This thesis explores new paradigms for fabricating nanoscale structures, devices and systems for high-throughput, high-registration applications. We present a robust and scalable nanoscale delivery platform, the Nanofountain Probe (NFP), for parallel direct-write of functional materials. The design and microfabrication of the NFP are presented. The new generation addresses the challenges of throughput, resolution and ink replenishment that characterize tip-based nanomanufacturing. To achieve these goals, an optimized probe geometry is integrated into the process along with channel sealing and cantilever bending. The capabilities of the newly fabricated probes are demonstrated through two types of delivery: protein nanopatterning and single-cell nanoinjection. The broad applications of the NFP for single-cell delivery are investigated. An external microfluidic packaging is developed to enable delivery in a liquid environment. The system is integrated with a combined atomic force microscope and inverted fluorescence microscope. Intracellular delivery is demonstrated by injecting a fluorescent dextran into HeLa cells in vitro while monitoring the injection forces. Such developments enable in vitro cellular delivery for single-cell studies and high-throughput gene expression. The nanomanufacturing capabilities of NFPs are explored. Nanofabrication of carbon nanotube-based electronics presents all the manufacturing challenges characteristic of assembling nanomaterials precisely onto devices. The presented study combines top-down and bottom-up approaches by integrating catalyst patterning and carbon nanotube growth directly on structures. Large arrays of iron-rich catalyst are patterned on a substrate for subsequent carbon nanotube synthesis. The dependence on probe geometry and substrate wetting is assessed by modeling and experimental studies. Finally, preliminary results on the synthesis of carbon nanotubes by catalyst-assisted chemical vapor deposition suggest that increasing the catalyst yield is critical. Such work will enable high-throughput nanomanufacturing of carbon nanotube-based devices.
OLEDs for lighting: new approaches
NASA Astrophysics Data System (ADS)
Duggal, Anil R.; Foust, Donald F.; Nealon, William F.; Heller, Christian M.
2004-02-01
OLED technology has improved to the point where it is now possible to envision developing OLEDs as a low cost solid state light source. In order to realize this, significant advances have to be made in device efficiency, lifetime at high brightness, high throughput fabrication, and the generation of illumination quality white light. In this talk, the requirements for general lighting will be reviewed and various approaches to meeting them will be outlined. Emphasis will be placed on a new monolithic series-connected OLED design architecture that promises scalability without high fabrication cost or design complexity.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
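The map/reduce decomposition that such a search engine relies on can be shown with a minimal, Hadoop-free skeleton: the map step scores every (spectrum, candidate peptide) pair and the reduce step keeps the best match per spectrum. The scorer below is a placeholder, not the K-score, and all names and data are assumptions.

```python
from collections import defaultdict

def map_phase(spectra, peptides, score):
    """Emit (spectrum_id, (score, peptide)) pairs, one per candidate match."""
    for spec_id, spectrum in spectra.items():
        for peptide in peptides:
            yield spec_id, (score(spectrum, peptide), peptide)

def reduce_phase(pairs):
    """Group by spectrum id and keep the highest-scoring peptide."""
    best = defaultdict(lambda: (float("-inf"), None))
    for spec_id, scored in pairs:
        if scored > best[spec_id]:
            best[spec_id] = scored
    return dict(best)

# Toy stand-in for a real spectral scorer (NOT the K-score):
# count shared "peaks", here simply shared characters.
toy_score = lambda spectrum, peptide: len(set(spectrum) & set(peptide))

spectra = {"scan_1": "PEPTIDEK", "scan_2": "GLYCAN"}
peptides = ["PEPTIDEK", "PROTEIN", "GLYCINE"]
print(reduce_phase(map_phase(spectra, peptides, toy_score)))
```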
Parkison, Steven A.; Carlson, Jay D.; Chaudoin, Tammy R.; Hoke, Traci A.; Schenk, A. Katrin; Goulding, Evan H.; Pérez, Lance C.; Bonasera, Stephen J.
2016-01-01
Inexpensive, high-throughput, low maintenance systems for precise temporal and spatial measurement of mouse home cage behavior (including movement, feeding, and drinking) are required to evaluate products from large scale pharmaceutical design and genetic lesion programs. These measurements are also required to interpret results from more focused behavioral assays. We describe the design and validation of a highly-scalable, reliable mouse home cage behavioral monitoring system modeled on a previously described, one-of-a-kind system [1]. Mouse position was determined by solving static equilibrium equations describing the force and torques acting on the system strain gauges; feeding events were detected by a photobeam across the food hopper, and drinking events were detected by a capacitive lick sensor. Validation studies show excellent agreement between mouse position and drinking events measured by the system compared with video-based observation – a gold standard in neuroscience. PMID:23366406
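The position calculation mentioned above follows from static force and moment balance on the cage floor; a minimal version of that calculation, assuming four load cells at known corner coordinates and readings already tared for the empty cage, is sketched below.

```python
def mouse_position(forces, sensor_xy):
    """Estimate the (x, y) position of a point load (the mouse) on a rigid
    platform from tared vertical forces at known sensor locations.

    Force balance:   W = sum(F_i)
    Moment balance:  W * x = sum(F_i * x_i),  W * y = sum(F_i * y_i)
    """
    weight = sum(forces)
    if weight <= 0:
        raise ValueError("no load detected")
    x = sum(f * xy[0] for f, xy in zip(forces, sensor_xy)) / weight
    y = sum(f * xy[1] for f, xy in zip(forces, sensor_xy)) / weight
    return x, y, weight

# Four corner sensors of a 30 cm x 20 cm cage floor (assumed geometry).
corners = [(0.0, 0.0), (30.0, 0.0), (30.0, 20.0), (0.0, 20.0)]
# Tared readings in gram-force; the heaviest reading is near the (30, 20) corner.
readings = [3.0, 6.0, 15.0, 6.0]
print(mouse_position(readings, corners))  # -> (21.0, 14.0, 30.0): position in cm, weight
```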
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.
Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.
OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid
Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.
2016-01-01
High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617
Demonstration of nanoimprinted hyperlens array for high-throughput sub-diffraction imaging
NASA Astrophysics Data System (ADS)
Byun, Minsueop; Lee, Dasol; Kim, Minkyung; Kim, Yangdoo; Kim, Kwan; Ok, Jong G.; Rho, Junsuk; Lee, Heon
2017-04-01
Overcoming the resolution limit of conventional optics is regarded as the most important issue in optical imaging science and technology. Although hyperlenses, super-resolution imaging devices based on highly anisotropic dispersion relations that allow access to high-wavevector components, have recently achieved far-field sub-diffraction imaging in real time, the previously demonstrated devices have suffered from extreme difficulties in both the fabrication process and the placement of non-artificial objects. This places restrictions on the practical applications of hyperlens devices. While implementing large-scale hyperlens arrays in conventional microscopy is desirable to solve such issues, it has not been feasible to fabricate such large-scale hyperlens arrays with the previously used nanofabrication methods. Here, we suggest a scalable and reliable fabrication process for a large-scale hyperlens device based on direct pattern transfer techniques. We fabricate a 5 cm × 5 cm hyperlens array and experimentally demonstrate that it can resolve sub-diffraction features down to 160 nm under 410 nm wavelength visible light. The array-based hyperlens device will provide a simple solution for much more practical far-field and real-time super-resolution imaging, which can be widely used in optics, biology, medical science, nanotechnology and other closely related interdisciplinary fields.
Crescentini, Marco; Thei, Frederico; Bennati, Marco; Saha, Shimul; de Planque, Maurits R R; Morgan, Hywel; Tartagni, Marco
2015-06-01
Lipid bilayer membrane (BLM) arrays are required for high throughput analysis, for example drug screening or advanced DNA sequencing. Complex microfluidic devices are being developed but these are restricted in terms of array size and structure or have integrated electronic sensing with limited noise performance. We present a compact and scalable multichannel electrophysiology platform based on a hybrid approach that combines integrated state-of-the-art microelectronics with low-cost disposable fluidics providing a platform for high-quality parallel single ion channel recording. Specifically, we have developed a new integrated circuit amplifier based on a novel noise cancellation scheme that eliminates flicker noise derived from devices under test and amplifiers. The system is demonstrated through the simultaneous recording of ion channel activity from eight bilayer membranes. The platform is scalable and could be extended to much larger array sizes, limited only by electronic data decimation and communication capabilities.
NASA Astrophysics Data System (ADS)
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-12-01
Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface, doping and metal-mixture and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.
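A screening flow of the kind proposed in this review amounts to passing a candidate list through successive property filters; the sketch below shows only that filtering pattern, with hypothetical candidate data and threshold values chosen purely for illustration.

```python
# Hypothetical candidate cathode materials with pre-computed bulk descriptors.
candidates = [
    {"id": "A", "capacity_mAh_g": 180, "voltage_V": 3.9, "volume_change_pct": 3.5},
    {"id": "B", "capacity_mAh_g": 250, "voltage_V": 4.6, "volume_change_pct": 9.0},
    {"id": "C", "capacity_mAh_g": 150, "voltage_V": 3.4, "volume_change_pct": 1.8},
]

# Ordered screening criteria: (descriptor, predicate). All thresholds are assumptions.
criteria = [
    ("capacity_mAh_g",    lambda v: v >= 160),          # enough capacity
    ("voltage_V",         lambda v: 3.0 <= v <= 4.5),   # within an assumed electrolyte window
    ("volume_change_pct", lambda v: v <= 5.0),          # structural-stability proxy
]

def screen(materials, criteria):
    """Apply each (property, predicate) filter in turn, mimicking a
    funnel-style high-throughput screening flow."""
    survivors = list(materials)
    for prop, keep in criteria:
        survivors = [m for m in survivors if keep(m[prop])]
        print(f"after {prop}: {[m['id'] for m in survivors]}")
    return survivors

screen(candidates, criteria)  # only "A" passes all three toy filters
```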
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-01-01
Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface, doping and metal-mixture and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.
High throughput and miniaturised systems for biodegradability assessments.
Cregut, Mickael; Jouanneau, Sulivan; Brillet, François; Durand, Marie-José; Sweetlove, Cyril; Chenèble, Jean-Charles; L'Haridon, Jacques; Thouand, Gérald
2014-01-01
Society demands safer products with a better ecological profile. Regulatory criteria have been developed to prevent risks for human health and the environment, for example within the framework of the European regulation REACH (Regulation (EC) No 1907, 2006). This has driven industry to consider the development of high throughput screening methodologies for assessing chemical biodegradability. These new screening methodologies must be scalable for miniaturisation, reproducible and as reliable as existing procedures for enhanced biodegradability assessment. Here, we evaluate two alternative systems that can be scaled for high throughput screening and conveniently miniaturised to limit costs in comparison with traditional testing. These systems are based on two dyes: an invasive fluorescent dye that serves as a cellular activity marker (a resazurin-like dye reagent) and a noninvasive fluorescent oxygen optosensor dye (an optical sensor). The advantages and limitations of these platforms for biodegradability assessment are presented. Our results confirm the feasibility of these systems for evaluating and screening chemicals for ready biodegradability. The optosensor is a miniaturised version of a component already used in traditional ready biodegradability testing, whereas the resazurin dye offers an interesting new screening mechanism for chemical concentrations greater than 10 mg/l that are not amenable to traditional closed bottle tests. The use of these approaches allows generalisation of high throughput screening methodologies to meet the need of developing new compounds with a favourable ecological profile and also assessment for regulatory purposes.
Using ALFA for high throughput, distributed data transmission in the ALICE O2 system
NASA Astrophysics Data System (ADS)
Wegrzynek, A.;
2017-10-01
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter (the Quark-Gluon Plasma) at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data in Run 2 since spring 2015. In parallel, preparations for a major upgrade of the computing system, called O2 (Online-Offline), scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements of the system is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), which perform data aggregation, frame building and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. The ALFA framework will standardize and handle software-related tasks such as readout, data transport, frame building, calibration, online reconstruction and more in the upgraded computing system. ALFA supports two data transport libraries: ZeroMQ and nanomsg. This paper discusses the efficiency of ALFA in terms of high throughput data transport. The tests were performed with multiple FLPs pushing data to multiple EPNs. The transfer was done using push-pull communication patterns and two socket configurations: bind and connect. The set of benchmarks was prepared to get the most performant results on each hardware setup. The paper presents the measurement process and final results: data throughput combined with computing resource usage as a function of block size. The high number of nodes and connections in the final setup may cause race conditions that can lead to uneven load balancing and poor scalability. The performed tests allow us to validate whether the traffic is distributed evenly over all receivers. They also measure the behaviour of the network in saturation and evaluate scalability from a 1-to-1 to an N-to-M solution.
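The push-pull pattern benchmarked here can be reproduced in miniature with pyzmq; the snippet below is a single-host toy (one FLP-like pusher and one EPN-like puller over localhost) rather than the ALFA benchmark code, and the endpoint, block size and message count are arbitrary.

```python
import threading
import zmq

BLOCK = b"x" * 1024                  # toy 1 KiB data block (arbitrary size)
N_BLOCKS = 100
ENDPOINT = "tcp://127.0.0.1:5571"    # arbitrary local port

def pusher(ctx):
    """Plays the role of an FLP: binds a PUSH socket and streams blocks."""
    sock = ctx.socket(zmq.PUSH)
    sock.bind(ENDPOINT)
    for _ in range(N_BLOCKS):
        sock.send(BLOCK)
    sock.close()

def puller(ctx, received):
    """Plays the role of an EPN: connects a PULL socket and drains blocks."""
    sock = ctx.socket(zmq.PULL)
    sock.connect(ENDPOINT)
    for _ in range(N_BLOCKS):
        received.append(sock.recv())
    sock.close()

ctx = zmq.Context()
got = []
t = threading.Thread(target=puller, args=(ctx, got))
t.start()
pusher(ctx)      # send from the main thread while the puller drains
t.join()
ctx.term()
print(f"received {len(got)} blocks of {len(BLOCK)} bytes")
```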
Castedo, Luis
2017-01-01
Fog computing extends cloud computing to the edge of a network enabling new Internet of Things (IoT) applications and services, which may involve critical data that require privacy and security. In an IoT fog computing system, three elements can be distinguished: IoT nodes that collect data, the cloud, and interconnected IoT gateways that exchange messages with the IoT nodes and with the cloud. This article focuses on securing IoT gateways, which are assumed to be constrained in terms of computational resources, but that are able to offload some processing from the cloud and to reduce the latency in the responses to the IoT nodes. However, it is usually taken for granted that IoT gateways have direct access to the electrical grid, which is not always the case: in mission-critical applications like natural disaster relief or environmental monitoring, it is common to deploy IoT nodes and gateways in large areas where electricity comes from solar or wind energy that charge the batteries that power every device. In this article, how to secure IoT gateway communications while minimizing power consumption is analyzed. The throughput and power consumption of Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC) are considered, since they are really popular, but have not been thoroughly analyzed when applied to IoT scenarios. Moreover, the most widespread Transport Layer Security (TLS) cipher suites use RSA as the main public key-exchange algorithm, but the key sizes needed are not practical for most IoT devices and cannot be scaled to high security levels. In contrast, ECC represents a much lighter and scalable alternative. Thus, RSA and ECC are compared for equivalent security levels, and power consumption and data throughput are measured using a testbed of IoT gateways. The measurements obtained indicate that, in the specific fog computing scenario proposed, ECC is clearly a much better alternative than RSA, obtaining energy consumption reductions of up to 50% and a data throughput that doubles RSA in most scenarios. These conclusions are then corroborated by a frame temporal analysis of Ethernet packets. In addition, current data compression algorithms are evaluated, concluding that, when dealing with the small payloads related to IoT applications, they do not pay off in terms of real data throughput and power consumption. PMID:28850104
Suárez-Albela, Manuel; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Castedo, Luis
2017-08-29
Fog computing extends cloud computing to the edge of a network enabling new Internet of Things (IoT) applications and services, which may involve critical data that require privacy and security. In an IoT fog computing system, three elements can be distinguished: IoT nodes that collect data, the cloud, and interconnected IoT gateways that exchange messages with the IoT nodes and with the cloud. This article focuses on securing IoT gateways, which are assumed to be constrained in terms of computational resources, but that are able to offload some processing from the cloud and to reduce the latency in the responses to the IoT nodes. However, it is usually taken for granted that IoT gateways have direct access to the electrical grid, which is not always the case: in mission-critical applications like natural disaster relief or environmental monitoring, it is common to deploy IoT nodes and gateways in large areas where electricity comes from solar or wind energy that charge the batteries that power every device. In this article, how to secure IoT gateway communications while minimizing power consumption is analyzed. The throughput and power consumption of Rivest-Shamir-Adleman (RSA) and Elliptic Curve Cryptography (ECC) are considered, since they are really popular, but have not been thoroughly analyzed when applied to IoT scenarios. Moreover, the most widespread Transport Layer Security (TLS) cipher suites use RSA as the main public key-exchange algorithm, but the key sizes needed are not practical for most IoT devices and cannot be scaled to high security levels. In contrast, ECC represents a much lighter and scalable alternative. Thus, RSA and ECC are compared for equivalent security levels, and power consumption and data throughput are measured using a testbed of IoT gateways. The measurements obtained indicate that, in the specific fog computing scenario proposed, ECC is clearly a much better alternative than RSA, obtaining energy consumption reductions of up to 50% and a data throughput that doubles RSA in most scenarios. These conclusions are then corroborated by a frame temporal analysis of Ethernet packets. In addition, current data compression algorithms are evaluated, concluding that, when dealing with the small payloads related to IoT applications, they do not pay off in terms of real data throughput and power consumption.
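A very small version of the RSA-versus-ECC signing comparison can be run with the Python cryptography package at roughly comparable security levels (RSA-3072 vs. the P-256 curve); this is a toy micro-benchmark under assumed parameters, not the gateway testbed measurement reported above.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

MESSAGE = b"sensor reading: 23.5 C"   # toy IoT-sized payload (assumption)
ROUNDS = 50

def bench(label, sign):
    """Time repeated signing and report signatures per second."""
    start = time.perf_counter()
    for _ in range(ROUNDS):
        sign(MESSAGE)
    elapsed = time.perf_counter() - start
    print(f"{label}: {ROUNDS / elapsed:.1f} signatures/s")

# RSA-3072 and ECC P-256 offer roughly comparable (~128-bit) security.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
ec_key = ec.generate_private_key(ec.SECP256R1())

bench("RSA-3072", lambda m: rsa_key.sign(m, padding.PKCS1v15(), hashes.SHA256()))
bench("ECC P-256", lambda m: ec_key.sign(m, ec.ECDSA(hashes.SHA256())))
```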
Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M
2003-01-01
Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing.
Castle, John; Garrett-Engele, Phil; Armour, Christopher D; Duenwald, Sven J; Loerch, Patrick M; Meyer, Michael R; Schadt, Eric E; Stoughton, Roland; Parrish, Mark L; Shoemaker, Daniel D; Johnson, Jason M
2003-01-01
Microarrays offer a high-resolution means for monitoring pre-mRNA splicing on a genomic scale. We have developed a novel, unbiased amplification protocol that permits labeling of entire transcripts. Also, hybridization conditions, probe characteristics, and analysis algorithms were optimized for detection of exons, exon-intron edges, and exon junctions. These optimized protocols can be used to detect small variations and isoform mixtures, map the tissue specificity of known human alternative isoforms, and provide a robust, scalable platform for high-throughput discovery of alternative splicing. PMID:14519201
A CRISPR Cas9 high-throughput genome editing toolkit for kinetoplastids
Beneke, Tom; Makin, Laura; Valli, Jessica; Sunter, Jack
2017-01-01
Clustered regularly interspaced short palindromic repeats (CRISPR), CRISPR-associated gene 9 (Cas9) genome editing is set to revolutionize genetic manipulation of pathogens, including kinetoplastids. CRISPR technology provides the opportunity to develop scalable methods for high-throughput production of mutant phenotypes. Here, we report development of a CRISPR-Cas9 toolkit that allows rapid tagging and gene knockout in diverse kinetoplastid species without requiring the user to perform any DNA cloning. We developed a new protocol for single-guide RNA (sgRNA) delivery using PCR-generated DNA templates which are transcribed in vivo by T7 RNA polymerase and an online resource (LeishGEdit.net) for automated primer design. We produced a set of plasmids that allows easy and scalable generation of DNA constructs for transfections in just a few hours. We show how these tools allow knock-in of fluorescent protein tags, modified biotin ligase BirA*, luciferase, HaloTag and small epitope tags, which can be fused to proteins at the N- or C-terminus, for functional studies of proteins and localization screening. These tools enabled generation of null mutants in a single round of transfection in promastigote form Leishmania major, Leishmania mexicana and bloodstream form Trypanosoma brucei; deleted genes were undetectable in non-clonal populations, enabling for the first time rapid and large-scale knockout screens. PMID:28573017
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low-density parity-check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines in detail four different multiplier architectures that offer the potential for very high throughput. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
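For context on the operation the compared architectures implement, the snippet below shows textbook shift-and-reduce multiplication in GF(2^8) using one commonly used irreducible polynomial; the FPGA designs evaluated above realize the same arithmetic with very different area, frequency and throughput trade-offs.

```python
def gf256_mul(a, b, poly=0x11D):
    """Carry-less multiply a*b in GF(2^8), reducing modulo the irreducible
    polynomial 'poly' (here x^8 + x^4 + x^3 + x^2 + 1, a common choice for
    Reed-Solomon codes; other standards use other polynomials)."""
    result = 0
    while b:
        if b & 1:                 # add (XOR) a shifted copy of a for each set bit of b
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:             # degree reached 8: reduce by the field polynomial
            a ^= poly
    return result

# Quick sanity checks: 1 is the multiplicative identity, and the operation
# is commutative.
assert gf256_mul(1, 0x57) == 0x57
assert gf256_mul(0x57, 0x83) == gf256_mul(0x83, 0x57)
print(hex(gf256_mul(0x57, 0x83)))
```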
Gogoshin, Grigoriy; Boerwinkle, Eric
2017-01-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on less than exotic computer hardware are a priority, as well as the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc. PMID:27681505
Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S
2017-04-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on less than exotic computer hardware are a priority, as well as the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.
Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics.
Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Jaung, Jae Yun; Kim, Yong-Hoon; Park, Sung Kyu
2015-09-28
The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processes, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of these soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, the highly energetic photons (e.g. deep-ultraviolet rays) enable direct photo-conversion from a conducting/semiconducting to an insulating state through molecular dissociation and disordering with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry promises that these results will facilitate industrial adoption of soft materials for next-generation electronics.
Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics
NASA Astrophysics Data System (ADS)
Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Yun Jaung, Jae; Kim, Yong-Hoon; Kyu Park, Sung
2015-09-01
The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processes, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of these soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, the highly energetic photons (e.g. deep-ultraviolet rays) enable direct photo-conversion from a conducting/semiconducting to an insulating state through molecular dissociation and disordering with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry promises that these results will facilitate industrial adoption of soft materials for next-generation electronics.
High-Throughput Fabrication of Flexible and Transparent All-Carbon Nanotube Electronics.
Chen, Yong-Yang; Sun, Yun; Zhu, Qian-Bing; Wang, Bing-Wei; Yan, Xin; Qiu, Song; Li, Qing-Wen; Hou, Peng-Xiang; Liu, Chang; Sun, Dong-Ming; Cheng, Hui-Ming
2018-05-01
This study reports a simple and effective technique for the high-throughput fabrication of flexible all-carbon nanotube (CNT) electronics using a photosensitive dry film instead of traditional liquid photoresists. A 10 in. sized photosensitive dry film is laminated onto a flexible substrate by a roll-to-roll technology, and a 5 µm pattern resolution of the resulting CNT films is achieved for the construction of flexible and transparent all-CNT thin-film transistors (TFTs) and integrated circuits. The fabricated TFTs exhibit a desirable electrical performance including an on-off current ratio of more than 10^5, a carrier mobility of 33 cm^2 V^-1 s^-1, and a small hysteresis. The standard deviations of on-current and mobility are, respectively, 5% and 2% of the average value, demonstrating the excellent reproducibility and uniformity of the devices, which allows constructing a large noise margin inverter circuit with a voltage gain of 30. This study indicates that a photosensitive dry film is very promising for the low-cost, fast, reliable, and scalable fabrication of flexible and transparent CNT-based integrated circuits, and opens up opportunities for future high-throughput CNT-based printed electronics.
Systematic Identification of Combinatorial Drivers and Targets in Cancer Cell Lines
Tabchy, Adel; Eltonsy, Nevine; Housman, David E.; Mills, Gordon B.
2013-01-01
There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance. PMID:23577104
Systematic identification of combinatorial drivers and targets in cancer cell lines.
Tabchy, Adel; Eltonsy, Nevine; Housman, David E; Mills, Gordon B
2013-01-01
There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance.
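Co-occurrence or mutual exclusivity of two genomic events across cell lines is commonly assessed with a contingency-table test; the short sketch below applies Fisher's exact test from SciPy to made-up binary event calls, as a generic illustration rather than the platform's actual statistical pipeline.

```python
from scipy.stats import fisher_exact

def cooccurrence_test(event_a, event_b):
    """Build the 2x2 table of binary event calls across samples and return
    the odds ratio and two-sided Fisher p-value. An odds ratio > 1 suggests
    co-occurrence, < 1 suggests mutual exclusivity."""
    both = sum(a and b for a, b in zip(event_a, event_b))
    only_a = sum(a and not b for a, b in zip(event_a, event_b))
    only_b = sum(b and not a for a, b in zip(event_a, event_b))
    neither = sum(not a and not b for a, b in zip(event_a, event_b))
    table = [[both, only_a], [only_b, neither]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    return table, odds_ratio, p_value

# Made-up mutation calls (1 = event present) for 12 hypothetical cell lines.
gene_x = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
gene_y = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0]
print(cooccurrence_test(gene_x, gene_y))
```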
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.
Kondrashova, Olga; Love, Clare J.; Lunke, Sebastian; Hsu, Arthur L.; Waring, Paul M.; Taylor, Graham R.
2015-01-01
Whilst next generation sequencing can report point mutations in fixed tissue tumour samples reliably, the accurate determination of copy number is more challenging. The conventional Multiplex Ligation-dependent Probe Amplification (MLPA) assay is an effective tool for measurement of gene dosage, but is restricted to around 50 targets due to the size resolution of the MLPA probes. By switching from a size-resolved format to a sequence-resolved format, we developed a scalable, high-throughput, quantitative assay. MLPA-seq is capable of detecting deletions, duplications, and amplifications in as little as 5 ng of genomic DNA, including from formalin-fixed paraffin-embedded (FFPE) tumour samples. We show that this method can detect BRCA1, BRCA2, ERBB2 and CCNE1 copy number changes in DNA extracted from snap-frozen and FFPE tumour tissue, with 100% sensitivity and >99.5% specificity. PMID:26569395
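The gene-dosage measurement underlying such a sequence-resolved assay reduces to comparing normalized per-probe read counts between a test sample and a normal reference; the sketch below shows that ratio calculation on invented counts, with deletion/amplification thresholds chosen purely for illustration (real assays use control probes and more careful normalization).

```python
def copy_number_calls(test_counts, ref_counts, del_cutoff=0.7, amp_cutoff=1.3):
    """Normalize each sample to its total read count, then call gains/losses
    from the per-probe test/reference ratio. Total-count normalization is
    simplistic; it is used here only to keep the example short."""
    test_total = sum(test_counts.values())
    ref_total = sum(ref_counts.values())
    calls = {}
    for probe, ref_n in ref_counts.items():
        ratio = (test_counts.get(probe, 0) / test_total) / (ref_n / ref_total)
        if ratio < del_cutoff:
            state = "deletion"
        elif ratio > amp_cutoff:
            state = "amplification"
        else:
            state = "normal"
        calls[probe] = (round(ratio, 2), state)
    return calls

# Invented read counts per probe for a tumour sample and a normal reference.
tumour = {"BRCA1": 120, "BRCA2": 480, "ERBB2": 1500, "CCNE1": 500}
normal = {"BRCA1": 500, "BRCA2": 510, "ERBB2": 490, "CCNE1": 505}
print(copy_number_calls(tumour, normal))
```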
Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun
2017-01-01
Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surface and interface, doping and metal-mixture and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials. PMID:28458737
iPSC-derived neurons as a higher-throughput readout for autism: Promises and pitfalls
Prilutsky, Daria; Palmer, Nathan P.; Smedemark-Margulies, Niklas; Schlaeger, Thorsten M.; Margulies, David M.; Kohane, Isaac S.
2014-01-01
The elucidation of disease etiologies and establishment of robust, scalable, high-throughput screening assays for autism spectrum disorders (ASDs) have been impeded by both inaccessibility of disease-relevant neuronal tissue and the genetic heterogeneity of the disorder. Neuronal cells derived from induced pluripotent stem cells (iPSCs) from autism patients may circumvent these obstacles and serve as relevant cell models. To date, derived cells are characterized and screened by assessing their neuronal phenotypes. These characterizations are often etiology-specific or lack reproducibility and stability. In this manuscript, we present an overview of efforts to study iPSC-derived neurons as a model for autism, and we explore the plausibility of gene expression profiling as a reproducible and stable disease marker. PMID:24374161
Blood Banking in Living Droplets
Shao, Lei; Zhang, Xiaohui; Xu, Feng; Song, YoungSeok; Keles, Hasan Onur; Matloff, Laura; Markel, Jordan; Demirci, Utkan
2011-01-01
Blood banking has a broad public health impact influencing millions of lives daily. It could potentially benefit from emerging biopreservation technologies. However, although vitrification has shown advantages over traditional cryopreservation techniques, it has not been incorporated into transfusion medicine mainly due to throughput challenges. Here, we present a scalable method that can vitrify red blood cells in microdroplets. This approach enables the vitrification of large volumes of blood in a short amount of time, and makes it a viable and scalable biotechnology tool for blood cryopreservation. PMID:21412411
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
Robotic implementation of assays: tissue-nonspecific alkaline phosphatase (TNAP) case study.
Chung, Thomas D Y
2013-01-01
Laboratory automation and robotics have "industrialized" the execution of large-scale, high-capacity, high-throughput (100 K-1 MM/day) screening (HTS) campaigns, enabling large "libraries" of compounds (>200 K-2 MM) to be screened in a few days or weeks. Critical to the success of these HTS campaigns is the ability of a competent assay development team to convert a validated research-grade laboratory "benchtop" assay suitable for manual or semi-automated operations on a few hundred compounds into a robust, miniaturized (384- or 1,536-well format), well-engineered, scalable, industrialized assay that can be seamlessly implemented on a fully automated, fully integrated robotic screening platform for cost-effective screening of hundreds of thousands of compounds. Here, we provide a review of the theoretical guiding principles and practical considerations necessary to reduce often complex research biology into a "lean manufacturing" engineering endeavor comprising adaptation, automation, and implementation of HTS. Furthermore, we provide a detailed example, specifically a cell-free in vitro biochemical, enzymatic phosphatase assay for tissue-nonspecific alkaline phosphatase, that illustrates these principles and considerations.
High Throughput Plasma Water Treatment
NASA Astrophysics Data System (ADS)
Mujovic, Selman; Foster, John
2016-10-01
The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought-stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is process-volume scale-up. In this work, we investigate a potentially scalable, high-throughput plasma water reactor that utilizes a packed-bed, dielectric-barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which will provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).
NASA Technical Reports Server (NTRS)
Kaul, Anupama B.; Megerian, Krikor G.; von Allmen, Paul; Kowalczyk, Robert; Baron, Richard
2009-01-01
We have developed manufacturable approaches to form single, vertically aligned carbon nanotubes, where the tubes are centered precisely, and placed within a few hundred nm of 1-1.5 micron deep trenches. These wafer-scale approaches were enabled by chemically amplified resists and inductively coupled Cryo-etchers for forming the 3D nanoscale architectures. The tube growth was performed using dc plasma-enhanced chemical vapor deposition (PECVD), and the materials used for the pre-fabricated 3D architectures were chemically and structurally compatible with the high temperature (700 C) PECVD synthesis of our tubes, in an ammonia and acetylene ambient. Tube characteristics were also engineered to some extent, by adjusting growth parameters, such as Ni catalyst thickness, pressure and plasma power during growth. Such scalable, high throughput top-down fabrication techniques, combined with bottom-up tube synthesis, should accelerate the development of PECVD tubes for applications such as interconnects, nano-electromechanical (NEMS), sensors or 3D electronics in general.
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
Summary With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today’s single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
LoRa Scalability: A Simulation Model Based on Interference Measurements
Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen
2017-01-01
LoRa is a long-range, low-power, low-bit-rate, single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability, in terms of the number of end devices per gateway, of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data. PMID:28545239
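As a rough illustration of why pure Aloha degrades so quickly, the sketch below estimates loss from the classical pure-Aloha model: a packet of airtime T collides if any other transmission starts within a 2T vulnerability window, so P(loss) = 1 - exp(-2G) for offered load G. The airtime and transmit interval used here are hypothetical, and the model ignores LoRaWAN specifics such as spreading factors and the capture effect, which the paper's measurement-based simulator accounts for.

```python
import math

def pure_aloha_loss(n_nodes, airtime_s, interval_s):
    """Approximate packet-loss probability under pure Aloha.

    Each node sends a packet of duration `airtime_s` once every
    `interval_s` seconds (Poisson approximation). Loss = 1 - exp(-2G),
    where G is the offered load seen by a given packet.
    """
    per_node_rate = 1.0 / interval_s                  # packets/s per node
    offered_load = (n_nodes - 1) * per_node_rate * airtime_s
    return 1.0 - math.exp(-2.0 * offered_load)

# Hypothetical example: ~70 ms airtime, one packet per minute per node
for n in (100, 500, 1000):
    print(n, "nodes ->", round(pure_aloha_loss(n, 0.07, 60.0), 3), "loss")
```

With these assumed numbers, 1000 nodes yield roughly 90% loss, consistent with the pure-Aloha figure quoted in the abstract.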
LoRa Scalability: A Simulation Model Based on Interference Measurements.
Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen
2017-05-23
LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The amount of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
MOEX: Solvent extraction approach for recycling enriched 98Mo/ 100Mo material
Tkac, Peter; Brown, M. Alex; Momen, Abdul; ...
2017-03-20
Several promising pathways exist for the production of 99Mo/99mTc using enriched 98Mo or 100Mo. The use of Mo targets requires a major change in current generator technology and an efficient recycle pathway to recover the valuable enriched Mo material. High recovery yields, purity, suitable chemical form, and particle size are required. Results on the development of the MOEX (molybdenum solvent extraction) approach to recycle enriched Mo material are presented. Furthermore, the advantages of the MOEX process are very high decontamination factors from potassium and other elements, high throughput, easy scalability, automation, and minimal waste generation.
MOEX: Solvent extraction approach for recycling enriched 98Mo/ 100Mo material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tkac, Peter; Brown, M. Alex; Momen, Abdul
Several promising pathways exist for the production of 99Mo/99mTc using enriched 98Mo or 100Mo. The use of Mo targets requires a major change in current generator technology and an efficient recycle pathway to recover the valuable enriched Mo material. High recovery yields, purity, suitable chemical form, and particle size are required. Results on the development of the MOEX (molybdenum solvent extraction) approach to recycle enriched Mo material are presented. Furthermore, the advantages of the MOEX process are very high decontamination factors from potassium and other elements, high throughput, easy scalability, automation, and minimal waste generation.
Future technologies for monitoring HIV drug resistance and cure.
Parikh, Urvi M; McCormick, Kevin; van Zyl, Gert; Mellors, John W
2017-03-01
Sensitive, scalable and affordable assays are critically needed for monitoring the success of interventions for preventing, treating and attempting to cure HIV infection. This review evaluates current and emerging technologies that are applicable for both surveillance of HIV drug resistance (HIVDR) and characterization of HIV reservoirs that persist despite antiretroviral therapy and are obstacles to curing HIV infection. Next-generation sequencing (NGS) has the potential to be adapted into high-throughput, cost-efficient approaches for HIVDR surveillance and monitoring during continued scale-up of antiretroviral therapy and rollout of preexposure prophylaxis. Similarly, improvements in PCR and NGS are resulting in higher throughput single genome sequencing to detect intact proviruses and to characterize HIV integration sites and clonal expansions of infected cells. Current population genotyping methods for resistance monitoring are high cost and low throughput. NGS, combined with simpler sample collection and storage matrices (e.g. dried blood spots), has considerable potential to broaden global surveillance and patient monitoring for HIVDR. Recent adaptions of NGS to identify integration sites of HIV in the human genome and to characterize the integrated HIV proviruses are likely to facilitate investigations of the impact of experimental 'curative' interventions on HIV reservoirs.
High-throughput label-free microcontact printing graphene-based biosensor for valley fever.
Tsai, Shih-Ming; Goshia, Tyler; Chen, Yen-Chang; Kagiri, Agnes; Sibal, Angelo; Chiu, Meng-Hsuen; Gadre, Anand; Tung, Vincent; Chin, Wei-Chun
2018-06-18
Coccidioidomycosis, also known as Valley Fever, is a highly prevalent and virulent disease in the Western Hemisphere that can cause serious illness such as severe pneumonia with respiratory failure. It can also take on a disseminated form in which the infection spreads throughout the body. Thus, a serious impetus exists to develop effective detection of the disease that can also operate in a rapid and high-throughput fashion. Here, we report the assembly of a highly sensitive biosensor using reduced graphene oxide (rGO) with Coccidioides (cocci) antibodies as the target analytes. The facile design is made possible by the scalable microcontact printing (μCP) surface-patterning technique, which enables rapid, ultrasensitive detection. It provides a wide linear range and sub-picomolar (2.5 pg/ml) detection, while also delivering high selectivity and reproducibility. This work demonstrates an important advancement in the development of a sensitive, label-free rGO biosensor for Coccidioidomycosis detection and points to potential applications in direct pathogen diagnosis for future biosensor development.
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
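GiGA itself is built on Hadoop and Giraph, but the data structure it distributes, a de Bruijn graph keyed by k-mers, is easy to illustrate in a few lines. The sketch below builds a toy, in-memory de Bruijn graph from reads; it is not GiGA code, just a minimal illustration of the graph that such assemblers partition across compute nodes.

```python
from collections import defaultdict

def build_debruijn(reads, k):
    """Build a toy de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # prefix -> suffix edge
    return graph

reads = ["ACGTACGT", "GTACGTTA"]
graph = build_debruijn(reads, k=4)
for node, neighbors in sorted(graph.items()):
    print(node, "->", neighbors)
```

In a distributed assembler, each (k-1)-mer node would be hashed to a compute node so that graph construction and traversal are collocated with the data.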
A Low-Noise Transimpedance Amplifier for BLM-Based Ion Channel Recording.
Crescentini, Marco; Bennati, Marco; Saha, Shimul Chandra; Ivica, Josip; de Planque, Maurits; Morgan, Hywel; Tartagni, Marco
2016-05-19
High-throughput screening (HTS) using ion channel recording is a powerful drug discovery technique in pharmacology. Ion channel recording with planar bilayer lipid membranes (BLM) is scalable and has very high sensitivity. A HTS system based on BLM ion channel recording faces three main challenges: (i) design of scalable microfluidic devices; (ii) design of compact ultra-low-noise transimpedance amplifiers able to detect currents in the pA range with bandwidth >10 kHz; (iii) design of compact, robust and scalable systems that integrate these two elements. This paper presents a low-noise transimpedance amplifier with integrated A/D conversion realized in CMOS 0.35 μm technology. The CMOS amplifier acquires currents in the range ±200 pA and ±20 nA, with 100 kHz bandwidth while dissipating 41 mW. An integrated digital offset compensation loop balances any voltage offsets from Ag/AgCl electrodes. The measured open-input input-referred noise current is as low as 4 fA/√Hz at ±200 pA range. The current amplifier is embedded in an integrated platform, together with a microfluidic device, for current recording from ion channels. Gramicidin-A, α-haemolysin and KcsA potassium channels have been used to prove both the platform and the current-to-digital converter.
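A quick back-of-the-envelope check of the reported figures: assuming the 4 fA/√Hz input-referred noise density is approximately white across the 100 kHz bandwidth, the integrated RMS noise current is density × √bandwidth, roughly 1.3 pA RMS, comfortably below the ±200 pA full-scale range. A minimal sketch of that arithmetic (the white-noise assumption is mine, not a claim from the paper):

```python
import math

noise_density = 4e-15      # A/sqrt(Hz), input-referred (from the abstract)
bandwidth = 100e3          # Hz
full_scale = 200e-12       # A, smallest range

# Assumes a flat (white) noise spectrum across the bandwidth
rms_noise = noise_density * math.sqrt(bandwidth)
print(f"integrated noise ~ {rms_noise * 1e12:.2f} pA RMS")
print(f"dynamic range ~ {20 * math.log10(full_scale / rms_noise):.1f} dB")
```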
A Low-Noise Transimpedance Amplifier for BLM-Based Ion Channel Recording
Crescentini, Marco; Bennati, Marco; Saha, Shimul Chandra; Ivica, Josip; de Planque, Maurits; Morgan, Hywel; Tartagni, Marco
2016-01-01
High-throughput screening (HTS) using ion channel recording is a powerful drug discovery technique in pharmacology. Ion channel recording with planar bilayer lipid membranes (BLM) is scalable and has very high sensitivity. A HTS system based on BLM ion channel recording faces three main challenges: (i) design of scalable microfluidic devices; (ii) design of compact ultra-low-noise transimpedance amplifiers able to detect currents in the pA range with bandwidth >10 kHz; (iii) design of compact, robust and scalable systems that integrate these two elements. This paper presents a low-noise transimpedance amplifier with integrated A/D conversion realized in CMOS 0.35 μm technology. The CMOS amplifier acquires currents in the range ±200 pA and ±20 nA, with 100 kHz bandwidth while dissipating 41 mW. An integrated digital offset compensation loop balances any voltage offsets from Ag/AgCl electrodes. The measured open-input input-referred noise current is as low as 4 fA/√Hz at ±200 pA range. The current amplifier is embedded in an integrated platform, together with a microfluidic device, for current recording from ion channels. Gramicidin-A, α-haemolysin and KcsA potassium channels have been used to prove both the platform and the current-to-digital converter. PMID:27213382
Optical nano-woodpiles: large-area metallic photonic crystals and metamaterials.
Ibbotson, Lindsey A; Demetriadou, Angela; Croxall, Stephen; Hess, Ortwin; Baumberg, Jeremy J
2015-02-09
Metallic woodpile photonic crystals and metamaterials operating across the visible spectrum are extremely difficult to construct over large areas, because of the intricate three-dimensional nanostructures and sub-50 nm features demanded. Previous routes use electron-beam lithography or direct laser writing but widespread application is restricted by their expense and low throughput. Scalable approaches including soft lithography, colloidal self-assembly, and interference holography, produce structures limited in feature size, material durability, or geometry. By multiply stacking gold nanowire flexible gratings, we demonstrate a scalable high-fidelity approach for fabricating flexible metallic woodpile photonic crystals, with features down to 10 nm produced in bulk and at low cost. Control of stacking sequence, asymmetry, and orientation elicits great control, with visible-wavelength band-gap reflections exceeding 60%, and with strong induced chirality. Such flexible and stretchable architectures can produce metamaterials with refractive index near zero, and are easily tuned across the IR and visible ranges.
Hu, Jiazhi; Meyers, Robin M; Dong, Junchao; Panchakshari, Rohit A; Alt, Frederick W; Frock, Richard L
2016-05-01
Unbiased, high-throughput assays for detecting and quantifying DNA double-stranded breaks (DSBs) across the genome in mammalian cells will facilitate basic studies of the mechanisms that generate and repair endogenous DSBs. They will also enable more applied studies, such as those to evaluate the on- and off-target activities of engineered nucleases. Here we describe a linear amplification-mediated high-throughput genome-wide sequencing (LAM-HTGTS) method for the detection of genome-wide 'prey' DSBs via their translocation in cultured mammalian cells to a fixed 'bait' DSB. Bait-prey junctions are cloned directly from isolated genomic DNA using LAM-PCR and unidirectionally ligated to bridge adapters; subsequent PCR steps amplify the single-stranded DNA junction library in preparation for Illumina Miseq paired-end sequencing. A custom bioinformatics pipeline identifies prey sequences that contribute to junctions and maps them across the genome. LAM-HTGTS differs from related approaches because it detects a wide range of broken end structures with nucleotide-level resolution. Familiarity with nucleic acid methods and next-generation sequencing analysis is necessary for library generation and data interpretation. LAM-HTGTS assays are sensitive, reproducible, relatively inexpensive, scalable and straightforward to implement with a turnaround time of <1 week.
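As a purely illustrative sketch of the junction-calling idea (not the published LAM-HTGTS pipeline), the snippet below scans reads for a known bait sequence and treats whatever follows it as the candidate prey sequence to be aligned genome-wide. The bait sequence, the toy reads, and the minimum prey length are hypothetical parameters; a real pipeline also handles adapters, read pairs, mismatches, and genome alignment.

```python
def extract_prey(reads, bait, min_prey_len=20):
    """Return candidate prey sequences found downstream of the bait.

    Illustrative only: real junction calling tolerates mismatches and
    maps the prey portion with a genome aligner.
    """
    prey_candidates = []
    for read in reads:
        idx = read.find(bait)
        if idx == -1:
            continue                       # bait not present in this read
        prey = read[idx + len(bait):]
        if len(prey) >= min_prey_len:
            prey_candidates.append(prey)
    return prey_candidates

reads = ["NNNNACGTTGCA" + "T" * 30, "ACGTTGCA" + "G" * 10]   # toy reads
print(extract_prey(reads, bait="ACGTTGCA"))
```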
Bae, Seunghee; An, In-Sook; An, Sungkwan
2015-09-01
Ultraviolet (UV) radiation is a major inducer of skin aging and accumulated exposure to UV radiation increases DNA damage in skin cells, including dermal fibroblasts. In the present study, we developed a novel DNA repair regulating material discovery (DREAM) system for the high-throughput screening and identification of putative materials regulating DNA repair in skin cells. First, we established a modified lentivirus expressing the luciferase and hypoxanthine phosphoribosyl transferase (HPRT) genes. Then, human dermal fibroblast WS-1 cells were infected with the modified lentivirus and selected with puromycin to establish cells that stably expressed luciferase and HPRT (DREAM-F cells). The first step in the DREAM protocol was a 96-well-based screening procedure, involving the analysis of cell viability and luciferase activity after pretreatment of DREAM-F cells with reagents of interest and post-treatment with UVB radiation, and vice versa. In the second step, we validated certain effective reagents identified in the first step by analyzing the cell cycle, evaluating cell death, and performing HPRT-DNA sequencing in DREAM-F cells treated with these reagents and UVB. This DREAM system is scalable and forms a time-saving high-throughput screening system for identifying novel anti-photoaging reagents regulating DNA damage in dermal fibroblasts.
Tip-Based Nanofabrication for Scalable Manufacturing
Hu, Huan; Kim, Hoe; Somnath, Suhas
2017-03-16
Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.
Tip-Based Nanofabrication for Scalable Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Huan; Kim, Hoe; Somnath, Suhas
Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.
Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics
Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Yun Jaung, Jae; Kim, Yong-Hoon; Kyu Park, Sung
2015-01-01
The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, offer low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of these soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, the highly energetic photons (e.g. deep-ultraviolet rays) enable direct photo-conversion from a conducting/semiconducting to an insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry promises to accelerate industrial adoption of soft materials for next-generation electronics. PMID:26411932
Scalable sub-micron patterning of organic materials toward high density soft electronics
Kim, Jaekyun; Kim, Myung -Gil; Kim, Jaehyun; ...
2015-09-28
The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, offer low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of these soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, the highly energetic photons (e.g. deep-ultraviolet rays) enable direct photo-conversion from a conducting/semiconducting to an insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. As a result, the successful demonstration of organic semiconductor circuitry promises to accelerate industrial adoption of soft materials for next-generation electronics.
Strong, light, multifunctional fibers of carbon nanotubes with ultrahigh conductivity.
Behabtu, Natnael; Young, Colin C; Tsentalovich, Dmitri E; Kleinerman, Olga; Wang, Xuan; Ma, Anson W K; Bengio, E Amram; ter Waarbeek, Ron F; de Jong, Jorrit J; Hoogerwerf, Ron E; Fairchild, Steven B; Ferguson, John B; Maruyama, Benji; Kono, Junichiro; Talmon, Yeshayahu; Cohen, Yachin; Otto, Marcin J; Pasquali, Matteo
2013-01-11
Broader applications of carbon nanotubes to real-world problems have largely gone unfulfilled because of difficult material synthesis and laborious processing. We report high-performance multifunctional carbon nanotube (CNT) fibers that combine the specific strength, stiffness, and thermal conductivity of carbon fibers with the specific electrical conductivity of metals. These fibers consist of bulk-grown CNTs and are produced by high-throughput wet spinning, the same process used to produce high-performance industrial fibers. These scalable CNT fibers are positioned for high-value applications, such as aerospace electronics and field emission, and can evolve into engineered materials with broad long-term impact, from consumer electronics to long-range power transmission.
Manyscale Computing for Sensor Processing in Support of Space Situational Awareness
NASA Astrophysics Data System (ADS)
Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.
2014-09-01
Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
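The "work atom" abstraction is described above only at a high level; the sketch below gives a generic Python flavor of the idea, partitioning a list of work atoms into contiguous chunks mapped onto a pool of workers. This is a hypothetical illustration, not the authors' mapping algorithm, and the per-atom computation is a stand-in.

```python
from multiprocessing import Pool

def process_atom(atom):
    """Stand-in for a per-atom computation (e.g., one tile of a sensor image)."""
    return sum(atom) ** 0.5

if __name__ == "__main__":
    atoms = [(i, i + 1, i + 2) for i in range(1000)]   # toy work atoms
    with Pool(processes=4) as pool:
        # chunksize groups atoms into contiguous blocks, loosely analogous
        # to mapping atoms onto threads or thread blocks on a GPU/HMP.
        results = pool.map(process_atom, atoms, chunksize=250)
    print(len(results), results[:3])
```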
Enabling a high throughput real time data pipeline for a large radio telescope array with GPUs
NASA Astrophysics Data System (ADS)
Edgar, R. G.; Clark, M. A.; Dale, K.; Mitchell, D. A.; Ord, S. M.; Wayth, R. B.; Pfister, H.; Greenhill, L. J.
2010-10-01
The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exa-scale facilities.
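The real-time constraint quoted above is easy to sanity-check: at 5 GiB/s grouped into 8 s cadences, each batch is 40 GiB and must be fully reduced in 8 s while sustaining roughly 2.5 TFLOP/s. A minimal arithmetic sketch follows; the per-GPU throughput figure is a hypothetical illustration, not a number from the paper.

```python
data_rate_gib_s = 5.0          # raw data rate (GiB/s), from the abstract
cadence_s = 8.0                # batch length (s)
required_tflops = 2.5          # sustained compute for real-time reduction

batch_gib = data_rate_gib_s * cadence_s
print(f"batch size: {batch_gib:.0f} GiB every {cadence_s:.0f} s")

# Hypothetical sustained per-GPU throughput (TFLOP/s), for illustration only
per_gpu_tflops = 0.25
print(f"GPUs needed (ignoring I/O and overlap): "
      f"{required_tflops / per_gpu_tflops:.0f}")
```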
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
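Jenkins-CI jobs ultimately shell out to command-line tools, so a pipeline step of the kind described above typically wraps a headless CellProfiler invocation. The sketch below shows what such a wrapper might look like; the flags follow CellProfiler's documented headless mode but may differ between versions, and all paths are hypothetical.

```python
import subprocess
from pathlib import Path

def run_cellprofiler(pipeline: Path, images: Path, output: Path) -> int:
    """Run a CellProfiler pipeline headlessly on a directory of images.

    Flags: -c (no GUI), -r (run), -p (pipeline file), -i/-o (input/output
    directories). These reflect documented headless usage but should be
    checked against the installed CellProfiler version.
    """
    output.mkdir(parents=True, exist_ok=True)
    cmd = ["cellprofiler", "-c", "-r",
           "-p", str(pipeline), "-i", str(images), "-o", str(output)]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    # Hypothetical paths; a Jenkins job would parameterize these.
    rc = run_cellprofiler(Path("pipeline.cppipe"),
                          Path("plate_001/images"),
                          Path("plate_001/results"))
    print("CellProfiler exit code:", rc)
```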
Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.
2016-01-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692
High-throughput linear optical stretcher for mechanical characterization of blood cells.
Roth, Kevin B; Neeves, Keith B; Squier, Jeff; Marr, David W M
2016-04-01
This study describes a linear optical stretcher as a high-throughput mechanical property cytometer. Custom, inexpensive, and scalable optics image a linear diode bar source into a microfluidic channel, where cells are hydrodynamically focused into the optical stretcher. Upon entering the stretching region, antipodal optical forces generated by the refraction of tightly focused laser light at the cell membrane deform each cell in flow. Each cell relaxes as it flows out of the trap and is compared to the stretched state to determine deformation. The deformation response of untreated red blood cells and neutrophils was compared to that of chemically treated cells. Statistically significant differences were observed between normal, diamide-treated, and glutaraldehyde-treated red blood cells, as well as between normal and cytochalasin D-treated neutrophils. Based on the behavior of the pure, untreated populations of red cells and neutrophils, a mixed population of these cells was tested and the discrete populations were identified by deformability.
Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics
Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.
2012-01-01
With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
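CouchDB exposes every database and document over plain HTTP, which is what makes the RESTful, programmatic access described above straightforward. The sketch below shows a generic document fetch against a local CouchDB instance; the database and document names are hypothetical and do not correspond to geneSmash, drugBase, or HapMap-CN.

```python
import json
import urllib.request

COUCH = "http://localhost:5984"          # default local CouchDB address

def get_doc(db: str, doc_id: str) -> dict:
    """Fetch a single JSON document via CouchDB's HTTP API (GET /{db}/{id})."""
    with urllib.request.urlopen(f"{COUCH}/{db}/{doc_id}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Hypothetical gene-centric annotation document
    doc = get_doc("gene_annotations", "TP53")
    print(doc.get("symbol"), doc.get("chromosome"))
```

Because the interface is just HTTP plus JSON, the same call works from any language or workflow system without a database driver, which is part of the appeal for bioinformatics web services.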
Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.
Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars
2015-07-15
Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools.
Byeon, Ji-Yeon; Bailey, Ryan C
2011-09-07
High affinity capture agents recognizing biomolecular targets are essential in the performance of many proteomic detection methods. Herein, we report the application of a label-free silicon photonic biomolecular analysis platform for simultaneously determining kinetic association and dissociation constants for two representative protein capture agents: a thrombin-binding DNA aptamer and an anti-thrombin monoclonal antibody. The scalability and inherent multiplexing capability of the technology make it an attractive platform for simultaneously evaluating the binding characteristics of multiple capture agents recognizing the same target antigen, and thus a tool complementary to emerging high-throughput capture agent generation strategies.
Design and implementation of scalable tape archiver
NASA Technical Reports Server (NTRS)
Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio
1996-01-01
In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high-performance RISC microprocessor-based workstations, which are in turn being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large-scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers that have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs when the robotics are idle. Both migration algorithms are based on the access frequency and space utility of each element archiver. By normalizing these parameters according to the number of drives in each element archiver, it is possible to maintain high performance even if some tape drives fail. We found that foreground migration is efficient at reducing access response time. Besides the foreground migration, background migration makes it possible to track transitions in spatial access locality quickly.
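The migration policy is described only qualitatively above (move hot cassettes toward under-loaded element archivers, normalizing by the number of live drives). The sketch below is a hypothetical toy version of such a heuristic, not the authors' algorithm: it proposes moving the hottest cassettes from the most loaded archiver to the least loaded one, where load is recent access count per live drive.

```python
def propose_migrations(archivers, max_moves=2):
    """Toy background-migration heuristic.

    `archivers` maps archiver id -> {"drives": live drive count,
    "tapes": {tape id: recent access count}}. Load is accesses per live
    drive; hot tapes move from the most to the least loaded archiver.
    """
    def load(a):
        return sum(a["tapes"].values()) / max(1, a["drives"])

    src_id = max(archivers, key=lambda i: load(archivers[i]))
    dst_id = min(archivers, key=lambda i: load(archivers[i]))
    if src_id == dst_id:
        return []
    hot = sorted(archivers[src_id]["tapes"], reverse=True,
                 key=lambda t: archivers[src_id]["tapes"][t])
    return [(tape, src_id, dst_id) for tape in hot[:max_moves]]

archivers = {
    "A": {"drives": 2, "tapes": {"t1": 40, "t2": 35, "t3": 2}},
    "B": {"drives": 2, "tapes": {"t4": 3}},
    "C": {"drives": 1, "tapes": {"t5": 1}},   # e.g., one drive has failed
}
print(propose_migrations(archivers))
```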
Fair-share scheduling algorithm for a tertiary storage system
NASA Astrophysics Data System (ADS)
Jakl, Pavel; Lauret, Jérôme; Šumbera, Michal
2010-04-01
Any experiment facing petabyte-scale problems is in need of a highly scalable mass storage system (MSS) to keep a permanent copy of its valuable data. But beyond the permanent storage aspects, the sheer amount of data makes complete data-set availability on live storage (centralized or aggregated space such as that provided by Scalla/Xrootd) cost prohibitive, implying that dynamic population from the MSS to faster storage is needed. One of the most challenging aspects of dealing with an MSS is the robotic tape component. If a robotic system is used as the primary storage solution, the intrinsically long access times (latencies) can dramatically affect the overall performance. To speed the retrieval of such data, one could organize the requests according to criteria aimed at delivering maximal data throughput. However, such approaches are often orthogonal to fair resource allocation, and a trade-off between quality of service, responsiveness, and throughput is necessary for achieving an optimal and practical implementation of a truly fair-share-oriented file restore policy. Starting from an explanation of the key criteria of such a policy, we present evaluations and comparisons of three different MSS file restoration algorithms that meet fair-share requirements, and discuss their respective merits. We quantify their impact on a typical file restoration cycle for the RHIC/STAR experimental setup, within a development, analysis, and production environment relying on a shared MSS service [1].
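As a hypothetical illustration of the trade-off described above (not one of the three algorithms evaluated in the paper), the sketch below orders pending restore requests by round-robining across users for fairness while serving each user's requests one whole tape at a time, so a mounted tape is drained before moving on.

```python
from collections import defaultdict

def fair_share_order(requests):
    """Order (user, tape, path) restore requests.

    Round-robin across users for fairness, at the granularity of whole
    tapes for throughput (fewer tape mounts). Illustrative only; real
    policies weigh many more factors (quotas, deadlines, drive count).
    """
    per_user_tape = defaultdict(lambda: defaultdict(list))
    for user, tape, path in requests:
        per_user_tape[user][tape].append(path)

    ordered, users = [], sorted(per_user_tape)
    while any(per_user_tape[u] for u in users):
        for user in users:
            if per_user_tape[user]:
                tape, paths = sorted(per_user_tape[user].items())[0]
                ordered.extend((user, tape, p) for p in paths)
                del per_user_tape[user][tape]
    return ordered

reqs = [("alice", "T1", "a1"), ("alice", "T1", "a2"), ("bob", "T9", "b1"),
        ("alice", "T2", "a3"), ("bob", "T9", "b2")]
for item in fair_share_order(reqs):
    print(item)
```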
High-Throughput Computing on High-Performance Platforms: A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleynik, D; Panitkin, S; Matteo, Turilli
The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size resources. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.
Kwak, Jihoon; Genovesio, Auguste; Kang, Myungjoo; Hansen, Michael Adsett Edberg; Han, Sung-Jun
2015-01-01
Genotoxicity testing is an important component of toxicity assessment. As illustrated by the European registration, evaluation, authorization, and restriction of chemicals (REACH) directive, it concerns all the chemicals used in industry. The commonly used in vivo mammalian tests appear to be ill adapted to tackle the large compound sets involved, due to throughput, cost, and ethical issues. The somatic mutation and recombination test (SMART) represents a more scalable alternative, since it uses Drosophila, which develops faster and requires less infrastructure. Despite these advantages, the manual scoring of the hairs on Drosophila wings required for the SMART limits its usage. To overcome this limitation, we have developed an automated SMART readout. It consists of automated imaging, followed by an image analysis pipeline that measures individual wing genotoxicity scores. Finally, we have developed a wing score-based dose-dependency approach that can provide genotoxicity profiles. We have validated our method using 6 compounds, obtaining profiles almost identical to those obtained from manual measures, even for low-genotoxicity compounds such as urethane. The automated SMART, with its faster and more reliable readout, fulfills the need for a high-throughput in vivo test. The flexible imaging strategy we describe and the analysis tools we provide should facilitate the optimization and dissemination of our methods. PMID:25830368
Low-cost scalable quartz crystal microbalance array for environmental sensing
NASA Astrophysics Data System (ADS)
Muckley, Eric S.; Anazagasty, Cristain; Jacobs, Christopher B.; Hianik, Tibor; Ivanov, Ilia N.
2016-09-01
Proliferation of environmental sensors for internet of things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits performance comparable to that of a commercial QCM system, while enabling high-throughput testing of six QCMs for under $1,000. We use deep neural network structures to process the sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
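For readers unfamiliar with how a frequency shift becomes a mass reading, the conversion is commonly done with the Sauerbrey relation, Δm = -C·Δf, where C is the crystal's mass sensitivity (about 17.7 ng cm⁻² Hz⁻¹ for a 5 MHz AT-cut crystal). The sketch below is a generic illustration of that step, not code from the Arduino/LabVIEW system described above; the crystal parameters are assumptions.

```python
def sauerbrey_mass(delta_f_hz, sensitivity_ng_cm2_hz=17.7, area_cm2=1.0):
    """Convert a QCM frequency shift to adsorbed mass via Sauerbrey.

    delta_m = -C * delta_f, with C ~ 17.7 ng/(cm^2*Hz) assumed for a
    5 MHz AT-cut crystal; valid only for thin, rigid films.
    """
    return -sensitivity_ng_cm2_hz * delta_f_hz * area_cm2   # ng

# Example: a -12 Hz shift on an assumed 1 cm^2, 5 MHz crystal
print(f"{sauerbrey_mass(-12):.1f} ng adsorbed")
```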
Using JWST Heritage to Enable a Future Large Ultra-Violet Optical Infrared Telescope
NASA Technical Reports Server (NTRS)
Feinberg, Lee
2016-01-01
To the extent it makes sense, leverage JWST knowledge, designs, architectures, GSE. Develop a scalable design reference mission (9.2 meter). Do just enough work to understand launch break points in aperture size. Demonstrate 10 pm stability is achievable on a design reference mission. Make design compatible with starshades. While segmented coronagraphs with high throughput and large bandpasses are important, make the system serviceable so you can evolve the instruments. Keep it room temperature to minimize the costs associated with cryo. Focus resources on the contrast problem. Start with the architecture and connect it to the technology needs.
Performance Evaluation of IEEE 802.11ah Networks With High-Throughput Bidirectional Traffic.
Šljivo, Amina; Kerkhove, Dwight; Tian, Le; Famaey, Jeroen; Munteanu, Adrian; Moerman, Ingrid; Hoebeke, Jeroen; De Poorter, Eli
2018-01-23
So far, existing sub-GHz wireless communication technologies focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics are fine for many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of physical- and link-layer configurable features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them. Such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP-cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP-cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets. The presented results enable the fine tuning of RAW and TIM parameters for throughput-demanding reliable applications (i.e., video streaming, firmware updates) on one hand, and very dense low-throughput reliable networks with bidirectional traffic on the other hand.
Performance Evaluation of IEEE 802.11ah Networks With High-Throughput Bidirectional Traffic
Kerkhove, Dwight; Tian, Le; Munteanu, Adrian; De Poorter, Eli
2018-01-01
So far, existing sub-GHz wireless communication technologies focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics are fine for many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of physical- and link-layer configurable features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them. Such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP-cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP-cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets. The presented results enable the fine tuning of RAW and TIM parameters for throughput-demanding reliable applications (i.e., video streaming, firmware updates) on one hand, and very dense low-throughput reliable networks with bidirectional traffic on the other hand. PMID:29360798
Increasing efficiency and declining cost of generating whole transcriptome profiles has made high-throughput transcriptomics a practical option for chemical bioactivity screening. The resulting data output provides information on the expression of thousands of genes and is amenab...
One-Cell Doubling Evaluation by Living Arrays of Yeast, ODELAY!
Herricks, Thurston; Dilworth, David J.; Mast, Fred D.; ...
2016-11-16
Cell growth is a complex phenotype widely used in systems biology to gauge the impact of genetic and environmental perturbations. Due to the magnitude of genome-wide studies, resolution is often sacrificed in favor of throughput, creating a demand for scalable, time-resolved, quantitative methods of growth assessment. We present ODELAY (One-cell Doubling Evaluation by Living Arrays of Yeast), an automated and scalable growth analysis platform. High measurement density and single-cell resolution provide a powerful tool for large-scale multiparameter growth analysis based on the modeling of microcolony expansion on solid media. Pioneered in yeast but applicable to other colony forming organisms, ODELAY extracts the three key growth parameters (lag time, doubling time, and carrying capacity) that define microcolony expansion from single cells, simultaneously permitting the assessment of population heterogeneity. The utility of ODELAY is illustrated using yeast mutants, revealing a spectrum of phenotypes arising from single and combinatorial growth parameter perturbations.
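The three growth parameters named above (lag time, doubling time, carrying capacity) are commonly obtained by fitting a parametric growth curve to each microcolony's size over time. The sketch below fits the Zwietering-modified Gompertz model with scipy as a generic illustration; it is not ODELAY code, and the model choice and synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Zwietering-modified Gompertz curve for y = ln(N(t)/N0).

    A = ln(Nmax/N0) (log-scale carrying capacity), mu = maximum specific
    growth rate, lam = lag time. Doubling time ~ ln(2)/mu.
    """
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Synthetic microcolony trace (hypothetical data: hours vs ln-fold growth)
np.random.seed(0)
t = np.linspace(0, 24, 49)
y = gompertz(t, A=5.0, mu=0.45, lam=3.0) + np.random.normal(0, 0.05, t.size)

(A_fit, mu_fit, lam_fit), _ = curve_fit(gompertz, t, y, p0=[4.0, 0.3, 2.0])
print(f"lag ~ {lam_fit:.1f} h, doubling ~ {np.log(2) / mu_fit:.2f} h, "
      f"carrying capacity (ln-fold) ~ {A_fit:.1f}")
```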
Emerging Technologies for Assembly of Microscale Hydrogels
Kavaz, Doga; Demirel, Melik C.; Demirci, Utkan
2013-01-01
Assembly of cell encapsulating building blocks (i.e., microscale hydrogels) has significant applications in areas including regenerative medicine, tissue engineering, and cell-based in vitro assays for pharmaceutical research and drug discovery. Inspired by the repeating functional units observed in native tissues and biological systems (e.g., the lobule in liver, the nephron in kidney), assembly technologies aim to generate complex tissue structures by organizing microscale building blocks. Novel assembly technologies enable fabrication of engineered tissue constructs with controlled properties including tunable microarchitectural and predefined compositional features. Recent advances in micro- and nano-scale technologies have enabled engineering of microgel based three dimensional (3D) constructs. There is a need for high-throughput and scalable methods to assemble microscale units with a complex 3D micro-architecture. Emerging assembly methods include novel technologies based on microfluidics, acoustic and magnetic fields, nanotextured surfaces, and surface tension. In this review, we survey emerging microscale hydrogel assembly methods offering rapid, scalable microgel assembly in 3D, and provide future perspectives and discuss potential applications. PMID:23184717
Screening of lipid composition for scalable fabrication of solvent free lipid microarrays
NASA Astrophysics Data System (ADS)
Ghazanfari, Lida; Lenhert, Steven
2016-12-01
Liquid microdroplet arrays on surfaces are a promising approach to the miniaturization of laboratory processes such as high throughput screening. The fluid nature of these droplets poses unique challenges and opportunities in their fabrication and application, particularly for the scalable integration of multiple materials over large areas and immersion into cell culture solution. Here we use pin spotting and nanointaglio printing to screen a library of lipids and their mixtures for their compatibility with these fabrication processes, as well as stability upon immersion into aqueous solution. More than 200 combinations of natural and synthetic oils composed of fatty acids, triglycerides, and hydrocarbons were tested for their pin-spotting and nanointaglio print quality and their ability to contain the fluorescent compound TRITC upon immersion in water. A combination of castor oil and hexanoic acid at the ratio of 1:1 (w/w) was found optimal for producing reproducible patterns that are stable upon immersion into water. This method is capable of large scale nano-materials integration.
Optical nano-woodpiles: large-area metallic photonic crystals and metamaterials
Ibbotson, Lindsey A.; Demetriadou, Angela; Croxall, Stephen; Hess, Ortwin; Baumberg, Jeremy J.
2015-01-01
Metallic woodpile photonic crystals and metamaterials operating across the visible spectrum are extremely difficult to construct over large areas, because of the intricate three-dimensional nanostructures and sub-50 nm features demanded. Previous routes use electron-beam lithography or direct laser writing but widespread application is restricted by their expense and low throughput. Scalable approaches including soft lithography, colloidal self-assembly, and interference holography, produce structures limited in feature size, material durability, or geometry. By multiply stacking gold nanowire flexible gratings, we demonstrate a scalable high-fidelity approach for fabricating flexible metallic woodpile photonic crystals, with features down to 10 nm produced in bulk and at low cost. Control of stacking sequence, asymmetry, and orientation elicits great control, with visible-wavelength band-gap reflections exceeding 60%, and with strong induced chirality. Such flexible and stretchable architectures can produce metamaterials with refractive index near zero, and are easily tuned across the IR and visible ranges. PMID:25660667
Scalable service architecture for providing strong service guarantees
NASA Astrophysics Data System (ADS)
Christin, Nicolas; Liebeherr, Joerg
2002-07-01
For the past decade, much Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, interest has shifted to service architectures with low overhead. However, these newer service architectures only provide weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires only a small amount of state information to be maintained. A key mechanism of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network. Instead, we plan on exploiting feedback mechanisms of TCP congestion control algorithms for the purpose of regulating the traffic entering the network.
Defect Genome of Cubic Perovskites for Fuel Cell Applications
Balachandran, Janakiraman; Lin, Lianshan; Anchell, Jonathan S.; ...
2017-10-10
Heterogeneities such as point defects, inherent to material systems, can profoundly influence material functionalities critical for numerous energy applications. This influence in principle can be identified and quantified through development of large defect data sets which we call the defect genome, employing high-throughput ab initio calculations. However, high-throughput screening of material models with point defects dramatically increases the computational complexity and chemical search space, creating major impediments toward developing a defect genome. In this paper, we overcome these impediments by employing computationally tractable ab initio models driven by highly scalable workflows, to study formation and interaction of various point defects (e.g., O vacancies, H interstitials, and Y substitutional dopant), in over 80 cubic perovskites, for potential proton-conducting ceramic fuel cell (PCFC) applications. The resulting defect data sets identify several promising perovskite compounds that can exhibit high proton conductivity. Furthermore, the data sets also enable us to identify and explain insightful and novel correlations among defect energies, material identities, and defect-induced local structural distortions. Finally, such defect data sets and resultant correlations are necessary to build statistical machine learning models, which are required to accelerate discovery of new materials.
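The defect data sets rest on standard ab initio defect energetics. For orientation only (standard notation, not an equation quoted from the paper), the formation energy of a defect X in charge state q is commonly written as

    E_{\mathrm{f}}[X^{q}] \;=\; E_{\mathrm{tot}}[X^{q}] \;-\; E_{\mathrm{tot}}[\mathrm{bulk}]
        \;-\; \sum_{i} n_{i}\,\mu_{i} \;+\; q\,\bigl(E_{\mathrm{F}} + E_{\mathrm{VBM}}\bigr) \;+\; E_{\mathrm{corr}},

where n_i atoms of species i with chemical potential mu_i are added (n_i > 0) or removed (n_i < 0), E_F is the Fermi level referenced to the valence-band maximum E_VBM, and E_corr collects finite-size corrections; quantities of this kind, computed per defect and per host compound, are what populate a defect genome.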
Watson, Dionysios C.; Yung, Bryant C.; Bergamaschi, Cristina; Chowdhury, Bhabadeb; Bear, Jenifer; Stellas, Dimitris; Morales-Kastresana, Aizea; Jones, Jennifer C.; Felber, Barbara K.; Chen, Xiaoyuan; Pavlakis, George N.
2018-01-01
The development of extracellular vesicles (EV) for therapeutic applications is contingent upon the establishment of reproducible, scalable, and high-throughput methods for the production and purification of clinical grade EV. Methods including ultracentrifugation (U/C), ultrafiltration, immunoprecipitation, and size-exclusion chromatography (SEC) have been employed to isolate EV, each facing limitations such as efficiency, particle purity, lengthy processing time, and/or sample volume. We developed a cGMP-compatible method for the scalable production, concentration, and isolation of EV through a strategy involving bioreactor culture, tangential flow filtration (TFF), and preparative SEC. We applied this purification method for the isolation of engineered EV carrying multiple complexes of a novel human immunostimulatory cytokine-fusion protein, heterodimeric IL-15 (hetIL-15)/lactadherin. HEK293 cells stably expressing the fusion cytokine were cultured in a hollow-fibre bioreactor. Conditioned medium was collected and EV were isolated comparing three procedures: U/C, SEC, or TFF + SEC. SEC demonstrated comparable particle recovery, size distribution, and hetIL-15 density as U/C purification. Relative to U/C, SEC preparations achieved a 100-fold reduction in ferritin concentration, a major protein-complex contaminant. Comparative proteomics suggested that SEC additionally decreased the abundance of cytoplasmic proteins not associated with EV. Combination of TFF and SEC allowed for bulk processing of large starting volumes, and resulted in bioactive EV, without significant loss in particle yield or changes in size, morphology, and hetIL-15/lactadherin density. Taken together, the combination of bioreactor culture with TFF + SEC comprises a scalable, efficient method for the production of highly purified, bioactive EV carrying hetIL-15/lactadherin, which may be useful in targeted cancer immunotherapy approaches. PMID:29535850
Ortiz, Diego; Litvin, Alexander G; Salas Fernandez, Maria G
2018-01-01
The development of high-yielding crops with drought tolerance is necessary to increase food, feed, fiber and fuel production. Methods that create similar environmental conditions for a large number of genotypes are essential to investigate plant responses to drought in gene discovery studies. Modern facilities that control water availability for each plant remain cost-prohibited to some sections of the research community. We present an alternative cost-effective automated irrigation system scalable for a high-throughput and controlled dry-down treatment of plants. This system was tested in sorghum using two experiments. First, four genotypes were subjected to ten days of dry-down to achieve three final Volumetric Water Content (VWC) levels: drought (0.10 and 0.20 m3 m-3) and control (0.30 m3 m-3). The final average VWC was 0.11, 0.22, and 0.31 m3 m-3, respectively, and significant differences in biomass accumulation were observed between control and drought treatments. Second, 42 diverse sorghum genotypes were subjected to a seven-day dry-down treatment for a final drought stress of 0.15 m3 m-3 VWC. The final average VWC was 0.17 m3 m-3, and plants presented significant differences in photosynthetic rate during the drought period. These results demonstrate that cost-effective automation systems can successfully control substrate water content for each plant, to accurately compare their phenotypic responses to drought, and be scaled up for high-throughput phenotyping studies.
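The dry-down treatment described above amounts to a per-pot feedback loop on substrate VWC. The sketch below is a hypothetical controller illustrating that loop; the sensor and valve interfaces are placeholders, and only the target levels (0.10, 0.20 and 0.30 m3 m-3) are taken from the abstract.

    # Minimal sketch of a per-pot VWC dry-down controller. The read_vwc and
    # open_valve callables are hypothetical hardware interfaces.

    def needs_water(current_vwc, target_vwc, deadband=0.01):
        """Irrigate only while the pot is below its target minus a deadband."""
        return current_vwc < (target_vwc - deadband)

    def control_pass(pots, read_vwc, open_valve, pulse_seconds=5):
        """One pass over all pots: read substrate VWC (m3 m-3) and pulse-irrigate
        any pot that has dried below its assigned target."""
        for pot_id, target in pots.items():
            if needs_water(read_vwc(pot_id), target):
                open_valve(pot_id, pulse_seconds)   # short pulse, re-check next pass

    # Example treatment assignment mirroring the reported VWC levels.
    pots = {"geno1_rep1": 0.30, "geno1_rep2": 0.20, "geno2_rep1": 0.10}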
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
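For orientation, the intrusive formulation expands the stochastic solution in an orthogonal polynomial basis and Galerkin-projects the governing equations; in generic notation (not quoted from the paper), for a diffusion problem with random coefficient this reads

    u(x,\xi) \;\approx\; \sum_{j=0}^{P} u_{j}(x)\,\Psi_{j}(\xi),
    \qquad
    \kappa(x,\xi) \;\approx\; \sum_{i=0}^{P} \kappa_{i}(x)\,\Psi_{i}(\xi),

    \sum_{i=0}^{P}\sum_{j=0}^{P} \langle \Psi_{i}\Psi_{j}\Psi_{k} \rangle\, K_{i}\, u_{j}
        \;=\; \langle f\,\Psi_{k} \rangle, \qquad k = 0,\dots,P,

where K_i is the stiffness matrix assembled from kappa_i(x). The coupled block system over all chaos coefficients u_j is what the domain decomposition solver and its preconditioners must handle concurrently in the spatial and stochastic dimensions.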
A scalable strategy for high-throughput GFP tagging of endogenous human proteins.
Leonetti, Manuel D; Sekine, Sayaka; Kamiyama, Daichi; Weissman, Jonathan S; Huang, Bo
2016-06-21
A central challenge of the postgenomic era is to comprehensively characterize the cellular role of the ∼20,000 proteins encoded in the human genome. To systematically study protein function in a native cellular background, libraries of human cell lines expressing proteins tagged with a functional sequence at their endogenous loci would be very valuable. Here, using electroporation of Cas9 nuclease/single-guide RNA ribonucleoproteins and taking advantage of a split-GFP system, we describe a scalable method for the robust, scarless, and specific tagging of endogenous human genes with GFP. Our approach requires no molecular cloning and allows a large number of cell lines to be processed in parallel. We demonstrate the scalability of our method by targeting 48 human genes and show that the resulting GFP fluorescence correlates with protein expression levels. We next present how our protocols can be easily adapted for the tagging of a given target with GFP repeats, critically enabling the study of low-abundance proteins. Finally, we show that our GFP tagging approach allows the biochemical isolation of native protein complexes for proteomic studies. Taken together, our results pave the way for the large-scale generation of endogenously tagged human cell lines for the proteome-wide analysis of protein localization and interaction networks in a native cellular context.
Relax with CouchDB--into the non-relational DBMS era of bioinformatics.
Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R
2012-07-01
With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.
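CouchDB exposes documents and map/reduce views over plain HTTP, which is what makes wrapping such resources in web services straightforward. The sketch below uses Python's requests library against an assumed local CouchDB instance; the database, document and view names are hypothetical illustrations, not the published geneSmash, drugBase or HapMap-CN endpoints.

    # Minimal illustration of CouchDB's HTTP interface. Host, database and
    # view names are hypothetical examples.
    import json
    import requests

    BASE = "http://localhost:5984"          # assumed local CouchDB instance

    # Fetch a single document by id.
    doc = requests.get(f"{BASE}/genes/TP53").json()

    # Query a map/reduce view (views live under a design document); keys are JSON-encoded.
    resp = requests.get(
        f"{BASE}/genes/_design/annotations/_view/by_symbol",
        params={"key": json.dumps("TP53"), "include_docs": "true"},
    )
    for row in resp.json().get("rows", []):
        print(row["id"], row.get("doc", {}))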
Performances of multiprocessor multidisk architectures for continuous media storage
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.
1996-03-01
Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes through bottleneck performance evaluation and simulation the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located closely to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.
Cai, Shaobo; Pourdeyhimi, Behnam; Loboa, Elizabeth G
2017-06-28
In this study, we report a high-throughput fabrication method at industrial pilot scale to produce a silver-nanoparticles-doped nanoclay-polylactic acid composite with a novel synergistic antibacterial effect. The obtained nanocomposite has a significantly lower affinity for bacterial adhesion, allowing the loading amount of silver nanoparticles to be tremendously reduced while maintaining satisfactory antibacterial efficacy at the material interface. This is a great advantage for many antibacterial applications in which cost is a consideration. Furthermore, unlike previously reported methods that require additional chemical reduction processes to produce the silver-nanoparticles-doped nanoclay, an in situ preparation method was developed in which silver nanoparticles were created simultaneously during the composite fabrication process by thermal reduction. This is the first report to show that altered material surface submicron structures created with the loading of nanoclay enables the creation of a nanocomposite with significantly lower affinity for bacterial adhesion. This study provides a promising scalable approach to produce antibacterial polymeric products with minimal changes to industry standard equipment, fabrication processes, or raw material input cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Hagyoung; Shin, Seokyoon; Jeon, Hyeongtag, E-mail: hjeon@hanyang.ac.kr
2016-01-15
The authors developed a high throughput (70 Å/min) and scalable space-divided atomic layer deposition (ALD) system for thin film encapsulation (TFE) of flexible organic light-emitting diode (OLED) displays at low temperatures (<100 °C). In this paper, the authors report the excellent moisture barrier properties of Al2O3 films deposited on 2G glass substrates of an industrially relevant size (370 × 470 mm²) using the newly developed ALD system. This new ALD system reduced the ALD cycle time to less than 1 s. A growth rate of 0.9 Å/cycle was achieved using trimethylaluminum as an Al source and O3 as an O reactant. The morphological features and step coverage of the Al2O3 films were investigated using field emission scanning electron microscopy. The chemical composition was analyzed using Auger electron spectroscopy. These deposited Al2O3 films demonstrated a good optical transmittance higher than 95% in the visible region based on the ultraviolet-visible spectrometer measurements. Water vapor transmission rates lower than the detection limit of the MOCON test (less than 3.0 × 10⁻³ g/m² day) were obtained for the flexible substrates. Based on these results, Al2O3 deposited using our new high-throughput and scalable spatial ALD is considered a good candidate for preparation of TFE films of flexible OLEDs.
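The reported figures are mutually consistent, which a quick Python check makes explicit:

    # Consistency check of the spatial-ALD numbers quoted above: 0.9 A/cycle at
    # ~70 A/min implies a cycle time below one second.
    growth_per_cycle = 0.9        # angstrom per ALD cycle
    throughput = 70.0             # angstrom per minute

    cycles_per_minute = throughput / growth_per_cycle
    cycle_time_s = 60.0 / cycles_per_minute
    print(round(cycles_per_minute, 1), "cycles/min")   # ~77.8
    print(round(cycle_time_s, 2), "s per cycle")       # ~0.77 s, i.e. < 1 s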
NASA Astrophysics Data System (ADS)
Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.
2016-03-01
Dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm with a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms perform well, but they must be pushed to a higher level in order to keep pace with roadmap speed and requirements: for example, handling defects and CD in a single-step algorithm, customizing algorithms and output features for each R&D team environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reductions, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we show the integration of a chemistry module dedicated to electronics materials like Direct Self Assembly features. We also show a new generation of image analysis algorithms able to manage defect rates, image classifications, CD and roughness measurements at the same time, with high-throughput performance compatible with HVM. In a second part, we assess the reliability, the customizability of the algorithms and the software platform's capability to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we demonstrate how such an environment has allowed a drastic reduction of data analysis cycle time.
NASA Astrophysics Data System (ADS)
Wang, Wei; Ruiz, Isaac; Lee, Ilkeun; Zaera, Francisco; Ozkan, Mihrimah; Ozkan, Cengiz S.
2015-04-01
Optimization of the electrode/electrolyte double-layer interface is a key factor for improving electrode performance of aqueous electrolyte based supercapacitors (SCs). Here, we report the improved functionality of carbon materials via a non-invasive, high-throughput, and inexpensive UV generated ozone (UV-ozone) treatment. This process allows precise tuning of the graphene and carbon nanotube hybrid foam (GM) transitionally from ultrahydrophobic to hydrophilic within 60 s. The continuous tuning of surface energy can be controlled by simply varying the UV-ozone exposure time, while the ozone-oxidized carbon nanostructure maintains its integrity. Symmetric SCs based on the UV-ozone treated GM foam demonstrated enhanced rate performance. This technique can be readily applied to other CVD-grown carbonaceous materials by taking advantage of its ease of processing, low cost, scalability, and controllability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wall, Andrew J.; Capo, Rosemary C.; Stewart, Brian W.
2016-09-22
This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method as well as the best practices for optimizing Sr isotope analysis by MC-ICP-MS is presented. Lastly, this report offers tools for data handling and data reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, which help avoid issues of data glut associated with high sample throughput rapid analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakala, Jacqueline Alexandra
2016-11-22
This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method as well as the best practices for optimizing Sr isotope analysis by MC-ICP-MS is presented. Lastly, this report offers tools for data handling and data reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, which help avoid issues of data glut associated with high sample throughput rapid analysis.
KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery
NASA Astrophysics Data System (ADS)
Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan
2013-05-01
KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
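The tiled-pyramid-plus-cache idea at the core of such viewers can be illustrated with a small sketch; the tile addressing and loader below are hypothetical and are not KOLAM's own API.

    # Sketch of a multiscale tile lookup with an in-memory LRU cache, in the
    # spirit of a tiled pyramid viewer. Tile naming and the loader callable
    # are hypothetical, not KOLAM's API.
    from collections import OrderedDict

    class TilePyramidCache:
        def __init__(self, load_tile, capacity=1024):
            self.load_tile = load_tile      # callable (level, tx, ty, frame) -> tile data
            self.capacity = capacity
            self.cache = OrderedDict()      # least recently used entries first

        def get(self, level, tx, ty, frame):
            key = (level, tx, ty, frame)
            if key in self.cache:
                self.cache.move_to_end(key)     # mark as most recently used
                return self.cache[key]
            tile = self.load_tile(*key)         # miss: fetch from disk or network
            self.cache[key] = tile
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used tile
            return tile

    def tile_index(x, y, level, tile_size=256):
        """Map a full-resolution coordinate to a tile index at a pyramid level."""
        scale = 2 ** level                      # level 0 = full resolution
        return int(x // (tile_size * scale)), int(y // (tile_size * scale))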
Kastner, Elisabeth; Kaur, Randip; Lowry, Deborah; Moghaddam, Behfar; Wilkinson, Alexander; Perrie, Yvonne
2014-12-30
Microfluidics has recently emerged as a new method of manufacturing liposomes, which allows for reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters in a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiment and multivariate data analysis were used for increased process understanding and development of predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes, with a strong correlation with vesicle size, demonstrating the ability to control liposome size in-process; the resulting liposome size correlated with the FRR in the microfluidics process, with liposomes of 50 nm being reproducibly manufactured. Furthermore, we demonstrate the potential for high-throughput manufacturing of liposomes using microfluidics with a four-fold increase in the volumetric flow rate, maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and was modelled using predictive modeling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows for the generation of a design-space for controlled particle characteristics. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
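Because TFR and FRR jointly determine the two inlet flows, translating a process set-point into pump settings is simple arithmetic; the helper below is illustrative and its parameter names and values are not taken from the study.

    # Translate a total flow rate (TFR) and flow rate ratio (FRR, aqueous:solvent)
    # into the two inlet flow rates of a microfluidic mixing device.

    def inlet_flows(tfr_ml_min, frr):
        """Return (aqueous, solvent) flow rates in mL/min for aqueous:solvent = frr:1."""
        solvent = tfr_ml_min / (frr + 1.0)
        return tfr_ml_min - solvent, solvent

    # e.g. a 3:1 FRR at a baseline TFR, and the same ratio after a four-fold
    # increase in volumetric flow rate as described above.
    print(inlet_flows(2.0, 3.0))    # (1.5, 0.5) mL/min
    print(inlet_flows(8.0, 3.0))    # (6.0, 2.0) mL/min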
GiNA, an Efficient and High-Throughput Software for Horticultural Phenotyping
Diaz-Garcia, Luis; Covarrubias-Pazaran, Giovanny; Schlautman, Brandon; Zalapa, Juan
2016-01-01
Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed, but most of them are expensive, species-dependent, complex to use, and available only for major crops. To overcome such limitations, we present the open-source software GiNA, which is a simple and free tool for measuring horticultural traits such as shape- and color-related parameters of fruits, vegetables, and seeds. GiNA is multiplatform software available in both R and MATLAB® programming languages and uses conventional images from digital cameras with minimal requirements. It can process up to 11 different horticultural morphological traits such as length, width, two-dimensional area, volume, projected skin, surface area, RGB color, among other parameters. Different validation tests produced highly consistent results under different lighting conditions and camera setups making GiNA a very reliable platform for high-throughput phenotyping. In addition, five-fold cross validation between manually generated and GiNA measurements for length and width in cranberry fruits were 0.97 and 0.92. In addition, the same strategy yielded prediction accuracies above 0.83 for color estimates produced from images of cranberries analyzed with GiNA compared to total anthocyanin content (TAcy) of the same fruits measured with the standard methodology of the industry. Our platform provides a scalable, easy-to-use and affordable tool for massive acquisition of phenotypic data of fruits, seeds, and vegetables. PMID:27529547
Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing
NASA Astrophysics Data System (ADS)
Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey
Recent advances in cloud computing have made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some of the lightweight alloys, including Li and Mg compounds. We conclude that compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
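The stability screen reduces, for each candidate, to comparing the compound's total energy with its elemental references; in generic notation (an illustration, not an equation quoted from the abstract),

    \Delta H_{\mathrm{f}}(\mathrm{A}_{x}\mathrm{B}_{y}\mathrm{C}_{z})
        \;=\; \frac{E(\mathrm{A}_{x}\mathrm{B}_{y}\mathrm{C}_{z})
        \;-\; x\,E(\mathrm{A}) \;-\; y\,E(\mathrm{B}) \;-\; z\,E(\mathrm{C})}{x+y+z},

with negative values indicating stability against decomposition into the elemental phases; this is the quantity behind the negative formation enthalpies reported for some Li- and Mg-containing systems.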
Adverse outcome pathways (AOP) link known population outcomes to a molecular initiating event (MIE) that can be quantified using high-throughput in vitro methods. Practical application of AOPs in chemical-specific risk assessment requires consideration of exposure and absorption,...
CellAtlasSearch: a scalable search engine for single cells.
Srivastava, Divyanshu; Iyer, Arvind; Kumar, Vibhor; Sengupta, Debarka
2018-05-21
Owing to the advent of high throughput single cell transcriptomics, the past few years have seen exponential growth in production of gene expression data. Recently, efforts have been made by various research groups to homogenize and store single cell expression from a large number of studies. The true value of this ever increasing data deluge can be unlocked by making it searchable. To this end, we propose CellAtlasSearch, a novel search architecture for high dimensional expression data, which is massively parallel as well as light-weight, thus infinitely scalable. In CellAtlasSearch, we use a Graphics Processing Unit (GPU) friendly version of Locality Sensitive Hashing (LSH) for unmatched speedup in data processing and query. Currently, CellAtlasSearch features over 300 000 reference expression profiles including both bulk and single-cell data. It enables the user to query individual single-cell transcriptomes and find matching samples from the database along with necessary meta information. CellAtlasSearch aims to assist researchers and clinicians in characterizing unannotated single cells. It also facilitates noise-free, low dimensional representation of single-cell expression profiles by projecting them on a wide variety of reference samples. The web-server is accessible at: http://www.cellatlassearch.com.
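Signed random projections are one GPU-friendly family of LSH; the NumPy sketch below illustrates the idea of hashing expression profiles to short binary signatures and ranking matches by Hamming distance, and is not the CellAtlasSearch implementation.

    # Random-projection LSH sketch for fast matching of expression profiles.
    # Illustration only; not CellAtlasSearch's code.
    import numpy as np

    def lsh_signatures(X, n_bits=64, seed=0):
        """Hash each row of X (cells x genes) to an n_bits binary signature;
        reference and query must use the same seed (same hyperplanes)."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((X.shape[1], n_bits))
        return (X @ planes) > 0

    def hamming_matches(query_sig, ref_sigs, top_k=5):
        """Rank reference cells by Hamming distance to the query signature."""
        distances = (ref_sigs != query_sig).sum(axis=1)
        return np.argsort(distances)[:top_k]

    data_rng = np.random.default_rng(1)
    reference = data_rng.standard_normal((10_000, 2_000))    # synthetic profiles
    query = reference[42] + 0.1 * data_rng.standard_normal(2_000)

    ref_sigs = lsh_signatures(reference)
    query_sig = lsh_signatures(query[None, :])[0]
    print(hamming_matches(query_sig, ref_sigs))              # cell 42 should rank first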
Chute, Christopher G; Pathak, Jyotishman; Savova, Guergana K; Bailey, Kent R; Schor, Marshall I; Hart, Lacey A; Beebe, Calvin E; Huff, Stanley M
2011-01-01
SHARPn is a collaboration among 16 academic and industry partners committed to the production and distribution of high-quality software artifacts that support the secondary use of EMR data. Areas of emphasis are data normalization, natural language processing, high-throughput phenotyping, and data quality metrics. Our work avails the industrial scalability afforded by the Unstructured Information Management Architecture (UIMA) from IBM Watson Research labs, the same framework which underpins the Watson Jeopardy demonstration. This descriptive paper outlines our present work and achievements, and presages our trajectory for the remainder of the funding period. The project is one of the four Strategic Health IT Advanced Research Projects (SHARP) projects funded by the Office of the National Coordinator in 2010. PMID:22195076
Payne, Andrew C; Andregg, Michael; Kemmish, Kent; Hamalainen, Mark; Bowell, Charlotte; Bleloch, Andrew; Klejwa, Nathan; Lehrach, Wolfgang; Schatz, Ken; Stark, Heather; Marblestone, Adam; Church, George; Own, Christopher S; Andregg, William
2013-01-01
We present "molecular threading", a surface independent tip-based method for stretching and depositing single and double-stranded DNA molecules. DNA is stretched into air at a liquid-air interface, and can be subsequently deposited onto a dry substrate isolated from solution. The design of an apparatus used for molecular threading is presented, and fluorescence and electron microscopies are used to characterize the angular distribution, straightness, and reproducibility of stretched DNA deposited in arrays onto elastomeric surfaces and thin membranes. Molecular threading demonstrates high straightness and uniformity over length scales from nanometers to micrometers, and represents an alternative to existing DNA deposition and linearization methods. These results point towards scalable and high-throughput precision manipulation of single-molecule polymers.
NASA Astrophysics Data System (ADS)
Watkins, James
2013-03-01
Roll-to-roll (R2R) technologies provide routes for continuous production of flexible, nanostructured materials and devices with high throughput and low cost. We employ additive-driven self-assembly to produce well-ordered polymer/nanoparticle hybrid materials that can serve as active device layers, we use highly filled nanoparticle/polymer hybrids for applications that require tailored dielectric constant or refractive index, and we employ R2R nanoimprint lithography for device scale patterning. Specific examples include the fabrication of flexible floating gate memory and large area films for optical/EM management. Our newly constructed R2R processing facility includes a custom designed, precision R2R UV-assisted nanoimprint lithography (NIL) system and hybrid nanostructured materials coaters.
The Newick utilities: high-throughput phylogenetic tree processing in the Unix shell
Junier, Thomas; Zdobnov, Evgeny M.
2010-01-01
Summary: We present a suite of Unix shell programs for processing any number of phylogenetic trees of any size. They perform frequently-used tree operations without requiring user interaction. They also allow tree drawing as scalable vector graphics (SVG), suitable for high-quality presentations and further editing, and as ASCII graphics for command-line inspection. As an example we include an implementation of bootscanning, a procedure for finding recombination breakpoints in viral genomes. Availability: C source code, Python bindings and executables for various platforms are available from http://cegg.unige.ch/newick_utils. The distribution includes a manual and example data. The package is distributed under the BSD License. Contact: thomas.junier@unige.ch PMID:20472542
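The utilities themselves are C programs with Python bindings; as a language-neutral illustration of the tree structure such tools manipulate, here is a deliberately minimal recursive-descent Newick parser in Python (not the package's own API).

    # Minimal Newick parser producing nested (children, label, branch_length)
    # tuples. Illustration only; the Newick utilities expose their own C and
    # Python interfaces.

    def parse_newick(s):
        pos = 0

        def node():
            nonlocal pos
            children = []
            if s[pos] == "(":                    # internal node
                pos += 1
                children.append(node())
                while s[pos] == ",":
                    pos += 1
                    children.append(node())
                pos += 1                         # consume ')'
            start = pos
            while s[pos] not in ",():;":         # optional node label
                pos += 1
            label = s[start:pos]
            length = None
            if s[pos] == ":":                    # optional branch length
                pos += 1
                start = pos
                while s[pos] not in ",();":
                    pos += 1
                length = float(s[start:pos])
            return children, label, length

        return node()

    print(parse_newick("((A:0.1,B:0.2)AB:0.05,C:0.3)root;"))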
An Assessment of Gigabit Ethernet Technology and Its Applications at the NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Bakes, Catherine Murphy; Kim, Chan M.; Ramos, Calvin T.
2000-01-01
This paper describes Gigabit Ethernet and its role in supporting R&D programs at NASA Glenn. These programs require an advanced high-speed network capable of transporting multimedia traffic, including real-time visualization, high- resolution graphics, and scientific data. GigE is a 1 Gbps extension to 10 and 100 Mbps Ethernet. The IEEE 802.3z and 802.3ab standards define the MAC layer and 1000BASE-X and 1000BASE-T physical layer specifications for GigE. GigE switches and buffered distributors support IEEE 802.3x flow control. The paper also compares GigE with ATM in terms of quality of service, data rate, throughput, scalability, interoperability, network management, and cost of ownership.
Polymer waveguides for electro-optical integration in data centers and high-performance computers.
Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan
2015-02-23
To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.
Advanced technologies for scalable ATLAS conditions database access on the grid
NASA Astrophysics Data System (ADS)
Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.
2010-04-01
During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine a highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set up. To parameterize an inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
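As a point of reference for the gradient-type optimization mentioned above (generic notation, not the solver's exact objective), the regularized misfit and its adjoint-based gradient take the form

    \phi(\mathbf{m}) \;=\; \sum_{s} \bigl\lVert \mathbf{W}_{s}\bigl(\mathbf{d}_{s}^{\mathrm{obs}} - F_{s}(\mathbf{m})\bigr) \bigr\rVert^{2}
        \;+\; \lambda\, \bigl\lVert \mathbf{R}\,\mathbf{m} \bigr\rVert^{2},

    \nabla_{\mathbf{m}}\phi \;=\; -2\sum_{s} \mathrm{Re}\!\left[ \mathbf{J}_{s}^{\dagger}\,
        \mathbf{W}_{s}^{\dagger}\mathbf{W}_{s}\bigl(\mathbf{d}_{s}^{\mathrm{obs}} - F_{s}(\mathbf{m})\bigr) \right]
        \;+\; 2\lambda\, \mathbf{R}^{\dagger}\mathbf{R}\,\mathbf{m},

where F_s is the forward operator (here supplied by extrEMe) for response s and the action of the adjoint sensitivity J_s† is obtained from adjoint-source forward solves, so that each quasi-Newton iteration requires only forward and adjoint modelling runs rather than an explicit Jacobian.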
Advanced optical components for next-generation photonic networks
NASA Astrophysics Data System (ADS)
Yoo, S. J. B.
2003-08-01
Future networks will require very high throughput, carrying dominantly data-centric traffic. The role of Photonic Networks employing all-optical systems will become increasingly important in providing scalable bandwidth, agile reconfigurability, and low-power consumptions in the future. In particular, the self-similar nature of data traffic indicates that packet switching and burst switching will be beneficial in the Next Generation Photonic Networks. While the natural conclusion is to pursue Photonic Packet Switching and Photonic Burst Switching systems, there are significant challenges in realizing such a system due to practical limitations in optical component technologies. Lack of a viable all-optical memory technology will continue to drive us towards exploring rapid reconfigurability in the wavelength domain. We will introduce and discuss the advanced optical component technologies behind the Photonic Packet Routing system designed and demonstrated at UC Davis. The system is capable of packet switching and burst switching, as well as circuit switching with 600 psec switching speed and scalability to 42 petabit/sec aggregated switching capacity. By utilizing a combination of rapidly tunable wavelength conversion and a uniform-loss cyclic frequency (ULCF) arrayed waveguide grating router (AWGR), the system is capable of rapidly switching the packets in wavelength, time, and space domains. The label swapping module inside the Photonic Packet Routing system containing a Mach-Zehnder wavelength converter and a narrow-band fiber Bragg-grating achieves all-optical label swapping with optical 2R (potentially 3R) regeneration while maintaining optical transparency for the data payload. By utilizing the advanced optical component technologies, the Photonic Packet Routing system successfully demonstrated error-free, cascaded, multi-hop photonic packet switching and routing with optical-label swapping. This paper will review the advanced optical component technologies and their role in the Next Generation Photonic Networks.
Multi-neuron intracellular recording in vivo via interacting autopatching robots
Holst, Gregory L; Singer, Annabelle C; Han, Xue; Brown, Emery N
2018-01-01
The activities of groups of neurons in a circuit or brain region are important for neuronal computations that contribute to behaviors and disease states. Traditional extracellular recordings have been powerful and scalable, but much less is known about the intracellular processes that lead to spiking activity. We present a robotic system, the multipatcher, capable of automatically obtaining blind whole-cell patch clamp recordings from multiple neurons simultaneously. The multipatcher significantly extends automated patch clamping, or 'autopatching', to guide four interacting electrodes in a coordinated fashion, avoiding mechanical coupling in the brain. We demonstrate its performance in the cortex of anesthetized and awake mice. A multipatcher with four electrodes took an average of 10 min to obtain dual or triple recordings in 29% of trials in anesthetized mice, and in 18% of the trials in awake mice, thus illustrating practical yield and throughput to obtain multiple, simultaneous whole-cell recordings in vivo. PMID:29297466
Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric
2014-01-29
Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.
Sequential data access with Oracle and Hadoop: a performance comparison
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Canali, Luca; Grancher, Eric
2014-06-01
The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions like MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
Guimaraes, S; Pruvost, M; Daligault, J; Stoetzel, E; Bennett, E A; Côté, N M-L; Nicolas, V; Lalis, A; Denys, C; Geigl, E-M; Grange, T
2017-05-01
We present a cost-effective metabarcoding approach, aMPlex Torrent, which relies on an improved multiplex PCR adapted to highly degraded DNA, combining barcoding and next-generation sequencing to simultaneously analyse many heterogeneous samples. We demonstrate the strength of these improvements by generating a phylochronology through the genotyping of ancient rodent remains from a Moroccan cave whose stratigraphy covers the last 120 000 years. Rodents are important for epidemiology, agronomy and ecological investigations and can act as bioindicators for human- and/or climate-induced environmental changes. Efficient and reliable genotyping of ancient rodent remains has the potential to deliver valuable phylogenetic and paleoecological information. The analysis of multiple ancient skeletal remains of very small size with poor DNA preservation, however, requires a sensitive high-throughput method to generate sufficient data. We show this approach to be particularly well suited to accessing this otherwise difficult taxonomic and genetic resource. As a highly scalable, lower cost and less labour-intensive alternative to targeted sequence capture approaches, we propose the aMPlex Torrent strategy to be a useful tool for the genetic analysis of multiple degraded samples in studies involving ecology, archaeology, conservation and evolutionary biology. © 2016 John Wiley & Sons Ltd.
PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.
2014-05-27
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable to meet the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It not only addresses variations in the input data rates but also the underlying cloud infrastructure. In addition, we also propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
GPU-accelerated Red Blood Cells Simulations with Transport Dissipative Particle Dynamics.
Blumers, Ansel L; Tang, Yu-Hang; Li, Zhen; Li, Xuejin; Karniadakis, George E
2017-08-01
Mesoscopic numerical simulations provide a unique approach for the quantification of the chemical influences on red blood cell functionalities. The transport Dissipative Particle Dynamics (tDPD) method can lead to such effective multiscale simulations due to its ability to simultaneously capture mesoscopic advection, diffusion, and reaction. In this paper, we present a GPU-accelerated red blood cell simulation package based on a tDPD adaptation of our red blood cell model, which can correctly recover the cell membrane viscosity, elasticity, bending stiffness, and cross-membrane chemical transport. The package essentially processes all computational workloads in parallel by GPU, and it incorporates multi-stream scheduling and non-blocking MPI communications to improve inter-node scalability. Our code is validated for accuracy and compared against the CPU counterpart for speed. Strong scaling and weak scaling are also presented to characterize scalability. We observe a speedup of 10.1 on one GPU over all 16 cores within a single node, and a weak scaling efficiency of 91% across 256 nodes. The program enables quick-turnaround and high-throughput numerical simulations for investigating chemical-driven red blood cell phenomena and disorders.
Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862
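The layer-to-MCS assignment problem the abstract describes can be pictured with a toy search. The sketch below brute-forces the choice instead of solving the paper's ILP, ignores heterogeneous per-user channel conditions, and all rates, efficiencies and utilities are invented for illustration.

```python
from itertools import product

# Illustrative numbers only: per-layer rate requirements, MCS spectral
# efficiencies, and layer utilities are invented, not taken from the paper.
layer_rate = {"base": 2.0, "enh1": 3.0, "enh2": 5.0}        # required Mbps per layer
mcs_efficiency = {"QPSK": 1.0, "16QAM": 2.0, "64QAM": 3.0}  # relative bits/s/Hz
layer_value = {"base": 3.0, "enh1": 2.0, "enh2": 1.0}       # utility of delivering a layer
bandwidth = 5.0        # MHz of the multicast channel
time_budget = 1.0      # normalised frame duration


def airtime(layer, mcs):
    """Fraction of the frame needed to carry `layer` at `mcs`."""
    return layer_rate[layer] / (bandwidth * mcs_efficiency[mcs])


best = None
# Each layer is either dropped (None) or sent with exactly one MCS.
for choice in product([None, *mcs_efficiency], repeat=len(layer_rate)):
    assignment = dict(zip(layer_rate, choice))
    used = sum(airtime(l, m) for l, m in assignment.items() if m)
    utility = sum(layer_value[l] for l, m in assignment.items() if m)
    if used <= time_budget and (best is None or utility > best[0]):
        best = (utility, used, assignment)

utility, used, assignment = best
print(f"utility={utility:.1f} airtime={used:.2f} assignment={assignment}")
```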
Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)
Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R
2005-01-01
Background BLAST is one of the most common and useful tools for Genetic Research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy to use, fault tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LAN). W.ND-BLAST provides intuitive Graphic User Interfaces (GUI) for BLAST database creation, BLAST execution, BLAST output evaluation and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower performance class machines. Finally, the comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components provide exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
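The distribution idea, split the query set, run BLAST on each chunk, merge the hits, can be sketched in a few lines. This toy uses multiprocessing on one host rather than W.ND-BLAST's Windows .NET LAN distribution, and run_blast is a hypothetical wrapper assuming a local BLAST+ blastn binary and a database named nt_local; the input file name is also an assumption.

```python
import subprocess
from multiprocessing import Pool


def chunk_fasta(path, n_chunks):
    """Split a FASTA file into roughly equal lists of records."""
    records, current = [], []
    with open(path) as fh:
        for line in fh:
            if line.startswith(">") and current:
                records.append("".join(current))
                current = []
            current.append(line)
    if current:
        records.append("".join(current))
    return [records[i::n_chunks] for i in range(n_chunks)]


def run_blast(chunk):
    """Hypothetical worker: pipe one chunk of queries through local blastn."""
    proc = subprocess.run(
        ["blastn", "-db", "nt_local", "-outfmt", "6"],
        input="".join(chunk), capture_output=True, text=True)
    return proc.stdout


if __name__ == "__main__":
    chunks = chunk_fasta("queries.fasta", n_chunks=12)   # hypothetical input file
    with Pool(12) as pool:
        results = pool.map(run_blast, chunks)
    with open("all_hits.tsv", "w") as out:
        out.writelines(results)
```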
PCR cycles above routine numbers do not compromise high-throughput DNA barcoding results.
Vierna, J; Doña, J; Vizcaíno, A; Serrano, D; Jovani, R
2017-10-01
High-throughput DNA barcoding has become essential in ecology and evolution, but some technical questions still remain. Increasing the number of PCR cycles above the routine 20-30 cycles is a common practice when working with old-type specimens, which provide only small amounts of DNA, or when facing annealing issues with the primers. However, increasing the number of cycles can raise the number of artificial mutations due to polymerase errors. In this work, we sequenced 20 COI libraries in the Illumina MiSeq platform. Libraries were prepared with 40, 45, 50, 55, and 60 PCR cycles from four individuals belonging to four species of four genera of cephalopods. We found no relationship between the number of PCR cycles and the number of mutations despite using a nonproofreading polymerase. Moreover, even when using a high number of PCR cycles, the resulting number of mutations was low enough not to be an issue in the context of high-throughput DNA barcoding (but may still remain an issue in DNA metabarcoding due to chimera formation). We conclude that the common practice of increasing the number of PCR cycles should not negatively impact the outcome of a high-throughput DNA barcoding study in terms of the occurrence of point mutations.
ILP-based maximum likelihood genome scaffolding
2014-01-01
Background Interest in de novo genome assembly has been renewed in the past decade due to rapid advances in high-throughput sequencing (HTS) technologies which generate relatively short reads resulting in highly fragmented assemblies consisting of contigs. Additional long-range linkage information is typically used to orient, order, and link contigs into larger structures referred to as scaffolds. Due to library preparation artifacts and erroneous mapping of reads originating from repeats, scaffolding remains a challenging problem. In this paper, we provide a scalable scaffolding algorithm (SILP2) employing a maximum likelihood model capturing read mapping uncertainty and/or non-uniformity of contig coverage which is solved using integer linear programming. A Non-Serial Dynamic Programming (NSDP) paradigm is applied to render our algorithm useful in the processing of larger mammalian genomes. To compare scaffolding tools, we employ novel quantitative metrics in addition to the extant metrics in the field. We have also expanded the set of experiments to include scaffolding of low-complexity metagenomic samples. Results SILP2 achieves better scalability through a more efficient NSDP algorithm than the previous release of SILP. The results show that SILP2 compares favorably to the previous methods OPERA and MIP in both scalability and accuracy for scaffolding single genomes of up to human size, and significantly outperforms them on scaffolding low-complexity metagenomic samples. Conclusions Equipped with NSDP, SILP2 is able to scaffold large mammalian genomes, resulting in the longest and most accurate scaffolds. The ILP formulation for the maximum likelihood model is shown to be flexible enough to handle metagenomic samples. PMID:25253180
Lessons from high-throughput protein crystallization screening: 10 years of practical experience
Luft, JR; Snell, EH; DeTitta, GT
2011-01-01
Introduction X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and in terms of pharmaceutical design is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize the success while minimizing the effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-, multiuser-laboratory or industrial scale. Expert Opinion High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073
A High-Throughput Process for the Solid-Phase Purification of Synthetic DNA Sequences
Grajkowski, Andrzej; Cieślak, Jacek; Beaucage, Serge L.
2017-01-01
An efficient process for the purification of synthetic phosphorothioate and native DNA sequences is presented. The process is based on the use of an aminopropylated silica gel support functionalized with aminooxyalkyl functions to enable capture of DNA sequences through an oximation reaction with the keto function of a linker conjugated to the 5′-terminus of DNA sequences. Deoxyribonucleoside phosphoramidites carrying this linker, as a 5′-hydroxyl protecting group, have been synthesized for incorporation into DNA sequences during the last coupling step of a standard solid-phase synthesis protocol executed on a controlled pore glass (CPG) support. Solid-phase capture of the nucleobase- and phosphate-deprotected DNA sequences released from the CPG support is demonstrated to proceed near quantitatively. Shorter than full-length DNA sequences are first washed away from the capture support; the solid-phase purified DNA sequences are then released from this support upon reaction with tetra-n-butylammonium fluoride in dry dimethylsulfoxide (DMSO) and precipitated in tetrahydrofuran (THF). The purity of solid-phase-purified DNA sequences exceeds 98%. The simulated high-throughput and scalability features of the solid-phase purification process are demonstrated without sacrificing purity of the DNA sequences. PMID:28628204
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
Scalable electrophysiology in intact small animals with nanoscale suspended electrode arrays
NASA Astrophysics Data System (ADS)
Gonzales, Daniel L.; Badhiwala, Krishna N.; Vercosa, Daniel G.; Avants, Benjamin W.; Liu, Zheng; Zhong, Weiwei; Robinson, Jacob T.
2017-07-01
Electrical measurements from large populations of animals would help reveal fundamental properties of the nervous system and neurological diseases. Small invertebrates are ideal for these large-scale studies; however, patch-clamp electrophysiology in microscopic animals typically requires invasive dissections and is low-throughput. To overcome these limitations, we present nano-SPEARs: suspended electrodes integrated into a scalable microfluidic device. Using this technology, we have made the first extracellular recordings of body-wall muscle electrophysiology inside an intact roundworm, Caenorhabditis elegans. We can also use nano-SPEARs to record from multiple animals in parallel and even from other species, such as Hydra littoralis. Furthermore, we use nano-SPEARs to establish the first electrophysiological phenotypes for C. elegans models for amyotrophic lateral sclerosis and Parkinson's disease, and show a partial rescue of the Parkinson's phenotype through drug treatment. These results demonstrate that nano-SPEARs provide the core technology for microchips that enable scalable, in vivo studies of neurobiology and neurological diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we have coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).
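A rough, single-process stand-in for this kind of footprint accounting is Python's tracemalloc module, which attributes allocations to source lines much as the authors' in-house tools attribute the footprint to application data structures and libraries. The data structures and the strong-scaling loop below are invented for illustration.

```python
import tracemalloc


def build_state(n_cells):
    """Invented stand-in for an application's per-rank data structures."""
    mesh = [float(i) for i in range(n_cells)]
    halo = [0.0] * (n_cells // 100)   # ghost/halo layer grows with the mesh
    return mesh, halo


tracemalloc.start()
for n_ranks in (1, 2, 4, 8):
    # Emulate strong scaling: the same global problem split across more ranks.
    state = build_state(1_000_000 // n_ranks)
    current, peak = tracemalloc.get_traced_memory()
    print(f"{n_ranks:2d} ranks -> {current / 1e6:6.1f} MB per rank "
          f"(peak {peak / 1e6:.1f} MB)")
    del state
    tracemalloc.reset_peak()

# Attribute the remaining footprint to the code locations that allocated it.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```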
[Big Data: the great opportunities and challenges to microbiome and other biomedical research].
Xu, Zhenjiang
2015-02-01
With the development of high-throughput technologies, biomedical data has been increasing exponentially in an explosive manner. This brings enormous opportunities and challenges to biomedical researchers on how to effectively utilize big data. Big data is different from traditional data in many ways, often described as the 3Vs: volume, variety and velocity. From the perspective of biomedical research, here I introduce the characteristics of big data, such as its messiness, re-usage and openness. Focusing on meta-analysis in microbiome research, I discuss prospective principles in data collection, challenges of privacy protection in data management, and scalable tools in data analysis, with examples from real life.
Lithography-based fabrication of nanopore arrays in freestanding SiN and graphene membranes
NASA Astrophysics Data System (ADS)
Verschueren, Daniel V.; Yang, Wayne; Dekker, Cees
2018-04-01
We report a simple and scalable technique for the fabrication of nanopore arrays on freestanding SiN and graphene membranes based on electron-beam lithography and reactive ion etching. By controlling the dose of the single-shot electron-beam exposure, circular nanopores of any size down to 16 nm in diameter can be fabricated in both materials at high accuracy and precision. We demonstrate the sensing capabilities of these nanopores by translocating dsDNA through pores fabricated using this method, and find signal-to-noise characteristics on par with transmission-electron-microscope-drilled nanopores. This versatile lithography-based approach allows for the high-throughput manufacturing of nanopores and can in principle be used on any substrate, in particular membranes made out of transferable two-dimensional materials.
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
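A much-simplified stand-in for the fusion step is balanced bisection of an operator graph weighted by stream traffic. The sketch below uses networkx's Kernighan-Lin heuristic rather than the paper's minimum-ratio-cut subroutine, and the operator names and edge rates are invented.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy operator graph: nodes are stream operators, edge weights are
# tuple rates flowing between them (invented numbers).
g = nx.Graph()
g.add_weighted_edges_from([
    ("source", "parse", 100), ("parse", "filter", 80),
    ("filter", "join", 60), ("join", "aggregate", 40),
    ("parse", "enrich", 50), ("enrich", "join", 45),
    ("aggregate", "sink", 10),
])

# One bisection step: split operators into two PEs while keeping the weight
# of cut (inter-PE) edges low; a hierarchical scheme would recurse on each PE.
pe_a, pe_b = kernighan_lin_bisection(g, weight="weight", seed=0)
cut_traffic = sum(d["weight"] for u, v, d in g.edges(data=True)
                  if (u in pe_a) != (v in pe_a))
print("PE A:", sorted(pe_a))
print("PE B:", sorted(pe_b))
print("inter-PE traffic:", cut_traffic)
```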
NASA Astrophysics Data System (ADS)
Green, Martin L.; Takeuchi, Ichiro; Hattrick-Simpers, Jason R.
2013-06-01
High throughput (combinatorial) materials science methodology is a relatively new research paradigm that offers the promise of rapid and efficient materials screening, optimization, and discovery. The paradigm started in the pharmaceutical industry but was rapidly adopted to accelerate materials research in a wide variety of areas. High throughput experiments are characterized by synthesis of a "library" sample that contains the materials variation of interest (typically composition), and rapid and localized measurement schemes that result in massive data sets. Because the data are collected at the same time on the same "library" sample, they can be highly uniform with respect to fixed processing parameters. This article critically reviews the literature pertaining to applications of combinatorial materials science for electronic, magnetic, optical, and energy-related materials. It is expected that high throughput methodologies will facilitate commercialization of novel materials for these critically important applications. Despite the overwhelming evidence presented in this paper that high throughput studies can effectively inform commercial practice, in our perception, it remains an underutilized research and development tool. Part of this perception may be due to the inaccessibility of proprietary industrial research and development practices, but clearly the initial cost and availability of high throughput laboratory equipment plays a role. Combinatorial materials science has traditionally been focused on materials discovery, screening, and optimization to combat the extremely high cost and long development times for new materials and their introduction into commerce. Going forward, combinatorial materials science will also be driven by other needs such as materials substitution and experimental verification of materials properties predicted by modeling and simulation, which have recently received much attention with the advent of the Materials Genome Initiative. Thus, the challenge for combinatorial methodology will be the effective coupling of synthesis, characterization and theory, and the ability to rapidly manage large amounts of data in a variety of formats.
Scalable, full-colour and controllable chromotropic plasmonic printing
Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua
2015-01-01
Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potentials in building functionalized prints for anticounterfeiting, special label, and high-density data encryption storage. With such excellent performances in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803
Chan, Leo Li-Ying; Smith, Tim; Kumph, Kendra A; Kuksin, Dmitry; Kessel, Sarah; Déry, Olivier; Cribbes, Scott; Lai, Ning; Qiu, Jean
2016-10-01
To ensure cell-based assays are performed properly, both cell concentration and viability have to be determined so that the data can be normalized to generate meaningful and comparable results. Cell-based assays performed in immuno-oncology, toxicology, or bioprocessing research often require measuring multiple samples and conditions; thus, current automated cell counters that use single disposable counting slides are not practical for high-throughput screening assays. In recent years, a plate-based image cytometry system has been developed for high-throughput biomolecular screening assays. In this work, we demonstrate a high-throughput AO/PI-based cell concentration and viability method using the Celigo image cytometer. First, we validate the method by comparing it directly to the Cellometer automated cell counter. Next, cell concentration dynamic range, viability dynamic range, and consistency are determined. The high-throughput AO/PI method described here allows 96-well to 384-well plate samples to be analyzed in less than 7 min, which greatly reduces the time required compared with a single-sample automated cell counter. In addition, this method can improve the efficiency of high-throughput screening assays, where multiple cell counts and viability measurements are needed prior to performing assays such as flow cytometry, ELISA, or simply plating cells for cell culture.
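The per-well arithmetic behind an AO/PI readout is simple. The sketch below uses invented live (AO-positive/PI-negative) and dead (PI-positive) counts and an assumed imaged volume to convert plate-wide counts into concentration and viability.

```python
# Hypothetical per-well counts: live = AO-positive/PI-negative, dead = PI-positive.
wells = {
    "A1": {"live": 15200, "dead": 1100},
    "A2": {"live": 9800,  "dead": 3100},
    "A3": {"live": 20400, "dead": 550},
}
imaged_volume_ml = 0.02   # assumed liquid volume covered by the imaged area

for well, c in sorted(wells.items()):
    total = c["live"] + c["dead"]
    concentration = total / imaged_volume_ml      # cells per mL
    viability = 100.0 * c["live"] / total         # percent viable
    print(f"{well}: {concentration:,.0f} cells/mL, {viability:.1f}% viable")
```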
Optimizing multi-dimensional high throughput screening using zebrafish
Truong, Lisa; Bugel, Sean M.; Chlebowski, Anna; Usenko, Crystal Y.; Simonich, Michael T.; Massey Simonich, Staci L.; Tanguay, Robert L.
2016-01-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories. PMID:27453428
Experiences with http/WebDAV protocols for data access in high throughput computing
NASA Astrophysics Data System (ADS)
Bernabeu, Gerard; Martinez, Francisco; Acción, Esther; Bria, Arnau; Caubet, Marc; Delfino, Manuel; Espinal, Xavier
2011-12-01
In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technologies provide the alternative of using remote disks; for such accesses one can today reach levels of throughput similar to or exceeding those of local disks. Common choices as access protocols in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol is a promising alternative as it is simple and lightweight. It also enables the use of standard technologies such as HTTP caching or load balancing, which can be used to improve service resilience and scalability or to boost performance for some use cases seen in HEP, such as "hot files". WebDAV extensions allow writing data, giving HTTP enough functionality to work as a remote access protocol. This paper will show our experiences with the WebDAV door for dCache, in terms of functionality and performance, applied to some of the HEP work flows in the LHC Tier1 at PIC.
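Part of what makes plain HTTP attractive here is that standard range requests give partial-file reads through ordinary caches and load balancers. The sketch below, assuming the requests library and a placeholder URL rather than a real dCache WebDAV door, issues one ranged read the way a remote-I/O client might.

```python
import requests

# Placeholder endpoint; a real deployment would point at a storage WebDAV door.
url = "https://storage.example.org/webdav/data/run123/events.root"

# Standard HTTP Range request: fetch bytes 0..1023 without downloading the file.
resp = requests.get(url, headers={"Range": "bytes=0-1023"}, timeout=30)
if resp.status_code == 206:   # 206 Partial Content means ranges are honoured
    print("got", len(resp.content), "bytes,",
          "server range:", resp.headers.get("Content-Range"))
else:
    print("range requests not honoured, status", resp.status_code)

# WebDAV adds write support on top of plain HTTP, e.g. an upload via PUT:
# requests.put(url, data=open("local_file.root", "rb"))
```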
Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
2015-01-01
Background Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966
Direct assembling methodologies for high-throughput bioscreening
Rodríguez-Dévora, Jorge I.; Shi, Zhi-dong; Xu, Tao
2012-01-01
Over the last few decades, high-throughput (HT) bioscreening, a technique that allows rapid screening of biochemical compound libraries against biological targets, has been widely used in drug discovery, stem cell research, development of new biomaterials, and genomics research. To achieve these ambitions, scaffold-free (or direct) assembly of biological entities of interest has become critical. Appropriate assembling methodologies are required to build an efficient HT bioscreening platform. The development of contact and non-contact assembling systems as a practical solution has been driven by a variety of essential attributes of the bioscreening system, such as miniaturization, high throughput, and high precision. The present article reviews recent progress on these assembling technologies utilized for the construction of HT bioscreening platforms. PMID:22021162
A massively parallel strategy for STR marker development, capture, and genotyping.
Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H
2017-09-06
Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci: 97.3-99.6% of STRs characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.
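The marker-development step boils down to scanning reads for tandemly repeated motifs. The regex-based sketch below is a simplified stand-in for the paper's pipeline (no flanking-primer design, capture, or genotyping), with an invented example read.

```python
import re


def find_strs(read, min_unit=2, max_unit=6, min_repeats=4):
    """Return (motif, repeat_count, start) for perfect tandem repeats in a read."""
    hits = []
    for unit in range(min_unit, max_unit + 1):
        # A motif of length `unit` repeated at least `min_repeats` times in a row.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(read):
            motif = m.group(1)
            if len(set(motif)) > 1:                       # skip homopolymer runs
                hits.append((motif, len(m.group(0)) // unit, m.start()))
    return hits


# Invented read containing an (AGAT)n repeat.
read = "TTGACCAGATAGATAGATAGATAGATAGATCCGTAAGC"
print(find_strs(read))   # -> [('AGAT', 6, 6)]
```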
High-throughput measurements of the optical redox ratio using a commercial microplate reader.
Cannon, Taylor M; Shah, Amy T; Walsh, Alex J; Skala, Melissa C
2015-01-01
There is a need for accurate, high-throughput, functional measures to gauge the efficacy of potential drugs in living cells. As an early marker of drug response in cells, cellular metabolism provides an attractive platform for high-throughput drug testing. Optical techniques can noninvasively monitor NADH and FAD, two autofluorescent metabolic coenzymes. The autofluorescent redox ratio, defined as the autofluorescence intensity of NADH divided by that of FAD, quantifies relative rates of cellular glycolysis and oxidative phosphorylation. However, current microscopy methods for redox ratio quantification are time-intensive and low-throughput, limiting their practicality in drug screening. Alternatively, high-throughput commercial microplate readers quickly measure fluorescence intensities for hundreds of wells. This study found that a commercial microplate reader can differentiate the receptor status of breast cancer cell lines (p < 0.05) based on redox ratio measurements without extrinsic contrast agents. Furthermore, microplate reader redox ratio measurements resolve response (p < 0.05) and lack of response (p > 0.05) in cell lines that are responsive and nonresponsive, respectively, to the breast cancer drug trastuzumab. These studies indicate that the microplate readers can be used to measure the redox ratio in a high-throughput manner and are sensitive enough to detect differences in cellular metabolism that are consistent with microscopy results.
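The redox ratio itself is just a per-well division of background-corrected channel intensities. The sketch below uses made-up NADH- and FAD-channel readings and a two-sample t-test as a stand-in for the group comparisons reported in the study.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical background-corrected fluorescence intensities for four wells.
nadh = np.array([5200.0, 4800.0, 6100.0, 300.0])   # NADH channel intensities
fad = np.array([2600.0, 3000.0, 2500.0, 280.0])    # FAD channel intensities

# Redox ratio as defined in the abstract: NADH intensity divided by FAD intensity.
redox_ratio = nadh / fad
print(np.round(redox_ratio, 2))

# A treated-vs-control comparison then reduces to a simple two-sample test;
# the grouping below is arbitrary and only shows the mechanics.
control, treated = redox_ratio[:2], redox_ratio[2:]
print(ttest_ind(control, treated))
```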
Discrete elements for 3D microfluidics.
Bhargava, Krisna C; Thompson, Bryant; Malmstadt, Noah
2014-10-21
Microfluidic systems are rapidly becoming commonplace tools for high-precision materials synthesis, biochemical sample preparation, and biophysical analysis. Typically, microfluidic systems are constructed in monolithic form by means of microfabrication and, increasingly, by additive techniques. These methods restrict the design and assembly of truly complex systems by placing unnecessary emphasis on complete functional integration of operational elements in a planar environment. Here, we present a solution based on discrete elements that liberates designers to build large-scale microfluidic systems in three dimensions that are modular, diverse, and predictable by simple network analysis techniques. We develop a sample library of standardized components and connectors manufactured using stereolithography. We predict and validate the flow characteristics of these individual components to design and construct a tunable concentration gradient generator with a scalable number of parallel outputs. We show that these systems are rapidly reconfigurable by constructing three variations of a device for generating monodisperse microdroplets in two distinct size regimes and in a high-throughput mode by simple replacement of emulsifier subcircuits. Finally, we demonstrate the capability for active process monitoring by constructing an optical sensing element for detecting water droplets in a fluorocarbon stream and quantifying their size and frequency. By moving away from large-scale integration toward standardized discrete elements, we demonstrate the potential to reduce the practice of designing and assembling complex 3D microfluidic circuits to a methodology comparable to that found in the electronics industry.
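The "predictable by simple network analysis" claim rests on the electric-circuit analogy for laminar flow, Q = ΔP / R_h. The sketch below solves a toy one-junction network with invented hydraulic resistances and inlet pressure, not values from the paper.

```python
# Hagen-Poiseuille analogy: pressure ~ voltage, flow rate ~ current,
# hydraulic resistance ~ electrical resistance.
# Toy circuit: inlet resistor feeding a junction that splits into two branches
# which both drain to an outlet at 0 Pa. All numbers are invented.
R_inlet = 2.0e12     # Pa*s/m^3
R_branch_a = 5.0e12
R_branch_b = 1.0e13
P_in = 10_000.0      # Pa

# Flow balance at the junction (Kirchhoff-style):
# (P_in - P1)/R_inlet = P1/R_branch_a + P1/R_branch_b
conductance = 1 / R_inlet + 1 / R_branch_a + 1 / R_branch_b
P1 = (P_in / R_inlet) / conductance

Q_a = P1 / R_branch_a        # m^3/s
Q_b = P1 / R_branch_b
to_ul_per_min = 1e9 * 60     # m^3/s -> uL/min
print(f"junction pressure: {P1:.0f} Pa")
print(f"branch flows: {Q_a * to_ul_per_min:.1f} and {Q_b * to_ul_per_min:.1f} uL/min")
```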
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sanjib; Yang, Bin; Gu, Gong
Realizing the commercialization of high-performance and robust perovskite solar cells urgently requires the development of economically scalable processing techniques. Here we report a high-throughput ultrasonic spray-coating (USC) process capable of fabricating perovskite film-based solar cells on glass substrates with power conversion efficiency (PCE) as high as 13.04%. Perovskite films with high uniformity, crystallinity, and surface coverage are obtained in a single step. Moreover, we report USC processing on TiOx/ITO-coated polyethylene terephthalate (PET) substrates to realize flexible perovskite solar cells with PCE as high as 8.02% that are robust under mechanical stress. In this case, an optical curing technique was used to achieve a highly-conductive TiOx layer on flexible PET substrates for the first time. The high device performance and reliability obtained by this combination of USC processing with optical curing appears very promising for roll-to-roll manufacturing of high-efficiency, flexible perovskite solar cells.
Terrasso, Ana Paula; Pinto, Catarina; Serra, Margarida; Filipe, Augusto; Almeida, Susana; Ferreira, Ana Lúcia; Pedroso, Pedro; Brito, Catarina; Alves, Paula Marques
2015-07-10
There is an urgent need for new in vitro strategies to identify neurotoxic agents with speed, reliability and respect for animal welfare. Cell models should include distinct brain cell types and represent the brain microenvironment to attain higher relevance. The main goal of this study was to develop and validate a human 3D neural model containing both neurons and glial cells, applicable for toxicity testing in high-throughput platforms. To achieve this, a scalable bioprocess for neural differentiation of human NTera2/cl.D1 cells in stirred culture systems was developed. Endpoints based on neuronal- and astrocytic-specific gene expression and functionality in 3D were implemented in multi-well format and used for toxicity assessment. The prototypical neurotoxicant acrylamide affected primarily neurons, impairing synaptic function; our results suggest that gene expression of the presynaptic marker synaptophysin can be used as a sensitive endpoint. Chloramphenicol, described as a neurotoxicant, affected both cell types, with cytoskeleton marker expression significantly reduced, particularly in astrocytes. In conclusion, a scalable and reproducible process for production of differentiated neurospheres enriched in mature neurons and functional astrocytes was obtained. This 3D approach allowed efficient production of large numbers of human differentiated neurospheres, which in combination with gene expression and functional endpoints are a powerful cell model to evaluate human neuronal and astrocytic toxicity. Copyright © 2014 Elsevier B.V. All rights reserved.
EPA Chemical Prioritization Community of Practice.
In 2005 the National Center for Computational Toxicology (NCCT) organized the EPA Chemical Prioritization Community of Practice (CPCP) to provide a forum for discussing the utility of computational chemistry, high-throughput screening (HTS) and various toxicogenomic technologies for ch...
Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J
2011-01-01
Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful integration with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
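The automated selection step reduces to scoring every candidate vector by how well its match scores separate the positive and negative ground-truth regions. A minimal sketch follows, assuming scikit-learn for the AUC computation and random stand-in match scores rather than real SIVQ ring-vector responses.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Ground truth: 1 = pixels in the user-marked positive region, 0 = negative region.
labels = np.concatenate([np.ones(200), np.zeros(200)]).astype(int)


def candidate_scores(separation):
    """Stand-in for one ring vector's match scores over both regions."""
    pos = rng.normal(loc=separation, scale=1.0, size=200)
    neg = rng.normal(loc=0.0, scale=1.0, size=200)
    return np.concatenate([pos, neg])


# Score many candidate vectors and keep the one with the best AUC,
# mirroring the ROC/AUC figure of merit used in the paper.
candidates = {f"vector_{i}": candidate_scores(separation=rng.uniform(0, 2))
              for i in range(50)}
aucs = {name: roc_auc_score(labels, scores) for name, scores in candidates.items()}
best = max(aucs, key=aucs.get)
print(best, round(aucs[best], 3))
```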
Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, Lee M; Sheldon, Frederick T
The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems, which will require a long-term commitment to solve the well-known challenges.
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing especially in the field of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.
Hu, Peng; Fabyanic, Emily; Kwon, Deborah Y; Tang, Sheng; Zhou, Zhaolan; Wu, Hao
2017-12-07
Massively parallel single-cell RNA sequencing can precisely resolve cellular diversity in a high-throughput manner at low cost, but unbiased isolation of intact single cells from complex tissues such as adult mammalian brains is challenging. Here, we integrate sucrose-gradient-assisted purification of nuclei with droplet microfluidics to develop a highly scalable single-nucleus RNA-seq approach (sNucDrop-seq), which is free of enzymatic dissociation and nucleus sorting. By profiling ∼18,000 nuclei isolated from cortical tissues of adult mice, we demonstrate that sNucDrop-seq not only accurately reveals neuronal and non-neuronal subtype composition with high sensitivity but also enables in-depth analysis of transient transcriptional states driven by neuronal activity, at single-cell resolution, in vivo. Copyright © 2017 Elsevier Inc. All rights reserved.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
NASA Astrophysics Data System (ADS)
Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.
2018-03-01
A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10^4, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10^4 to 10^5 unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
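The core coarse-grid-correction cycle behind any such solver can be written compactly. The sketch below is a textbook 1D Poisson two-grid cycle with weighted-Jacobi smoothing and an exact coarse solve, not the authors' variable-density Navier-Stokes multigrid.

```python
import numpy as np


def jacobi(u, f, h, sweeps=3, omega=2 / 3):
    """Weighted-Jacobi smoother for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u


def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    return r


def two_grid(u, f, h):
    """One cycle: pre-smooth, restrict residual, solve the coarse problem
    exactly, prolong and add the correction, post-smooth."""
    u = jacobi(u, f, h)
    r = residual(u, f, h)
    nc, hc = (u.size + 1) // 2, 2 * h
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)
    e[::2] = ec                                   # prolongation: inject...
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])          # ...then linearly interpolate
    return jacobi(u + e, f, h)


n, h = 129, 1.0 / 128
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(15):
    u = two_grid(u, f, h)
print("max error vs exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```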
Microfluidic multiplexing of solid-state nanopores
NASA Astrophysics Data System (ADS)
Jain, Tarun; Rasera, Benjamin C.; Guerrero, Ricardo Jose S.; Lim, Jong-Min; Karnik, Rohit
2017-12-01
Although solid-state nanopores enable electronic analysis of many clinically and biologically relevant molecular structures, there are few existing device architectures that enable high-throughput measurement of solid-state nanopores. Herein, we report a method for microfluidic integration of multiple solid-state nanopores at a high density of one nanopore per (35 µm2). By configuring microfluidic devices with microfluidic valves, the nanopores can be rinsed from a single fluid input while retaining compatibility for multichannel electrical measurements. The microfluidic valves serve the dual purpose of fluidic switching and electric switching, enabling serial multiplexing of the eight nanopores with a single pair of electrodes. Furthermore, the device architecture exhibits low noise and is compatible with electroporation-based in situ nanopore fabrication, providing a scalable platform for automated electronic measurement of a large number of integrated solid-state nanopores.
Laser post-processing of halide perovskites for enhanced photoluminescence and absorbance
NASA Astrophysics Data System (ADS)
Tiguntseva, E. Y.; Saraeva, I. N.; Kudryashov, S. I.; Ushakova, E. V.; Komissarenko, F. E.; Ishteev, A. R.; Tsypkin, A. N.; Haroldson, R.; Milichko, V. A.; Zuev, D. A.; Makarov, S. V.; Zakhidov, A. A.
2017-11-01
Hybrid halide perovskites have emerged as one of the most promising types of materials for thin-film photovoltaic and light-emitting devices. Further boosting their performance is critically important for commercialization. Here we use a femtosecond laser for post-processing of organo-metallic perovskite (MAPbI3) films. The high-throughput laser approaches include both ablative silicon nanoparticle integration and laser-induced annealing. By using these techniques, we achieve strong enhancement of photoluminescence as well as useful light absorption. As a result, we experimentally observed a 10-fold enhancement of absorbance in a perovskite layer with the silicon nanoparticles. Direct laser annealing allows for an increase in photoluminescence of over 130% and an increase in absorbance of over 300% in the near-IR range. We believe that the developed approaches pave the way to novel scalable and highly effective designs of perovskite-based devices.
Cadusch, Jasper J; Panchenko, Evgeniy; Kirkwood, Nicholas; James, Timothy D; Gibson, Brant C; Webb, Kevin J; Mulvaney, Paul; Roberts, Ann
2015-09-07
Here we present an application of a high throughput nanofabrication technique to the creation of a plasmonic metasurface and demonstrate its application to the enhancement and control of radiation by quantum dots (QDs). The metasurface consists of an array of cold-forged rectangular nanocavities in a thin silver film. High quantum efficiency graded alloy CdSe/CdS/ZnS quantum dots were spread over the metasurface and the effects of the plasmon-exciton interactions characterised. We found a four-fold increase in the QDs radiative decay rate and emission brightness, compared to QDs on glass, along with a degree of linear polarisation of 0.73 in the emitted field. Such a surface could be easily integrated with current QD display or organic solar cell designs.
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With PatViz, we have developed a system for the interactive analysis of patent information that addresses scalability at several levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Survey of MapReduce frame operation in bioinformatics.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
2014-07-01
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
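As a minimal, hedged illustration of the MapReduce pattern described above, the Python sketch below counts k-mers across a handful of reads using plain map and reduce functions; the read strings, k-mer length, and function names are invented for this example and do not correspond to any Hadoop API.

    # MapReduce-style k-mer counting sketch (pure Python, illustrative only).
    from collections import Counter
    from functools import reduce

    def map_kmers(read, k=4):
        """Map step: emit counts of every k-mer found in one read."""
        return Counter(read[i:i + k] for i in range(len(read) - k + 1))

    def reduce_counts(total, partial):
        """Reduce step: merge the partial counts produced by two map tasks."""
        total.update(partial)
        return total

    reads = ["ACGTACGTGG", "TTACGTACGA", "ACGTGGACGT"]   # assumed toy input
    partials = map(map_kmers, reads)                     # on Hadoop, map tasks run in parallel
    totals = reduce(reduce_counts, partials, Counter())
    print(totals.most_common(3))

In a real Hadoop or Spark deployment the same two functions would be distributed over a cluster; the point here is only the shape of the computation.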
NASA Astrophysics Data System (ADS)
Anderson, J.; Bauer, K.; Borga, A.; Boterenbrood, H.; Chen, H.; Chen, K.; Drake, G.; Dönszelmann, M.; Francis, D.; Guest, D.; Gorini, B.; Joos, M.; Lanni, F.; Lehmann Miotto, G.; Levinson, L.; Narevicius, J.; Panduro Vazquez, W.; Roich, A.; Ryu, S.; Schreuder, F.; Schumacher, J.; Vandelli, W.; Vermeulen, J.; Whiteson, D.; Wu, W.; Zhang, J.
2016-12-01
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. The FELIX system, the design of the PCIe prototype card and the integration test results are presented in this paper.
A scalable lysyl hydroxylase 2 expression system and luciferase-based enzymatic activity assay
Guo, Hou-Fu; Cho, Eun Jeong; Devkota, Ashwini K.; Chen, Yulong; Russell, William; Phillips, George N.; Yamauchi, Mitsuo; Dalby, Kevin; Kurie, Jonathan M.
2017-01-01
Hydroxylysine aldehyde-derived collagen cross-links (HLCCs) accumulate in fibrotic tissues and certain types of cancer and are thought to drive the progression of these diseases. HLCC formation is initiated by lysyl hydroxylase 2 (LH2), an Fe(II) and α-ketoglutarate (αKG)-dependent oxygenase that hydroxylates telopeptidyl lysine residues on collagen. Development of LH2 antagonists for the treatment of these diseases will require a reliable source of recombinant LH2 protein and a non-radioactive LH2 enzymatic activity assay that is amenable to high-throughput screens of small molecule libraries. However, LH2 protein generated previously using E. coli– or insect-based expression systems was either insoluble or enzymatically unstable, and LH2 enzymatic activity assays have measured radioactive CO₂ released from ¹⁴C-labeled αKG during its conversion to succinate. To address these deficiencies, we have developed a scalable process to purify human LH2 protein from Chinese hamster ovary cell-derived conditioned media samples and a luciferase-based assay that quantifies LH2-dependent conversion of αKG to succinate. These methodologies may be applicable to other Fe(II) and αKG-dependent oxygenase systems. PMID:28216326
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
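For readers unfamiliar with overlap-based merging, the sketch below shows the general idea in Python: slide the reverse-complemented mate over read 1, accept the longest low-mismatch overlap, and concatenate. This is a deliberately naive illustration under assumed parameters, not BBMerge's actual algorithm, which also uses quality scores and, optionally, k-mer information.

    # Naive overlap merge of a read pair (illustrative only; not BBMerge's algorithm).
    def revcomp(seq):
        comp = {"A": "T", "C": "G", "G": "C", "T": "A", "N": "N"}
        return "".join(comp[b] for b in reversed(seq))

    def merge_pair(r1, r2, min_overlap=10, max_mismatch_frac=0.1):
        """Merge read 1 with the reverse complement of read 2 at the best suffix/prefix overlap."""
        r2rc = revcomp(r2)
        for ov in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
            mismatches = sum(a != b for a, b in zip(r1[-ov:], r2rc[:ov]))
            if mismatches <= max_mismatch_frac * ov:
                return r1 + r2rc[ov:]       # naive: keep read 1's bases inside the overlap
        return None                         # no acceptable overlap found

    r1 = "ACGTACGTACGTTTGACCA"
    r2 = revcomp("ACGTTTGACCAGGCTA")        # mate as sequenced from the opposite strand (toy)
    print(merge_pair(r1, r2))               # ACGTACGTACGTTTGACCAGGCTA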
Ergatis: a web interface and scalable software system for bioinformatics workflows
Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.
2010-01-01
Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user-friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
BBMerge – Accurate paired shotgun read merging via overlap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bushnell, Brian; Rood, Jonathan; Singer, Esther
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
Stranges, P. Benjamin; Palla, Mirkó; Kalachikov, Sergey; Nivala, Jeff; Dorwart, Michael; Trans, Andrew; Kumar, Shiv; Porel, Mintu; Chien, Minchen; Tao, Chuanjuan; Morozova, Irina; Li, Zengmin; Shi, Shundi; Aberra, Aman; Arnold, Cleoma; Yang, Alexander; Aguirre, Anne; Harada, Eric T.; Korenblum, Daniel; Pollard, James; Bhat, Ashwini; Gremyachinskiy, Dmitriy; Bibillo, Arek; Chen, Roger; Davis, Randy; Russo, James J.; Fuller, Carl W.; Roever, Stefan; Ju, Jingyue; Church, George M.
2016-01-01
Scalable, high-throughput DNA sequencing is a prerequisite for precision medicine and biomedical research. Recently, we presented a nanopore-based sequencing-by-synthesis (Nanopore-SBS) approach, which used a set of nucleotides with polymer tags that allow discrimination of the nucleotides in a biological nanopore. Here, we designed and covalently coupled a DNA polymerase to an α-hemolysin (αHL) heptamer using the SpyCatcher/SpyTag conjugation approach. These porin–polymerase conjugates were inserted into lipid bilayers on a complementary metal oxide semiconductor (CMOS)-based electrode array for high-throughput electrical recording of DNA synthesis. The designed nanopore construct successfully detected the capture of tagged nucleotides complementary to a DNA base on a provided template. We measured over 200 tagged-nucleotide signals for each of the four bases and developed a classification method to uniquely distinguish them from each other and background signals. The probability of falsely identifying a background event as a true capture event was less than 1.2%. In the presence of all four tagged nucleotides, we observed sequential additions in real time during polymerase-catalyzed DNA synthesis. Single-polymerase coupling to a nanopore, in combination with the Nanopore-SBS approach, can provide the foundation for a low-cost, single-molecule, electronic DNA-sequencing platform. PMID:27729524
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
Razavi, Morteza; Leigh Anderson, N; Pope, Matthew E; Yip, Richard; Pearson, Terry W
2016-09-25
Efficient robotic workflows for trypsin digestion of human plasma and subsequent antibody-mediated peptide enrichment (the SISCAPA method) were developed with the goal of improving assay precision and throughput for multiplexed protein biomarker quantification. First, an 'addition only' tryptic digestion protocol was simplified from classical methods, eliminating the need for sample cleanup, while improving reproducibility, scalability and cost. Second, methods were developed to allow multiplexed enrichment and quantification of peptide surrogates of protein biomarkers representing a very broad range of concentrations and widely different molecular masses in human plasma. The total workflow coefficients of variation (including the 3 sequential steps of digestion, SISCAPA peptide enrichment and mass spectrometric analysis) for 5 proteotypic peptides measured in 6 replicates of each of 6 different samples repeated over 6 days averaged 3.4% within-run and 4.3% across all runs. An experiment to identify sources of variation in the workflow demonstrated that MRM measurement and tryptic digestion steps each had average CVs of ∼2.7%. Because of the high purity of the peptide analytes enriched by antibody capture, the liquid chromatography step is minimized and in some cases eliminated altogether, enabling throughput levels consistent with requirements of large biomarker and clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.
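As a worked illustration of the precision figures quoted above (within-run versus overall CV), the short NumPy sketch below computes both statistics from a replicate matrix; the numbers are simulated and the 6 x 6 layout is only an assumption mirroring the study design.

    # Within-run and overall coefficients of variation from replicate measurements (simulated data).
    import numpy as np

    rng = np.random.default_rng(0)
    # rows = 6 runs (days), columns = 6 replicate peak-area measurements of one peptide
    areas = rng.normal(1.0, 0.03, size=(6, 6)) * rng.normal(1.0, 0.02, size=(6, 1))

    within_run_cv = np.mean(areas.std(axis=1, ddof=1) / areas.mean(axis=1)) * 100
    overall_cv = areas.std(ddof=1) / areas.mean() * 100
    print(f"within-run CV ~ {within_run_cv:.1f}%, overall CV ~ {overall_cv:.1f}%")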
2014-01-01
Background Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. Results To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. Conclusions By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples. PMID:24475911
Integrity, standards, and QC-related issues with big data in pre-clinical drug discovery.
Brothers, John F; Ung, Matthew; Escalante-Chong, Renan; Ross, Jermaine; Zhang, Jenny; Cha, Yoonjeong; Lysaght, Andrew; Funt, Jason; Kusko, Rebecca
2018-06-01
The tremendous expansion of data analytics and public and private big datasets presents an important opportunity for pre-clinical drug discovery and development. In the field of life sciences, the growth of genetic, genomic, transcriptomic and proteomic data is partly driven by a rapid decline in experimental costs as biotechnology improves throughput, scalability, and speed. Yet far too many researchers tend to underestimate the challenges and consequences involving data integrity and quality standards. Given the effect of data integrity on scientific interpretation, these issues have significant implications during preclinical drug development. We describe standardized approaches for maximizing the utility of publicly available or privately generated biological data and address some of the common pitfalls. We also discuss the increasing interest to integrate and interpret cross-platform data. Principles outlined here should serve as a useful broad guide for existing analytical practices and pipelines and as a tool for developing additional insights into therapeutics using big data. Copyright © 2018 Elsevier Inc. All rights reserved.
Perspectives on Validation of High-Throughput Assays Supporting 21st Century Toxicity Testing
Judson, Richard; Kavlock, Robert; Martin, Matt; Reif, David; Houck, Keith; Knudsen, Thomas; Richard, Ann; Tice, Raymond R.; Whelan, Maurice; Xia, Menghang; Huang, Ruili; Austin, Christopher; Daston, George; Hartung, Thomas; Fowle, John R.; Wooge, William; Tong, Weida; Dix, David
2014-01-01
Summary In vitro, high-throughput screening (HTS) assays are seeing increasing use in toxicity testing. HTS assays can simultaneously test many chemicals, but have seen limited use in the regulatory arena, in part because of the need to undergo rigorous, time-consuming formal validation. Here we discuss streamlining the validation process, specifically for prioritization applications in which HTS assays are used to identify a high-concern subset of a collection of chemicals. The high-concern chemicals could then be tested sooner rather than later in standard guideline bioassays. The streamlined validation process would continue to ensure the reliability and relevance of assays for this application. We discuss the following practical guidelines: (1) follow current validation practice to the extent possible and practical; (2) make increased use of reference compounds to better demonstrate assay reliability and relevance; (3) deemphasize the need for cross-laboratory testing; and (4) implement a web-based, transparent and expedited peer review process. PMID:23338806
Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew
2009-01-01
On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
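To make the idea of adaptive predictive compression concrete, the sketch below runs a small normalized-LMS predictor along a synthetic profile and prints how much narrower the residual distribution is than the raw data; narrower residuals are what make subsequent entropy coding effective. This is a generic illustration under invented parameters, not the JPL 'Fast Lossless' algorithm itself.

    # Adaptive (normalized-LMS) prediction of a 1-D profile; residuals are what get entropy-coded.
    import numpy as np

    def nlms_residuals(samples, mu=0.5, order=3, eps=1e-9):
        """Predict each sample from the previous `order` samples with an adaptively updated filter."""
        w = np.zeros(order)
        residuals = np.array(samples, dtype=float)      # first `order` samples pass through verbatim
        for t in range(order, len(samples)):
            context = samples[t - order:t][::-1]
            prediction = float(np.dot(w, context))
            error = samples[t] - prediction
            residuals[t] = error
            w += mu * error * context / (np.dot(context, context) + eps)   # no training data needed
        return residuals

    rng = np.random.default_rng(1)
    band = np.cumsum(rng.normal(0, 5, 2048)) + 1000.0   # assumed smooth synthetic spectral profile
    res = nlms_residuals(band)
    print(f"raw spread {band.std():.1f}, residual spread {res[3:].std():.1f}")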
Zu, Guoqing; Shimizu, Taiyo; Kanamori, Kazuyoshi; Zhu, Yang; Maeno, Ayaka; Kaji, Hironori; Shen, Jun; Nakanishi, Kazuki
2018-01-23
Aerogels have many attractive properties but are usually costly and mechanically brittle, which limits their practical applications. While many efforts have been made to reinforce aerogels, most reinforcement approaches sacrifice the transparency or superinsulating properties. Here we report superflexible polyvinylpolymethylsiloxane, (CH₂CH(Si(CH₃)O₂/₂))ₙ, aerogels that are facilely prepared from a single precursor, vinylmethyldimethoxysilane or vinylmethyldiethoxysilane, without organic cross-linkers. The method is based on consecutive processes involving radical polymerization and hydrolytic polycondensation, followed by ultralow-cost, highly scalable, ambient-pressure drying directly from alcohol as a drying medium without any modification or additional solvent exchange. The resulting aerogels and xerogels show a homogeneous, tunable, highly porous, doubly cross-linked nanostructure in which the elastic polymethylsiloxane network is cross-linked with flexible hydrocarbon chains. An outstanding combination of ultralow cost, high scalability, uniform pore size, high surface area, high transparency, high hydrophobicity, excellent machinability, superflexibility in compression, superflexibility in bending, and superinsulating properties has been achieved in a single aerogel or xerogel. This study represents significant progress in porous materials and brings practical applications of transparent flexible aerogel-based superinsulators within reach.
Electron beam throughput from raster to imaging
NASA Astrophysics Data System (ADS)
Zywno, Marek
2016-12-01
Two architectures of electron beam tools are presented: the single-beam MEBES Exara, designed and built by Etec Systems for mask writing, and the Reflected E-Beam Lithography tool (REBL), designed and built by KLA-Tencor under DARPA Agreement No. HR0011-07-9-0007. Both tools implemented technologies not previously used to achieve their goals. The MEBES X, renamed Exara for marketing purposes, used an air-bearing stage running in vacuum to achieve smooth continuous scanning. The REBL used two-dimensional imaging to distribute charge to a 4k-pixel swath to achieve writing times on the order of one wafer per hour, scalable to throughput approaching that of optical projection tools. Three stage architectures were designed for continuous scanning of wafers: linear maglev, rotary maglev, and dual linear maglev.
2016-01-01
The development of a practical and scalable process for the asymmetric synthesis of sitagliptin is reported. Density functional theory calculations reveal that two noncovalent interactions are responsible for the high diastereoselection. The first is an intramolecular hydrogen bond between the enamide NH and the boryl mesylate S=O, consistent with MsOH being crucial for high selectivity. The second is a novel C–H···F interaction between the aryl C5-fluoride and the methyl of the mesylate ligand. PMID:25799267
Searching Across the International Space Station Databases
NASA Technical Reports Server (NTRS)
Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana
2007-01-01
Data access in the enterprise generally requires us to combine data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and more complicated than necessary. A context search over multiple domains is proposed in this paper, using context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand system framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML that is used widely at the National Aeronautics and Space Administration (NASA) and in industry.
Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz; Strapagiel, Dominik
2017-11-03
High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup.
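To illustrate what reference-curve-based targeted genotyping amounts to computationally, the sketch below normalizes melt curves and assigns each sample to the nearest reference genotype curve. The sigmoidal melt model, melting temperatures, and genotype labels are all invented for the example; real HRM analysis is normally performed in the instrument vendor's software.

    # Reference-curve-based genotype calling on synthetic HRM melt curves (illustrative only).
    import numpy as np

    temps = np.linspace(75, 90, 150)
    melt = lambda tm: 1.0 / (1.0 + np.exp((temps - tm) / 0.4))    # toy melting-curve model

    def normalize(curve):
        c = np.asarray(curve, dtype=float)
        return (c - c.min()) / (c.max() - c.min())                # scale fluorescence to 0..1

    def call_genotype(sample, references):
        """Return the genotype whose normalized reference curve is closest in Euclidean distance."""
        s = normalize(sample)
        distances = {g: np.linalg.norm(s - normalize(r)) for g, r in references.items()}
        return min(distances, key=distances.get)

    references = {"wild-type": melt(83.0),
                  "homozygous variant": melt(82.2),
                  "heterozygote": 0.5 * (melt(83.0) + melt(82.2))}
    sample = melt(82.95) + np.random.default_rng(2).normal(0, 0.01, temps.size)
    print(call_genotype(sample, references))                      # expected: wild-type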
Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz
2017-01-01
High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup. PMID:29099791
Reyon, Deepak; Maeder, Morgan L.; Khayter, Cyd; Tsai, Shengdar Q.; Foley, Jonathan E.; Sander, Jeffry D.; Joung, J. Keith
2013-01-01
Customized DNA-binding domains made using Transcription Activator-Like Effector (TALE) repeats are rapidly growing in importance as widely applicable research tools. TALE nucleases (TALENs), composed of an engineered array of TALE repeats fused to the FokI nuclease domain, have been used successfully for directed genome editing in multiple different organisms and cell types. TALE transcription factors (TALE-TFs), consisting of engineered TALE repeat arrays linked to a transcriptional regulatory domain, have been used to up- or down-regulate expression of endogenous genes in human cells and plants. Here we describe a detailed protocol for practicing the recently described Fast Ligation-based Automatable Solid-phase High-throughput (FLASH) assembly method. FLASH enables automated high-throughput construction of engineered TALE repeats using an automated liquid handling robot or manually using a multi-channel pipet. With the automated version of FLASH, a single researcher can construct up to 96 DNA fragments encoding various length TALE repeat arrays in one day and then clone these to construct sequence-verified TALEN or TALE-TF expression plasmids in one week or less. Plasmids required to practice FLASH are available by request from the Joung Lab (http://www.jounglab.org/). We also describe here improvements to the Zinc Finger and TALE Targeter (ZiFiT Targeter) webserver (http://ZiFiTBeta.partners.org) that facilitate the design and construction of FLASH TALE repeat arrays in high-throughput. PMID:23821439
Virtual Machine-level Software Transactional Memory: Principles, Techniques, and Implementation
2015-08-13
The VM-level STM (ByteSTM) is evaluated against non-VM STMs, running the same algorithm inside the VM versus outside it; the competitor non-VM STMs include Deuce, ObjectFabric, Multiverse, TL2, and JVSTM. [Figure 1: throughput for a linked-list benchmark at (a) 20% writes and (b) 80% writes; higher is better.]
Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko
2017-07-12
Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single-cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell-specific sequence reads from single cells (21,000 single cells/h), resulting in enhanced sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals that of conventional techniques with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single-cell sequencing. sd-MDA is promising for flexible and scalable use in single-cell sequencing.
Parallel processing of genomics data
NASA Astrophysics Data System (ADS)
Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario
2016-10-01
The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, so the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data that is able to cope with high-dimensional data while achieving good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
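A minimal sketch of the embarrassingly parallel part of such a pipeline is shown below: per-SNP association tests distributed over worker processes with Python's multiprocessing. The toy contingency tables, SNP names, and significance cutoff are assumptions for illustration and are not the authors' actual implementation.

    # Parallel per-SNP association testing sketch (illustrative; not the authors' pipeline).
    from multiprocessing import Pool
    import numpy as np
    from scipy.stats import chi2_contingency

    def test_snp(item):
        """Chi-square test on a 2x2 allele-count table (responders vs non-responders) for one SNP."""
        snp_id, table = item
        _, p_value, _, _ = chi2_contingency(table)
        return snp_id, p_value

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        snps = [(f"snp{i}", rng.integers(5, 100, size=(2, 2))) for i in range(1000)]  # toy data
        with Pool() as pool:
            results = pool.map(test_snp, snps, chunksize=50)
        hits = [(snp, p) for snp, p in results if p < 1e-4]
        print(f"{len(hits)} candidate markers")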
Mapping RNA-seq Reads with STAR
Dobin, Alexander; Gingeras, Thomas R.
2015-01-01
Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, signal visualization, and so forth. In this unit we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is Open Source software that can be run on Unix, Linux or Mac OS X systems. PMID:26334920
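In practice STAR is usually driven from the command line or a thin wrapper script; the sketch below shows one plausible invocation from Python, using only options that appear in the STAR documentation (thread count, genome index, paired FASTQ input, sorted BAM output, gene counts). The file paths and sample names are placeholders.

    # Launch a paired-end STAR alignment from Python; all paths below are placeholders.
    import subprocess

    cmd = [
        "STAR",
        "--runThreadN", "8",
        "--genomeDir", "/ref/STAR_index",                # pre-built genome index (assumed)
        "--readFilesIn", "sample_R1.fastq.gz", "sample_R2.fastq.gz",
        "--readFilesCommand", "zcat",                    # decompress gzipped FASTQ on the fly
        "--outSAMtype", "BAM", "SortedByCoordinate",     # coordinate-sorted BAM output
        "--quantMode", "GeneCounts",                     # per-gene read counts for expression analysis
        "--outFileNamePrefix", "sample_",
    ]
    subprocess.run(cmd, check=True)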
Mapping RNA-seq Reads with STAR.
Dobin, Alexander; Gingeras, Thomas R
2015-09-03
Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates, providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, and signal visualization. In this unit, we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is open source software that can be run on Unix, Linux, or Mac OS X systems. Copyright © 2015 John Wiley & Sons, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthik, Rajasekar
2014-01-01
In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices that have been adopted in this architecture.
Adverse outcome pathway (AOP) development II: Best practices
Organization of existing and emerging toxicological knowledge into adverse outcome pathway (AOP) descriptions can facilitate greater application of mechanistic data, including high throughput in vitro, high content omics and imaging, and biomarkers, in risk-based decision-making....
A direct-to-drive neural data acquisition system.
Kinney, Justin P; Bernstein, Jacob G; Meyer, Andrew J; Barber, Jessica B; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T; Kopell, Nancy J; Boyden, Edward S
2015-01-01
Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future.
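A quick back-of-the-envelope calculation shows why writing directly to hard drives is feasible; the channel count, sampling rate, and sample width below are assumptions chosen only to illustrate the arithmetic.

    # Sustained write bandwidth required by a many-channel extracellular recording system.
    channels = 1024            # assumed probe channel count
    sample_rate_hz = 30_000    # assumed samples per second per channel
    bytes_per_sample = 2       # 16-bit ADC words

    bytes_per_sec = channels * sample_rate_hz * bytes_per_sample
    print(f"{bytes_per_sec / 1e6:.1f} MB/s sustained, "
          f"{bytes_per_sec * 3600 / 1e9:.0f} GB per hour of recording")
    # ~61 MB/s and ~221 GB/h: comfortably within the sequential write rate of a single
    # modern hard drive, which is what makes streaming ADC data straight to disk practical.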
A direct-to-drive neural data acquisition system
Kinney, Justin P.; Bernstein, Jacob G.; Meyer, Andrew J.; Barber, Jessica B.; Bolivar, Marti; Newbold, Bryan; Scholvin, Jorg; Moore-Kochlacs, Caroline; Wentz, Christian T.; Kopell, Nancy J.; Boyden, Edward S.
2015-01-01
Driven by the increasing channel count of neural probes, there is much effort being directed to creating increasingly scalable electrophysiology data acquisition (DAQ) systems. However, all such systems still rely on personal computers for data storage, and thus are limited by the bandwidth and cost of the computers, especially as the scale of recording increases. Here we present a novel architecture in which a digital processor receives data from an analog-to-digital converter, and writes that data directly to hard drives, without the need for a personal computer to serve as an intermediary in the DAQ process. This minimalist architecture may support exceptionally high data throughput, without incurring costs to support unnecessary hardware and overhead associated with personal computers, thus facilitating scaling of electrophysiological recording in the future. PMID:26388740
Analyzing microtomography data with Python and the scikit-image library.
Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan
2017-01-01
The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
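A minimal example of the kind of workflow described above is sketched below: threshold a (here synthetic) 3-D volume, label connected components, and measure each region with scikit-image; the volume is generated in memory so the snippet runs without any data files.

    # Segment and measure blobs in a synthetic 3-D volume with scikit-image.
    import numpy as np
    from skimage import filters, measure

    rng = np.random.default_rng(4)
    volume = rng.normal(0.1, 0.05, size=(64, 64, 64))    # background noise
    volume[20:30, 20:30, 20:30] += 0.8                   # first bright "particle"
    volume[40:45, 10:18, 50:60] += 0.8                   # second particle

    threshold = filters.threshold_otsu(volume)           # global Otsu threshold
    labels = measure.label(volume > threshold)           # 3-D connected-component labelling
    for region in measure.regionprops(labels):
        print(region.label, region.area, region.centroid)  # voxel count and centroid per particle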
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Bauer, K.; Borga, A.
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. Here, the FELIX system, the design of the PCIe prototype card and the integration test results are presented.
Anderson, J.; Bauer, K.; Borga, A.; ...
2016-12-13
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. Here, the FELIX system, the design of the PCIe prototype card and the integration test results are presented.
Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.
Lu, Yang; Lee, Jong Ho; Chen, I-Wei
2017-08-31
Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis for designing scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro- and nanodevices in both filamentary RRAM and nanometallic RRAM; the latter switches uniformly and does not require a forming process. As a result, area scalability can be achieved under a device-area-proportional current compliance for the low resistance state of the filamentary RRAM, and for both the low and high resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.
Pilling, Michael J; Henderson, Alex; Bird, Benjamin; Brown, Mick D; Clarke, Noel W; Gardner, Peter
2016-06-23
Infrared microscopy has become one of the key techniques in the biomedical research field for interrogating tissue. In partnership with multivariate analysis and machine learning techniques, it has become widely accepted as a method that can distinguish between normal and cancerous tissue with both high sensitivity and high specificity. While spectral histopathology (SHP) is highly promising for improved clinical diagnosis, several practical barriers currently exist which need to be addressed before successful implementation in the clinic. Sample throughput and speed of acquisition are key barriers, driven by the high volume of samples awaiting histopathological examination. FTIR chemical imaging utilising FPA technology is currently state-of-the-art for infrared chemical imaging, and recent advances in its technology have dramatically reduced acquisition times. Despite this, infrared microscopy measurements on a tissue microarray (TMA), often encompassing several million spectra, take several hours to acquire. The problem lies with the vast quantities of data that FTIR collects; each pixel in a chemical image is derived from a full infrared spectrum, itself composed of thousands of individual data points. Furthermore, data management is quickly becoming a barrier to clinical translation and poses the question of how to store these incessantly growing data sets. Recently, doubts have been raised as to whether the full spectral range is actually required for accurate disease diagnosis using SHP. These studies suggest that once spectral biomarkers have been predetermined it may be possible to diagnose disease based on a limited number of discrete spectral features. In this study, we explore the possibility of utilising discrete frequency chemical imaging for acquiring high-throughput, high-resolution chemical images. Utilising a quantum cascade laser imaging microscope with discrete frequency collection at key diagnostic wavelengths, we demonstrate that we can diagnose prostate cancer with high sensitivity and specificity. Finally, we extend the study to a large patient dataset utilising tissue microarrays, and show that high sensitivity and specificity can be achieved using high-throughput, rapid data collection, thereby paving the way for practical implementation in the clinic.
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
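The decoder in the paper runs the sum-product algorithm on a GPU; as a compact, CPU-only illustration of the message-passing structure, the sketch below implements the closely related min-sum approximation in NumPy on a tiny parity-check matrix (a Hamming code stands in for a real LDPC matrix). It is not the authors' GPU implementation.

    # Min-sum message-passing decoder on a toy parity-check matrix (illustration only).
    import numpy as np

    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])      # Hamming(7,4) parity checks as a stand-in

    def min_sum_decode(H, llr, iters=20):
        m, n = H.shape
        M = np.zeros((m, n))                               # check-to-variable messages
        for _ in range(iters):
            V = ((llr + M.sum(axis=0)) - M) * H            # variable-to-check messages
            for i in range(m):
                idx = np.flatnonzero(H[i])
                v = V[i, idx]
                signs = np.where(v < 0, -1.0, 1.0)
                mags = np.abs(v)
                for k, j in enumerate(idx):                # exclude the target edge itself
                    M[i, j] = np.prod(signs) * signs[k] * np.min(np.delete(mags, k))
            total = llr + M.sum(axis=0)
            bits = (total < 0).astype(int)
            if not np.any((H @ bits) % 2):                 # all parity checks satisfied
                break
        return bits

    llr = np.array([2.5, 2.1, 1.9, 2.3, 2.2, 2.0, 2.4])    # received LLRs for the all-zero codeword
    llr[2] = -0.4                                          # one unreliable, flipped bit
    print(min_sum_decode(H, llr))                          # expect all zeros after decoding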
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines at an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
NASA Astrophysics Data System (ADS)
Ghandi, M. M.; Barmada, B.; Jones, E. V.; Ghanbari, M.
2006-12-01
This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
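A small worked example of the constraint discussed above: hierarchical 16-QAM carries two high-priority and two low-priority bits per symbol, pinning the layer ratio at 1:1, whereas prioritised FEC leaves the ratio free but adds parity overhead. The layer bit rates and code rate below are assumptions chosen only to make the arithmetic concrete.

    # Layer-ratio constraint of hierarchical 16-QAM vs parity overhead of prioritised FEC.
    base_kbps, enh_kbps = 256, 768        # assumed base/enhancement layer bit rates (1:3 source ratio)

    hqam_hp_bits, hqam_lp_bits = 2, 2     # bits per hierarchical 16-QAM symbol
    print("HQAM fixed base:enhancement ratio =", hqam_hp_bits / hqam_lp_bits,
          "; source ratio =", base_kbps / enh_kbps)

    fec_rate = 0.5                        # rate-1/2 code protecting only the base layer
    channel_kbps = base_kbps / fec_rate + enh_kbps
    print("FEC parity overhead =", channel_kbps - (base_kbps + enh_kbps), "kbps")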
Kim, Eung-Sam; Ahn, Eun Hyun; Chung, Euiheon; Kim, Deok-Ho
2013-01-01
Nanotechnology-based tools are beginning to emerge as promising platforms for quantitative high-throughput analysis of live cells and tissues. Despite unprecedented progress made over the last decade, a challenge still lies in integrating emerging nanotechnology-based tools into macroscopic biomedical apparatuses for practical purposes in biomedical sciences. In this review, we discuss the recent advances and limitations in the analysis and control of mechanical, biochemical, fluidic, and optical interactions in the interface areas of nanotechnology-based materials and living cells in both in vitro and in vivo settings. PMID:24258011
Kim, Eung-Sam; Ahn, Eun Hyun; Chung, Euiheon; Kim, Deok-Ho
2013-12-01
Nanotechnology-based tools are beginning to emerge as promising platforms for quantitative high-throughput analysis of live cells and tissues. Despite unprecedented progress made over the last decade, a challenge still lies in integrating emerging nanotechnology-based tools into macroscopic biomedical apparatuses for practical purposes in biomedical sciences. In this review, we discuss the recent advances and limitations in the analysis and control of mechanical, biochemical, fluidic, and optical interactions in the interface areas of nanotechnology-based materials and living cells in both in vitro and in vivo settings.
Hadoop distributed batch processing for Gaia: a success story
NASA Astrophysics Data System (ADS)
Riello, Marco
2015-12-01
The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data, including the low resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI has been the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven to be a winning choice, giving DPCI unmatched processing throughput and reliability within DPAC, to the point that other DPCs have started to follow in our footsteps. In this talk we will present the software infrastructure developed to build the distributed and scalable batch data processing system that is currently used in production at DPCI, and the excellent performance results achieved by the system.
Scalable Lunar Surface Networks and Adaptive Orbit Access
NASA Technical Reports Server (NTRS)
Wang, Xudong
2015-01-01
Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel-width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.
Multi-Center Traffic Management Advisor Operational Field Test Results
NASA Technical Reports Server (NTRS)
Farley, Todd; Landry, Steven J.; Hoang, Ty; Nickelson, Monicarol; Levin, Kerry M.; Rowe, Dennis W.
2005-01-01
The Multi-Center Traffic Management Advisor (McTMA) is a research prototype system which seeks to bring time-based metering into the mainstream of air traffic control (ATC) operations. Time-based metering is an efficient alternative to traditional air traffic management techniques such as distance-based spacing (miles-in-trail spacing) and managed arrival reservoirs (airborne holding). While time-based metering has demonstrated significant benefit in terms of arrival throughput and arrival delay, its use to date has been limited to arrival operations at just nine airports nationally. Wide-scale adoption of time-based metering has been hampered, in part, by the limited scalability of metering automation. In order to realize the full spectrum of efficiency benefits possible with time-based metering, a much more modular, scalable time-based metering capability is required. With its distributed metering architecture, multi-center TMA offers such a capability.
Unmanned aerial vehicles for high-throughput phenotyping and agronomic research
USDA-ARS?s Scientific Manuscript database
Advances in automation and data science have led agriculturists to seek real-time, high-quality, high-volume crop data to accelerate crop improvement through breeding and to optimize agronomic practices. Breeders have recently gained massive data-collection capability in genome sequencing of plants....
Abdiaj, Irini; Alcázar, Jesús
2017-12-01
We report herein the transfer of dual photoredox and nickel catalysis for C(sp²)–C(sp³) cross-coupling from batch to flow. This new procedure clearly improves the scalability of the previous batch reaction by reducing reactor size and operating time, and allows the preparation of compounds of interest for drug discovery in multigram amounts. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Barcoding Strategy Enabling Higher-Throughput Library Screening by Microscopy.
Chen, Robert; Rishi, Harneet S; Potapov, Vladimir; Yamada, Masaki R; Yeh, Vincent J; Chow, Thomas; Cheung, Celia L; Jones, Austin T; Johnson, Terry D; Keating, Amy E; DeLoache, William C; Dueber, John E
2015-11-20
Dramatic progress has been made in the design and build phases of the design-build-test cycle for engineering cells. However, the test phase usually limits throughput, as many outputs of interest are not amenable to rapid analytical measurements. For example, phenotypes such as motility, morphology, and subcellular localization can be readily measured by microscopy, but analysis of these phenotypes is notoriously slow. To increase throughput, we developed microscopy-readable barcodes (MiCodes) composed of fluorescent proteins targeted to discernible organelles. In this system, a unique barcode can be genetically linked to each library member, making possible the parallel analysis of phenotypes of interest via microscopy. As a first demonstration, we MiCoded a set of synthetic coiled-coil leucine zipper proteins to allow an 8 × 8 matrix to be tested for specific interactions in micrographs consisting of mixed populations of cells. A novel microscopy-readable two-hybrid fluorescence localization assay for probing candidate interactions in the cytosol was also developed using a bait protein targeted to the peroxisome and a prey protein tagged with a fluorescent protein. This work introduces a generalizable, scalable platform for making microscopy amenable to higher-throughput library screening experiments, thereby coupling the power of imaging with the utility of combinatorial search paradigms.
Fogel, Ronen; Limson, Janice; Seshia, Ashwin A
2016-06-30
Resonant and acoustic wave devices have been researched for several decades for application in the gravimetric sensing of a variety of biological and chemical analytes. These devices operate by coupling the measurand (e.g. analyte adsorption) as a modulation in the physical properties of the acoustic wave (e.g. resonant frequency, acoustic velocity, dissipation) that can then be correlated with the amount of adsorbed analyte. These devices can also be miniaturized with advantages in terms of cost, size and scalability, as well as potential additional features including integration with microfluidics and electronics, scaled sensitivities associated with smaller dimensions and higher operational frequencies, the ability to multiplex detection across arrays of hundreds of devices embedded in a single chip, increased throughput and the ability to interrogate a wider range of modes including within the same device. Additionally, device fabrication is often compatible with semiconductor volume batch manufacturing techniques enabling cost scalability and a high degree of precision and reproducibility in the manufacturing process. Integration with microfluidics handling also enables suitable sample pre-processing/separation/purification/amplification steps that could improve selectivity and the overall signal-to-noise ratio. Three device types are reviewed here: (i) bulk acoustic wave sensors, (ii) surface acoustic wave sensors, and (iii) micro/nano-electromechanical system (MEMS/NEMS) sensors. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Flexible session management in a distributed environment
NASA Astrophysics Data System (ADS)
Miller, Zach; Bradley, Dan; Tannenbaum, Todd; Sfiligoi, Igor
2010-04-01
Many secure communication libraries used by distributed systems, such as SSL, TLS, and Kerberos, fail to make a clear distinction between the authentication, session, and communication layers. In this paper we introduce CEDAR, the secure communication library used by the Condor High Throughput Computing software, and present the advantages to a distributed computing system resulting from CEDAR's separation of these layers. Regardless of the authentication method used, CEDAR establishes a secure session key, which has the flexibility to be used for multiple capabilities. We demonstrate how a layered approach to security sessions can avoid round-trips and latency inherent in network authentication. The creation of a distinct session management layer allows for optimizations to improve scalability by way of delegating sessions to other components in the system. This session delegation creates a chain of trust that reduces the overhead of establishing secure connections and enables centralized enforcement of system-wide security policies. Additionally, secure channels based upon UDP datagrams are often overlooked by existing libraries; we show how CEDAR's structure accommodates this as well. As an example of the utility of this work, we show how the use of delegated security sessions and other techniques inherent in CEDAR's architecture enables US CMS to meet their scalability requirements in deploying Condor over large-scale, wide-area grid systems.
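As a rough illustration of the session-reuse idea described above, the following is a conceptual Python sketch, not CEDAR's actual API; the SessionCache class, its methods, and the placeholder authentication are invented for the example. Caching an established session key lets repeated connections to the same peer skip the authentication round-trips, and delegating the cached entry hands the session to another component.

```python
# Conceptual sketch (not CEDAR's API): reuse an established session key so that
# repeated connections to the same peer skip authentication round-trips, and
# delegate that session to another component to extend the chain of trust.
import os
import time

class SessionCache:
    def __init__(self, lifetime_s=3600):
        self.lifetime_s = lifetime_s
        self._sessions = {}  # peer id -> (session key, expiry time)

    def get_or_establish(self, peer):
        """Return a cached session key, or run (expensive) authentication once."""
        entry = self._sessions.get(peer)
        if entry and entry[1] > time.time():
            return entry[0]                      # fast path: no network round-trips
        key = self._authenticate(peer)           # slow path: full authentication
        self._sessions[peer] = (key, time.time() + self.lifetime_s)
        return key

    def delegate(self, peer, other_cache):
        """Hand an existing session to another component (e.g. a worker node)."""
        other_cache._sessions[peer] = self._sessions[peer]

    def _authenticate(self, peer):
        # Placeholder for Kerberos/GSI/password authentication and key exchange.
        return os.urandom(32)

# Usage: the second call returns immediately from the cache.
cache = SessionCache()
k1 = cache.get_or_establish("schedd.example.org")
k2 = cache.get_or_establish("schedd.example.org")
assert k1 == k2
```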
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong
The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao
2016-04-01
The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
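A quick sanity check of the reported figures, together with a simple min-entropy estimate, can be done in a few lines of Python; the bit stream below is a software stand-in (from Python's secrets module) for the jitter-sampled FPGA output and is used only to illustrate the evaluation step.

```python
# Back-of-the-envelope check of the reported figures, plus a simple min-entropy
# estimate of a bit stream. The random bits here are a software stand-in for the
# jitter-sampled FPGA output, used only to illustrate the evaluation step.
import math
import secrets

total_gbps = 7.68
channels = 64
per_channel_mbps = total_gbps * 1000 / channels
print(f"per-channel throughput: {per_channel_mbps:.0f} Mbps")   # 120 Mbps

# Min-entropy per bit, H_min = -log2(max(p0, p1)), from an observed bit stream.
bits = "".join(format(b, "08b") for b in secrets.token_bytes(125_000))  # 1 Mbit
p1 = bits.count("1") / len(bits)
h_min = -math.log2(max(p1, 1 - p1))
print(f"estimated min-entropy per bit: {h_min:.4f}")
```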
Trapnell, Cole; Roberts, Adam; Goff, Loyal; Pertea, Geo; Kim, Daehwan; Kelley, David R; Pimentel, Harold; Salzberg, Steven L; Rinn, John L; Pachter, Lior
2012-01-01
Recent advances in high-throughput cDNA sequencing (RNA-seq) can reveal new genes and splice variants and quantify expression genome-wide in a single assay. The volume and complexity of data from RNA-seq experiments necessitate scalable, fast and mathematically principled analysis software. TopHat and Cufflinks are free, open-source software tools for gene discovery and comprehensive expression analysis of high-throughput mRNA sequencing (RNA-seq) data. Together, they allow biologists to identify new genes and new splice variants of known ones, as well as compare gene and transcript expression under two or more conditions. This protocol describes in detail how to use TopHat and Cufflinks to perform such analyses. It also covers several accessory tools and utilities that aid in managing data, including CummeRbund, a tool for visualizing RNA-seq analysis results. Although the procedure assumes basic informatics skills, these tools assume little to no background with RNA-seq analysis and are meant for novices and experts alike. The protocol begins with raw sequencing reads and produces a transcriptome assembly, lists of differentially expressed and regulated genes and transcripts, and publication-quality visualizations of analysis results. The protocol's execution time depends on the volume of transcriptome sequencing data and available computing resources but takes less than 1 d of computer time for typical experiments and ~1 h of hands-on time. PMID:22383036
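A minimal sketch of the first two steps of the protocol, driven from Python, might look as follows; the Bowtie index prefix, read file names, output directories and thread count are placeholders, and the later cuffmerge/cuffdiff/CummeRbund steps are omitted.

```python
# Minimal sketch: align paired-end reads with TopHat against a Bowtie index,
# then assemble transcripts with Cufflinks from the resulting alignments.
# Paths, index prefix and thread count are placeholders.
import subprocess

THREADS = "8"
subprocess.run(["tophat", "-p", THREADS, "-o", "tophat_out",
                "genome_index", "reads_1.fastq", "reads_2.fastq"], check=True)
subprocess.run(["cufflinks", "-p", THREADS, "-o", "cufflinks_out",
                "tophat_out/accepted_hits.bam"], check=True)
```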
2011-01-01
Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing. PMID:21878105
A fully roll-to-roll gravure-printed carbon nanotube-based active matrix for multi-touch sensors
Lee, Wookyu; Koo, Hyunmo; Sun, Junfeng; Noh, Jinsoo; Kwon, Kye-Si; Yeom, Chiseon; Choi, Younchang; Chen, Kevin; Javey, Ali; Cho, Gyoujin
2015-01-01
Roll-to-roll (R2R) printing has been pursued as a commercially viable high-throughput technology to manufacture flexible, disposable, and inexpensive printed electronic devices. However, in recent years, pessimism has prevailed because of the barriers faced when attempting to fabricate and integrate thin film transistors (TFTs) using an R2R printing method. In this paper, we report 20 × 20 active matrices (AMs) based on single-walled carbon nanotubes (SWCNTs) with a resolution of 9.3 points per inch (ppi), obtained using a fully R2R gravure printing process. By using SWCNTs as the semiconducting layer and poly(ethylene terephthalate) (PET) as the substrate, we have obtained a device yield above 98%, and extracted the key scalability factors required for a feasible R2R gravure manufacturing process. Multi-touch sensor arrays were achieved by laminating a pressure sensitive rubber onto the SWCNT-TFT AM. This R2R gravure printing system overcomes the barriers associated with the registration accuracy of printing each layer and the variation of the threshold voltage (Vth). By overcoming these barriers, the R2R gravure printing method can be viable as an advanced manufacturing technology, thus enabling the high-throughput production of flexible, disposable, and human-interactive cutting-edge electronic devices based on SWCNT-TFT AMs. PMID:26635237
Hard-tip, soft-spring lithography.
Shim, Wooyoung; Braunschweig, Adam B; Liao, Xing; Chai, Jinan; Lim, Jong Kuk; Zheng, Gengfeng; Mirkin, Chad A
2011-01-27
Nanofabrication strategies are becoming increasingly expensive and equipment-intensive, and consequently less accessible to researchers. As an alternative, scanning probe lithography has become a popular means of preparing nanoscale structures, in part owing to its relatively low cost and high resolution, and a registration accuracy that exceeds most existing technologies. However, increasing the throughput of cantilever-based scanning probe systems while maintaining their resolution and registration advantages has from the outset been a significant challenge. Even with impressive recent advances in cantilever array design, such arrays tend to be highly specialized for a given application, expensive, and often difficult to implement. It is therefore difficult to imagine commercially viable production methods based on scanning probe systems that rely on conventional cantilevers. Here we describe a low-cost and scalable cantilever-free tip-based nanopatterning method that uses an array of hard silicon tips mounted onto an elastomeric backing. This method-which we term hard-tip, soft-spring lithography-overcomes the throughput problems of cantilever-based scanning probe systems and the resolution limits imposed by the use of elastomeric stamps and tips: it is capable of delivering materials or energy to a surface to create arbitrary patterns of features with sub-50-nm resolution over centimetre-scale areas. We argue that hard-tip, soft-spring lithography is a versatile nanolithography strategy that should be widely adopted by academic and industrial researchers for rapid prototyping applications.
Leverage hadoop framework for large scale clinical informatics applications.
Dong, Xiao; Bahroos, Neil; Sadhu, Eugene; Jackson, Tommie; Chukhman, Morris; Johnson, Robert; Boyd, Andrew; Hynes, Denise
2013-01-01
In this manuscript, we present our experiences using the Apache Hadoop framework for high data volume and computationally intensive applications, and discuss some best practice guidelines in a clinical informatics setting. There are three main aspects in our approach: (a) process and integrate diverse, heterogeneous data sources using standard Hadoop programming tools and customized MapReduce programs; (b) after fine-grained aggregate results are obtained, perform data analysis using the Mahout data mining library; (c) leverage the column oriented features in HBase for patient centric modeling and complex temporal reasoning. This framework provides a scalable solution to meet the rapidly increasing, imperative "Big Data" needs of clinical and translational research. The intrinsic advantage of fault tolerance, high availability and scalability of Hadoop platform makes these applications readily deployable at the enterprise level cluster environment.
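To illustrate aspect (a), a map/reduce-style aggregation can be sketched in plain Python; the record fields and values below are invented for the example, and the authors' actual jobs of course run as MapReduce programs on a Hadoop cluster rather than in a single process.

```python
# Illustrative map/reduce-style aggregation in plain Python (the paper's actual
# jobs run on Hadoop); the record fields below are invented for the example.
from collections import defaultdict
from itertools import chain

records = [
    {"patient_id": "p1", "lab": "HbA1c", "value": 7.9},
    {"patient_id": "p1", "lab": "HbA1c", "value": 7.1},
    {"patient_id": "p2", "lab": "HbA1c", "value": 6.2},
]

def map_phase(record):
    # Emit (key, value) pairs, as a Hadoop mapper would.
    yield (record["patient_id"], record["lab"]), record["value"]

def reduce_phase(key, values):
    # Aggregate all values for a key, as a Hadoop reducer would.
    return key, sum(values) / len(values)

# Shuffle: group mapper output by key.
groups = defaultdict(list)
for key, value in chain.from_iterable(map(map_phase, records)):
    groups[key].append(value)

for key, values in groups.items():
    print(reduce_phase(key, values))   # mean lab value per (patient, lab)
```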
Lithography Assisted Fiber-Drawing Nanomanufacturing
Gholipour, Behrad; Bastock, Paul; Cui, Long; Craig, Christopher; Khan, Khouler; Hewak, Daniel W.; Soci, Cesare
2016-01-01
We present a high-throughput and scalable technique for the production of metal nanowires embedded in glass fibres by taking advantage of thin film properties and patterning techniques commonly used in planar microfabrication. This hybrid process enables the fabrication of single nanowires and nanowire arrays encased in a preform material within a single fibre draw, providing an alternative to costly and time-consuming iterative fibre drawing. This method allows the combination of materials with different thermal properties to create functional optoelectronic nanostructures. As a proof of principle of the potential of this technique, centimetre long gold nanowires (bulk Tm = 1064 °C) embedded in silicate glass fibres (Tg = 567 °C) were drawn in a single step with high aspect ratios (>10⁴); such nanowires can be released from the glass matrix and show relatively high electrical conductivity. Overall, this fabrication method could enable mass manufacturing of metallic nanowires for plasmonics and nonlinear optics applications, as well as the integration of functional multimaterial structures for completely fiberised optoelectronic devices. PMID:27739543
Lithography Assisted Fiber-Drawing Nanomanufacturing
NASA Astrophysics Data System (ADS)
Gholipour, Behrad; Bastock, Paul; Cui, Long; Craig, Christopher; Khan, Khouler; Hewak, Daniel W.; Soci, Cesare
2016-10-01
We present a high-throughput and scalable technique for the production of metal nanowires embedded in glass fibres by taking advantage of thin film properties and patterning techniques commonly used in planar microfabrication. This hybrid process enables the fabrication of single nanowires and nanowire arrays encased in a preform material within a single fibre draw, providing an alternative to costly and time-consuming iterative fibre drawing. This method allows the combination of materials with different thermal properties to create functional optoelectronic nanostructures. As a proof of principle of the potential of this technique, centimetre long gold nanowires (bulk Tm = 1064 °C) embedded in silicate glass fibres (Tg = 567 °C) were drawn in a single step with high aspect ratios (>10⁴); such nanowires can be released from the glass matrix and show relatively high electrical conductivity. Overall, this fabrication method could enable mass manufacturing of metallic nanowires for plasmonics and nonlinear optics applications, as well as the integration of functional multimaterial structures for completely fiberised optoelectronic devices.
Evaluation of Advanced Polymers for Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios, Orlando; Carter, William G.; Kutchko, Cindy
The goal of this Manufacturing Demonstration Facility (MDF) technical collaboration project between Oak Ridge National Laboratory (ORNL) and PPG Industries, Inc. (PPG) was to evaluate the feasibility of using conventional coatings chemistry and technology to build up material layer-by-layer. The PPG-ORNL study successfully demonstrated that polymeric coatings formulations may overcome many limitations of common thermoplastics used in additive manufacturing (AM), allow lightweight nozzle design for material deposition, and increase build rate. The materials effort focused on layer-by-layer deposition of coatings with each layer fusing together. The combination of materials and deposition results in an additively manufactured build that has sufficient mechanical properties to bear the load of additional layers, yet is capable of bonding across the z-layers to improve build direction strength. The formulation properties were tuned to enable a novel, high-throughput deposition method that is highly scalable, compatible with high loading of reinforcing fillers, and inherently low-cost.
Initial steps towards a production platform for DNA sequence analysis on the grid.
Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D
2010-12-14
Bioinformatics is confronted with a new data explosion due to the availability of high throughput DNA sequencers. Data storage and analysis become a problem on local servers, and it is therefore necessary to switch to other IT infrastructures. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept. The analysis time was significantly reduced. All workflows and executables are available for the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be transported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next generation sequencers. We currently adopt this methodology on a daily basis for DNA sequencing and other applications. More information and source code are available via http://www.bioinformaticslaboratory.nl/
High-throughput countercurrent microextraction in passive mode.
Xie, Tingliang; Xu, Cong
2018-05-15
Although microextraction is much more efficient than conventional macroextraction, its practical application has been limited by low throughputs and difficulties in constructing robust countercurrent microextraction (CCME) systems. In this work, a robust CCME process was established based on a novel passive microextractor with four units without any moving parts. The passive microextractor has internal recirculation and can efficiently mix two immiscible liquids. The hydraulic characteristics as well as the extraction and back-extraction performance of the passive CCME were investigated experimentally. The recovery efficiencies of the passive CCME were 1.43-1.68 times larger than the best values achieved using cocurrent extraction. Furthermore, the total throughput of the passive CCME developed in this work was about one to three orders of magnitude higher than that of other passive CCME systems reported in the literature. Therefore, a robust CCME process with high throughputs has been successfully constructed, which may promote the application of passive CCME in a wide variety of fields.
NASA Astrophysics Data System (ADS)
Biercamp, Joachim; Adamidis, Panagiotis; Neumann, Philipp
2017-04-01
With the exa-scale era approaching, length and time scales used for climate research on the one hand and numerical weather prediction on the other blend into each other. The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) represents a European consortium comprising partners from climate, weather and HPC in their effort to address key scientific challenges that both communities have in common. A particular challenge is to reach global models with spatial resolutions that allow simulating convective clouds and small-scale ocean eddies. These simulations would produce better predictions of trends and provide much more fidelity in the representation of high-impact regional events. However, running such models in operational mode, i.e. with sufficient throughput in ensemble mode, will clearly require exa-scale computing and data handling capability. We will discuss the ESiWACE initiative and relate it to work-in-progress on high-resolution simulations in Europe. We present recent strong scalability measurements from ESiWACE to demonstrate current computability in weather and climate simulation. A special focus in this particular talk is on the Icosahedral Nonhydrostatic (ICON) model, used for a comparison of high resolution regional and global simulations with high quality observation data. We demonstrate that close-to-optimal parallel efficiency can be achieved in strong scaling global resolution experiments on Mistral/DKRZ, e.g. 94% for 5 km resolution simulations using 36k cores. Based on our scalability and high-resolution experiments, we deduce and extrapolate future capabilities for ICON that are expected for weather and climate research at exascale.
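For reference, strong-scaling parallel efficiency is usually computed as E(N) = (N_ref * T_ref) / (N * T_N); the short sketch below uses an assumed reference run purely to show the arithmetic, while the 94% figure quoted above is the authors' own 36k-core measurement.

```python
# Strong-scaling parallel efficiency: E(N) = (N_ref * T_ref) / (N * T_N).
# The reference run below is assumed purely for illustration.
def strong_scaling_efficiency(n_ref, t_ref, n, t_n):
    return (n_ref * t_ref) / (n * t_n)

# Example: doubling cores from 18k to 36k while run time drops from 200 s to 106 s.
print(f"{strong_scaling_efficiency(18_000, 200.0, 36_000, 106.0):.2%}")  # ~94%
```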
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility principles guide the implementation of the solver. As the forward modelling engine, a modern scalable solver, extrEMe, based on a contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize the inverse domain, so-called mask parameterization is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (the sixth-ranked supercomputer in the world), demonstrate practically linear scalability of the code up to thousands of nodes.
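The inversion loop described above can be sketched schematically as a quasi-Newton search over a regularized misfit; in the toy example below the forward operator is a random linear matrix standing in for the contracting-integral-equation engine, and the analytic gradient plays the role of the adjoint-computed gradient, so this is an illustration of the optimization scheme rather than of the solver itself.

```python
# Sketch of the inversion loop only: a quasi-Newton (L-BFGS) search minimising a
# regularised misfit with an analytic gradient. The linear toy operator G stands
# in for the real forward-modelling engine.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 20))          # toy forward operator
m_true = rng.standard_normal(20)           # "true" model
d_obs = G @ m_true + 0.01 * rng.standard_normal(40)
lam = 1e-2                                  # regularisation weight

def misfit(m):
    r = G @ m - d_obs
    return 0.5 * r @ r + 0.5 * lam * m @ m

def gradient(m):
    # Plays the role of the adjoint-computed gradient in the real solver.
    return G.T @ (G @ m - d_obs) + lam * m

result = minimize(misfit, np.zeros(20), jac=gradient, method="L-BFGS-B")
print("recovered model error:", np.linalg.norm(result.x - m_true))
```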
Caron, Leslie; Kher, Devaki; Lee, Kian Leong; McKernan, Robert; Dumevska, Biljana; Hidalgo, Alejandro; Li, Jia; Yang, Henry; Main, Heather; Ferri, Giulia; Petek, Lisa M; Poellinger, Lorenz; Miller, Daniel G; Gabellini, Davide; Schmidt, Uli
2016-09-01
Facioscapulohumeral muscular dystrophy (FSHD) represents a major unmet clinical need arising from the progressive weakness and atrophy of skeletal muscles. The dearth of adequate experimental models has severely hampered our understanding of the disease. To date, no treatment is available for FSHD. Human embryonic stem cells (hESCs) potentially represent a renewable source of skeletal muscle cells (SkMCs) and provide an alternative to invasive patient biopsies. We developed a scalable monolayer system to differentiate hESCs into mature SkMCs within 26 days, without cell sorting or genetic manipulation. Here we show that SkMCs derived from FSHD1-affected hESC lines exclusively express the FSHD pathogenic marker double homeobox 4 and exhibit some of the defects reported in FSHD. FSHD1 myotubes are thinner when compared with unaffected and Becker muscular dystrophy myotubes, and differentially regulate genes involved in cell cycle control, oxidative stress response, and cell adhesion. This cellular model will be a powerful tool for studying FSHD and will ultimately assist in the development of effective treatments for muscular dystrophies. This work describes an efficient and highly scalable monolayer system to differentiate human pluripotent stem cells (hPSCs) into skeletal muscle cells (SkMCs) and demonstrates disease-specific phenotypes in SkMCs derived from both embryonic and induced hPSCs affected with facioscapulohumeral muscular dystrophy. This study represents the first human stem cell-based cellular model for a muscular dystrophy that is suitable for high-throughput screening and drug development. ©AlphaMed Press.
Designing and application of SAN extension interface based on CWDM
NASA Astrophysics Data System (ADS)
Qin, Leihua; Yu, Shengsheng; Zhou, Jingli
2005-11-01
As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data centers. In order to mitigate the risk of losing data and improve data availability, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier grade environment with zero data loss, scalable throughput, low jitter, high security and the ability to travel long distances. To address these business requirements, there are three basic architectures for storage extension: Storage over Internet Protocol, Storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) and Storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency) and multiple carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost and high-performance connectivity solution for enterprises deploying storage extension. In this paper, we design a storage extension connection over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connection; testing results show that the performance of the connection over CWDM is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on the CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Decentralized Orchestration of Composite Ogc Web Processing Services in the Cloud
NASA Astrophysics Data System (ADS)
Xiao, F.; Shea, G. Y. K.; Cao, J.
2016-09-01
Current web-based GIS or RS applications generally rely on a centralized structure, which has inherent drawbacks such as single points of failure, network congestion, and data inconsistency. These disadvantages of traditional GISs need to be overcome for new applications on the Internet or Web. Decentralized orchestration offers performance improvements in terms of increased throughput and scalability and lower response time. This paper investigates build-time and runtime issues related to decentralized orchestration of composite geospatial processing services based on the OGC WPS standard specification. A case study of dust storm detection was carried out to evaluate the proposed method, and the experimental results indicate that the method is effective, producing high-quality solutions at a low communication cost for the geospatial processing service composition problem.
Scalable-manufactured randomized glass-polymer hybrid metamaterial for daytime radiative cooling
NASA Astrophysics Data System (ADS)
Zhai, Yao; Ma, Yaoguang; David, Sabrina N.; Zhao, Dongliang; Lou, Runnan; Tan, Gang; Yang, Ronggui; Yin, Xiaobo
2017-03-01
Passive radiative cooling draws heat from surfaces and radiates it into space as infrared radiation to which the atmosphere is transparent. However, the energy density mismatch between solar irradiance and the low infrared radiation flux from a near-ambient-temperature surface requires materials that strongly emit thermal energy and barely absorb sunlight. We embedded resonant polar dielectric microspheres randomly in a polymeric matrix, resulting in a metamaterial that is fully transparent to the solar spectrum while having an infrared emissivity greater than 0.93 across the atmospheric window. When backed with a silver coating, the metamaterial shows a noontime radiative cooling power of 93 watts per square meter under direct sunshine. More critically, we demonstrated high-throughput, economical roll-to-roll manufacturing of the metamaterial, which is vital for promoting radiative cooling as a viable energy technology.
Jackson, Rebecca D; Best, Thomas M; Borlawsky, Tara B; Lai, Albert M; James, Stephen; Gurcan, Metin N
2012-01-01
The conduct of clinical and translational research regularly involves the use of a variety of heterogeneous and large-scale data resources. Scalable methods for the integrative analysis of such resources, particularly when attempting to leverage computable domain knowledge in order to generate actionable hypotheses in a high-throughput manner, remain an open area of research. In this report, we describe both a generalizable design pattern for such integrative knowledge-anchored hypothesis discovery operations and our experience in applying that design pattern in the experimental context of a set of driving research questions related to the publicly available Osteoarthritis Initiative data repository. We believe that this ‘test bed’ project and the lessons learned during its execution are both generalizable and representative of common clinical and translational research paradigms. PMID:22647689
Formation of droplet interface bilayers in a Teflon tube
NASA Astrophysics Data System (ADS)
Walsh, Edmond; Feuerborn, Alexander; Cook, Peter R.
2016-09-01
Droplet-interface bilayers (DIBs) have applications in disciplines ranging from biology to computing. We present a method for forming them manually using a Teflon tube attached to a syringe pump; this method is simple enough that it should be accessible to those without expertise in microfluidics. It exploits the properties of interfaces between three immiscible liquids, and uses fluid flow through the tube to pack together drops coated with lipid monolayers to create bilayers at points of contact. It is used to create functional nanopores in DIBs composed of phosphocholine using the protein α-hemolysin (αHL), to demonstrate osmotically-driven mass transfer of fluid across surfactant-based DIBs, and to create arrays of DIBs. The approach is scalable, and thousands of DIBs can be prepared using a robot in one hour; therefore, it is feasible to use it for high-throughput applications.
Optimizing transformations for automated, high throughput analysis of flow cytometry data
2010-01-01
Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Conclusions Our results indicate that the preferred transformation for fluorescence channels is a parameter- optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package, flowTrans, which is publicly available. PMID:21050468
Optimizing transformations for automated, high throughput analysis of flow cytometry data.
Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael
2010-11-04
In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Our results indicate that the preferred transformation for fluorescence channels is a parameter- optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package, flowTrans, which is publicly available.
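The parameter-optimization idea described above can be illustrated with a simplified maximum-likelihood criterion for the hyperbolic arcsine transform: assume the transformed events are roughly Gaussian and include the Jacobian of the transform in the likelihood. This mirrors the approach but is not the flowTrans implementation, and the two-population data below are simulated.

```python
# Simplified sketch of choosing the arcsinh scale parameter by maximum likelihood:
# model the transformed events as Gaussian and include the Jacobian of the
# transform. Not the flowTrans implementation; the data are simulated.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.lognormal(2.0, 0.5, 5000),    # dim population
                    rng.lognormal(5.0, 0.4, 5000)])   # bright population

def neg_log_lik(log_a):
    a = np.exp(log_a)
    y = np.arcsinh(a * x)
    jac = np.log(a) - 0.5 * np.log1p((a * x) ** 2)     # log |dy/dx|
    return -(norm.logpdf(y, y.mean(), y.std()).sum() + jac.sum())

best = minimize_scalar(neg_log_lik, bounds=(-10, 5), method="bounded")
print("optimised arcsinh scale a =", np.exp(best.x))
```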
Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G
2013-01-01
Research objective To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. Conclusions End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
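The quality-measure logic described above (patients aged 18-75 with diabetes whose most recent LDL result is below 100 mg/dL) reduces to a simple filter once the data are normalized; the sketch below uses invented field names and toy records in place of CEMs and the rules engine.

```python
# Sketch of the quality-measure logic applied to an already-normalized patient
# table. Field names and records are invented; the real pipeline drives this
# from CEMs, cTAKES output and a rules engine rather than plain dictionaries.
from datetime import date

patients = [
    {"id": 1, "age": 64, "has_diabetes": True,
     "ldl": [(date(2012, 3, 1), 130.0), (date(2012, 9, 1), 92.0)]},
    {"id": 2, "age": 71, "has_diabetes": True,
     "ldl": [(date(2012, 6, 1), 115.0)]},
    {"id": 3, "age": 45, "has_diabetes": False,
     "ldl": [(date(2012, 5, 1), 88.0)]},
]

denominator = [p for p in patients if p["has_diabetes"] and 18 <= p["age"] <= 75]
numerator = [p for p in denominator
             if p["ldl"] and max(p["ldl"])[1] < 100.0]  # most recent LDL < 100 mg/dL

print(len(denominator), len(numerator))   # 2 1
```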
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jaekyun; Kim, Myung -Gil; Kim, Jaehyun
The success of silicon based high density integrated circuits ignited explosive expansion of microelectronics. Although the inorganic semiconductors have shown superior carrier mobilities for conventional high speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, brought intensive research on next generation electronic materials. The rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, are demonstrated with low-cost solution processes, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of these soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrated a facile, general route for high throughput sub-micron patterning of soft materials, using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, the highly energetic photons (e.g. deep-ultraviolet rays) enable direct photo-conversion from a conducting/semiconducting to an insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. As a result, we expect the successful demonstration of organic semiconductor circuitry to promote industrial adoption of soft materials for next generation electronics.
Leung, Kaston; Klaus, Anders; Lin, Bill K; Laks, Emma; Biele, Justina; Lai, Daniel; Bashashati, Ali; Huang, Yi-Fei; Aniba, Radhouane; Moksa, Michelle; Steif, Adi; Mes-Masson, Anne-Marie; Hirst, Martin; Shah, Sohrab P; Aparicio, Samuel; Hansen, Carl L
2016-07-26
The genomes of large numbers of single cells must be sequenced to further understanding of the biological significance of genomic heterogeneity in complex systems. Whole genome amplification (WGA) of single cells is generally the first step in such studies, but is prone to nonuniformity that can compromise genomic measurement accuracy. Despite recent advances, robust performance in high-throughput single-cell WGA remains elusive. Here, we introduce droplet multiple displacement amplification (MDA), a method that uses commercially available liquid dispensing to perform high-throughput single-cell MDA in nanoliter volumes. The performance of droplet MDA is characterized using a large dataset of 129 normal diploid cells, and is shown to exceed previously reported single-cell WGA methods in amplification uniformity, genome coverage, and/or robustness. We achieve up to 80% coverage of a single-cell genome at 5× sequencing depth, and demonstrate excellent single-nucleotide variant (SNV) detection using targeted sequencing of droplet MDA product to achieve a median allelic dropout of 15%, and using whole genome sequencing to achieve false and true positive rates of 9.66 × 10(-6) and 68.8%, respectively, in a G1-phase cell. We further show that droplet MDA allows for the detection of copy number variants (CNVs) as small as 30 kb in single cells of an ovarian cancer cell line and as small as 9 Mb in two high-grade serous ovarian cancer samples using only 0.02× depth. Droplet MDA provides an accessible and scalable method for performing robust and accurate CNV and SNV measurements on large numbers of single cells.
Validation and implementation of a novel high-throughput behavioral phenotyping instrument for mice
Brodkin, Jesse; Frank, Dana; Grippo, Ryan; Hausfater, Michal; Gulinello, Maria; Achterholt, Nils; Gutzen, Christian
2015-01-01
Background Behavioral assessment of mutant mouse models and novel candidate drugs is a slow and labor intensive process. This limitation produces a significant impediment to CNS drug discovery. New method By combining video and vibration analysis we created an automated system that provides the most detailed description of mouse behavior available. Our system (The Behavioral Spectrometer) allowed for the rapid assessment of behavioral abnormalities in the BTBR model of Autism, the restraint model of stress and the irritant model of inflammatory pain. Results We found that each model produced a unique alteration of the spectrum of behavior emitted by the mice. BTBR mice engaged in more grooming and less rearing behaviors. Prior restraint stress produced dramatic increases in grooming activity at the expense of locomotor behavior. Pain produced profound decreases in emitted behavior that were reversible with analgesic treatment. Comparison with existing method(s) We evaluated our system through a direct comparison on the same subjects with the current “gold standard” of human observation of video recordings. Using the same mice evaluated over the same range of behaviors, the Behavioral Spectrometer produced a quantitative categorization of behavior that was highly correlated with the scores produced by trained human observers (r=0.97). Conclusions Our results show that this new system is a highly valid and sensitive method to characterize behavioral effects in mice. As a fully automated and easily scalable instrument the Behavioral Spectrometer represents a high-throughput behavioral tool that reduces the time and labor involved in behavioral research. PMID:24384067
Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility
Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios
2016-01-01
Due to the unpleasant and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computations. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), and also courier nodes (CNs), to minimize the energy consumption of nodes. MS and CNs stop at specific stops for data gathering; later on, CNs forward the received data to the MS for further transmission. By the mobility of CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability. PMID:27007373
Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility.
Akbar, Mariam; Javaid, Nadeem; Khan, Ayesha Hussain; Imran, Muhammad; Shoaib, Muhammad; Vasilakos, Athanasios
2016-03-19
Due to the unpleasant and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computations. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), and also courier nodes (CNs), to minimize the energy consumption of nodes. MS and CNs stop at specific stops for data gathering; later on, CNs forward the received data to the MS for further transmission. By the mobility of CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability.
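The energy argument for courier nodes can be illustrated with a standard first-order radio model; the constants and distances below are illustrative only and are not taken from the paper's simulation setup.

```python
# Illustrative energy comparison using a standard first-order radio model
# (E_tx = bits * (E_elec + eps_amp * d**2)). Constants and the scenario are
# illustrative only, not the paper's simulation parameters.
E_ELEC = 50e-9        # J/bit, electronics energy
EPS_AMP = 100e-12     # J/bit/m^2, amplifier energy
PACKET_BITS = 4000

def tx_energy(distance_m, bits=PACKET_BITS):
    return bits * (E_ELEC + EPS_AMP * distance_m ** 2)

direct_to_sink = tx_energy(500.0)     # long haul to a fixed surface sink
to_courier_node = tx_energy(50.0)     # short hop to a passing courier node

print(f"direct: {direct_to_sink*1e3:.2f} mJ, via courier: {to_courier_node*1e3:.3f} mJ")
```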
External evaluation of the Dimension Vista 1500® intelligent lab system.
Bruneel, Arnaud; Dehoux, Monique; Barnier, Anne; Boutten, Anne
2012-09-01
Dimension Vista® analyzer combines four technologies (photometry, nephelometry, V-LYTE® integrated multisensor potentiometry, and LOCI® chemiluminescence) into one high-throughput system. We assessed the analytical performance of assays routinely performed in our emergency laboratory according to the VALTEC protocol, as well as practicability. Precision was good for most parameters. The analytical domain was large and suitable for undiluted analysis in most clinical settings encountered in our hospital. Data were comparable and correlated to our routine analyzers (Roche Modular DP®, Abbott AXSYM®, Siemens Dimension® RxL, and BN ProSpec®). Performance of the nephelometric and LOCI modules was excellent. Functional sensitivity of high-sensitivity C-reactive protein and cardiac troponin I were 0.165 mg/l and 0.03 ng/ml, respectively (coefficient of variation; CV < 10%). The influence of interfering substances (i.e., hemoglobin, bilirubin, or lipids) was moderate, and Dimension Vista® specifically alerted for interference according to HIL (hemolysis, icterus, lipemia) indices. Good instrument performance and full functionality (no reagent or sample carryover in the conditions evaluated, effective sample-volume detection, and clot detection) were confirmed. Simulated routine testing demonstrated excellent practicability, throughput, ease of use of software and security. Performance and practicability of Dimension Vista® are highly suitable for both routine and emergency use. Since no volume detection, and thus no warning, is available on limited sample racks, pediatric samples require special attention to the Siemens protocol in order to be analyzed under secure conditions. Our experience in routine practice is also discussed, i.e., the impact of daily workload, "manual" steps resulting from dilutions and pediatric samples, maintenance, and flex hydration on the instrument's throughput and turnaround time. © 2012 Wiley Periodicals, Inc.
Life in the fast lane for protein crystallization and X-ray crystallography
NASA Technical Reports Server (NTRS)
Pusey, Marc L.; Liu, Zhi-Jie; Tempel, Wolfram; Praissman, Jeremy; Lin, Dawei; Wang, Bi-Cheng; Gavira, Jose A.; Ng, Joseph D.
2005-01-01
The common goal for structural genomic centers and consortiums is to decipher as quickly as possible the three-dimensional structures for a multitude of recombinant proteins derived from known genomic sequences. Since X-ray crystallography is the foremost method to acquire atomic resolution for macromolecules, the limiting step is obtaining protein crystals that can be useful for structure determination. High-throughput methods have been developed in recent years to clone, express, purify, crystallize and determine the three-dimensional structure of a protein gene product rapidly using automated devices, commercialized kits and consolidated protocols. However, the average number of protein structures obtained for most structural genomic groups has been very low compared to the total number of proteins purified. As more entire genomic sequences are obtained for different organisms from the three kingdoms of life, only the proteins that can be crystallized and whose structures can be obtained easily are studied. Consequently, an astonishing number of genomic proteins remain unexamined. In the era of high-throughput processes, traditional methods in molecular biology, protein chemistry and crystallization are eclipsed by automation and pipeline practices. The necessity for high-rate production of protein crystals and structures has prevented the usage of more intellectual strategies and creative approaches in experimental executions. Fundamental principles and personal experiences in protein chemistry and crystallization are minimally exploited only to obtain "low-hanging fruit" protein structures. We review the practical aspects of today's high-throughput manipulations and discuss the challenges in fast-paced protein crystallization and tools for crystallography. Structural genomic pipelines can be improved with information gained from low-throughput tactics that may help us reach the higher-bearing fruits. Examples of recent developments in this area are reported from the efforts of the Southeast Collaboratory for Structural Genomics (SECSG).
Life in the Fast Lane for Protein Crystallization and X-Ray Crystallography
NASA Technical Reports Server (NTRS)
Pusey, Marc L.; Liu, Zhi-Jie; Tempel, Wolfram; Praissman, Jeremy; Lin, Dawei; Wang, Bi-Cheng; Gavira, Jose A.; Ng, Joseph D.
2004-01-01
The common goal for structural genomic centers and consortiums is to decipher as quickly as possible the three-dimensional structures for a multitude of recombinant proteins derived from known genomic sequences. Since X-ray crystallography is the foremost method to acquire atomic resolution for macromolecules, the limiting step is obtaining protein crystals that can be useful for structure determination. High-throughput methods have been developed in recent years to clone, express, purify, crystallize and determine the three-dimensional structure of a protein gene product rapidly using automated devices, commercialized kits and consolidated protocols. However, the average number of protein structures obtained for most structural genomic groups has been very low compared to the total number of proteins purified. As more entire genomic sequences are obtained for different organisms from the three kingdoms of life, only the proteins that can be crystallized and whose structures can be obtained easily are studied. Consequently, an astonishing number of genomic proteins remain unexamined. In the era of high-throughput processes, traditional methods in molecular biology, protein chemistry and crystallization are eclipsed by automation and pipeline practices. The necessity for high-rate production of protein crystals and structures has prevented the usage of more intellectual strategies and creative approaches in experimental executions. Fundamental principles and personal experiences in protein chemistry and crystallization are minimally exploited only to obtain "low-hanging fruit" protein structures. We review the practical aspects of today's high-throughput manipulations and discuss the challenges in fast-paced protein crystallization and tools for crystallography. Structural genomic pipelines can be improved with information gained from low-throughput tactics that may help us reach the higher-bearing fruits. Examples of recent developments in this area are reported from the efforts of the Southeast Collaboratory for Structural Genomics (SECSG).
Code of Federal Regulations, 2014 CFR
2014-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000 Gallons of Gasoline or More1... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants for Source Category: Gasoline... Criteria and Management Practices for Gasoline Dispensing Facilities With Monthly Throughput of 100,000...
ERIC Educational Resources Information Center
Capobianco, Brenda M.; DeLisi, Jacqueline; Radloff, Jeffrey
2018-01-01
In an effort to document teachers' enactments of new reform in science teaching, valid and scalable measures of science teaching using engineering design are needed. This study describes the development and testing of an approach for documenting and characterizing elementary science teachers' multiday enactments of engineering design-based science…
High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onunkwo, Uzoma
Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions’ expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech’s goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation’s critical cyber infrastructures exposed to wireless communications.
NASA Astrophysics Data System (ADS)
Haag, Sebastian; Bernhardt, Henning; Rübenach, Olaf; Haverkamp, Tobias; Müller, Tobias; Zontar, Daniel; Brecher, Christian
2015-02-01
In many applications for high-power diode lasers, the production of beam-shaping and homogenizing optical systems experiences rising volumes and dynamic market demands. The automation of assembly processes on flexible and reconfigurable machines can contribute to a more responsive and scalable production. The paper presents a flexible mounting device designed for the challenging assembly of side-tab based optical systems. It provides design elements for precisely referencing and fixating two optical elements in a well-defined geometric relation. Side tabs are presented to the machine, allowing the application of glue, and a rotating mechanism allows them to be attached to the optical elements. The device can be adjusted to fit different form factors and it can be used in high-volume assembly machines. The paper shows the utilization of the device for a collimation module consisting of a fast-axis and a slow-axis collimation lens. Results regarding the repeatability and process capability of bonding side-tab assemblies, as well as estimates from 3D simulation of overall performance indicators such as cycle time and throughput, are discussed.
High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.
Simonyan, Vahan; Mazumder, Raja
2014-09-30
The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.
High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis
Simonyan, Vahan; Mazumder, Raja
2014-01-01
The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, new phase-coarsening kinetics are found in the region of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
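The abstract above reports a speed-up and scalability analysis together with a runtime-prediction model. As a minimal sketch of how such strong-scaling figures are typically computed from measured wall-clock times, the snippet below derives speed-up and parallel efficiency; the process counts and timings are hypothetical placeholders, not results from the cited study.

```python
# Minimal strong-scaling sketch: speed-up S(p) = T(1)/T(p) and efficiency
# E(p) = S(p)/p from measured wall-clock times. All numbers below are
# hypothetical placeholders, not data from the phase-field study.

def speedup_and_efficiency(t_serial, timings):
    """timings: dict mapping process count p -> wall-clock time T(p) in seconds."""
    results = {}
    for p, t_p in sorted(timings.items()):
        s = t_serial / t_p
        results[p] = (s, s / p)
    return results

if __name__ == "__main__":
    t1 = 3600.0                              # assumed serial runtime
    times = {8: 520.0, 64: 75.0, 512: 12.0}  # assumed parallel runtimes
    for p, (s, e) in speedup_and_efficiency(t1, times).items():
        print(f"p={p:4d}  speed-up={s:7.1f}  efficiency={e:5.2f}")
```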
Targeted post-mortem computed tomography cardiac angiography: proof of concept.
Saunders, Sarah L; Morgan, Bruno; Raj, Vimal; Robinson, Claire E; Rutty, Guy N
2011-07-01
With the increasing use and availability of multi-detector computed tomography and magnetic resonance imaging in autopsy practice, there has been an international push towards the development of the so-called near virtual autopsy. However, currently, a significant obstacle to the consideration as to whether or not near virtual autopsies could one day replace the conventional invasive autopsy is the failure of post-mortem imaging to yield detailed information concerning the coronary arteries. To date, a cost-effective, practical solution to allow high throughput imaging has not been presented within the forensic literature. We present a proof of concept paper describing a simple, quick, cost-effective, manual, targeted in situ post-mortem cardiac angiography method using a minimally invasive approach, to be used with multi-detector computed tomography for high throughput cadaveric imaging which can be used in permanent or temporary mortuaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anazagasty, Cristain; Hianik, Tibor; Ivanov, Ilia N
The proliferation of environmental sensors for internet of things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits performance comparable to that of a commercial QCM system, while enabling high-throughput six-QCM testing for under $1,000. We use deep neural network structures to process the sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
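The ~1 ng mass resolution quoted above is conventionally tied to the measured frequency shift of a QCM crystal through the Sauerbrey relation; the abstract does not state which model the authors apply, so the standard textbook form is reproduced here only for orientation:

\[
\Delta f \;=\; -\,\frac{2 f_0^{2}}{A\,\sqrt{\rho_q\,\mu_q}}\;\Delta m ,
\]

where \(f_0\) is the resonant frequency of the unloaded crystal, \(A\) its active area, \(\rho_q\) and \(\mu_q\) the density and shear modulus of quartz, and \(\Delta m\) the adsorbed mass.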
PRADA: pipeline for RNA sequencing data analysis.
Torres-García, Wandaliz; Zheng, Siyuan; Sivachenko, Andrey; Vegesna, Rahulsimham; Wang, Qianghu; Yao, Rong; Berger, Michael F; Weinstein, John N; Getz, Gad; Verhaak, Roel G W
2014-08-01
Technological advances in high-throughput sequencing necessitate improved computational tools for processing and analyzing large-scale datasets in a systematic, automated manner. For that purpose, we have developed PRADA (Pipeline for RNA-Sequencing Data Analysis), a flexible, modular and highly scalable software platform that provides many different types of information through multifaceted analysis starting from raw paired-end RNA-seq data: gene expression levels, quality metrics, detection of unsupervised and supervised fusion transcripts, detection of intragenic fusion variants, homology scores and fusion frame classification. PRADA uses a dual-mapping strategy that increases sensitivity and refines the analytical endpoints. PRADA has been used extensively and successfully in the glioblastoma and renal clear cell projects of The Cancer Genome Atlas program. Availability: http://sourceforge.net/projects/prada/. Contact: gadgetz@broadinstitute.org or rverhaak@mdanderson.org. Supplementary data are available at Bioinformatics online.
How Much Higher Can HTCondor Fly?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fajardo, E. M.; Dost, J. M.; Holzman, B.
The HTCondor high throughput computing system is heavily used in the high energy physics (HEP) community as the batch system for several Worldwide LHC Computing Grid (WLCG) resources. Moreover, it is the backbone of GlideinWMS, the pilot system used by the computing organization of the Compact Muon Solenoid (CMS) experiment. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor with a goal of reaching 200,000 simultaneous running jobs in a single internationally distributed dynamic pool. In this paper, we first describe how we created an opportunistic distributed testbed capable of exercising runs with 200,000 simultaneous jobs without impacting production. This testbed methodology is appropriate not only for scale testing HTCondor, but potentially for many other services. In addition to the test conditions and the testbed topology, we include the suggested configuration options used to obtain the scaling results, and describe some of the changes to HTCondor inspired by our testing that enabled sustained operations at scales well beyond previous limits.
Choi, Heejin; Tzeranis, Dimitrios S.; Cha, Jae Won; Clémenceau, Philippe; de Jong, Sander J. G.; van Geest, Lambertus K.; Moon, Joong Ho; Yannas, Ioannis V.; So, Peter T. C.
2012-01-01
Fluorescence and phosphorescence lifetime imaging are powerful techniques for studying intracellular protein interactions and for diagnosing tissue pathophysiology. While lifetime-resolved microscopy has long been in the repertoire of the biophotonics community, current implementations fall short in terms of simultaneously providing 3D resolution, high throughput, and good tissue penetration. This report describes a new highly efficient lifetime-resolved imaging method that combines temporal focusing wide-field multiphoton excitation and simultaneous acquisition of lifetime information in frequency domain using a nanosecond gated imager from a 3D-resolved plane. This approach is scalable allowing fast volumetric imaging limited only by the available laser peak power. The accuracy and performance of the proposed method is demonstrated in several imaging studies important for understanding peripheral nerve regeneration processes. Most importantly, the parallelism of this approach may enhance the imaging speed of long lifetime processes such as phosphorescence by several orders of magnitude. PMID:23187477
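In frequency-domain lifetime imaging of the kind described above, a lifetime is typically recovered from the phase shift \(\phi\) and the demodulation \(m\) of the emission at angular modulation frequency \(\omega\). The abstract does not give the estimator used, so the standard single-exponential relations are shown here only as a reminder of the principle:

\[
\tau_{\phi} \;=\; \frac{\tan\phi}{\omega},
\qquad
\tau_{m} \;=\; \frac{1}{\omega}\sqrt{\frac{1}{m^{2}} - 1} .
\]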
3D bioprinting for vascularized tissue fabrication
Richards, Dylan; Jia, Jia; Yost, Michael; Markwald, Roger; Mei, Ying
2016-01-01
3D bioprinting holds remarkable promise for rapid fabrication of 3D tissue engineering constructs. Given its scalability, reproducibility, and precise multi-dimensional control that traditional fabrication methods do not provide, 3D bioprinting provides a powerful means to address one of the major challenges in tissue engineering: vascularization. The moderate success of current tissue engineering strategies has been attributed to the current inability to fabricate thick tissue engineering constructs that contain endogenous, engineered vasculature or nutrient channels that can integrate with the host tissue. Successful fabrication of a vascularized tissue construct requires synergy between high-throughput, high-resolution bioprinting of larger perfusable channels and instructive bioink that promotes angiogenic sprouting and neovascularization. This review aims to cover the recent progress in the field of 3D bioprinting of vascularized tissues. It will cover the methods of bioprinting vascularized constructs, bioink for vascularization, and perspectives on recent innovations in 3D printing and biomaterials for the next generation of 3D bioprinting for vascularized tissue fabrication. PMID:27230253
Machine learning and data science in soft materials engineering
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.
2018-01-01
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
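Principal component analysis is the first tool in the taxonomy surveyed above. As a minimal, self-contained illustration (synthetic data, not an example from the review), the sketch below projects a 50-dimensional toy data set onto its leading components with scikit-learn:

```python
# Minimal PCA sketch on synthetic data; illustrative only, not code from the
# review being summarized.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 500 synthetic "configurations" in 50 dimensions with two dominant directions
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 50))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 50))

pca = PCA(n_components=5)
Z = pca.fit_transform(X)  # low-dimensional projection of each sample
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```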
Morales-Navarrete, Hernán; Segovia-Miranda, Fabián; Klukowski, Piotr; Meyer, Kirstin; Nonaka, Hidenori; Marsico, Giovanni; Chernykh, Mikhail; Kalaidzidis, Alexander; Zerial, Marino; Kalaidzidis, Yannis
2015-01-01
A prerequisite for the systems biology analysis of tissues is an accurate digital three-dimensional reconstruction of tissue structure based on images of markers covering multiple scales. Here, we designed a flexible pipeline for the multi-scale reconstruction and quantitative morphological analysis of tissue architecture from microscopy images. Our pipeline includes newly developed algorithms that address specific challenges of thick dense tissue reconstruction. Our implementation allows for a flexible workflow, scalable to high-throughput analysis and applicable to various mammalian tissues. We applied it to the analysis of liver tissue and extracted quantitative parameters of sinusoids, bile canaliculi and cell shapes, recognizing different liver cell types with high accuracy. Using our platform, we uncovered an unexpected zonation pattern of hepatocytes with different size, nuclei and DNA content, thus revealing new features of liver tissue organization. The pipeline also proved effective to analyse lung and kidney tissue, demonstrating its generality and robustness. DOI: http://dx.doi.org/10.7554/eLife.11214.001 PMID:26673893
Machine learning and data science in soft materials engineering.
Ferguson, Andrew L
2018-01-31
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by 'de-jargonizing' data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
Inertial-ordering-assisted droplet microfluidics for high-throughput single-cell RNA-sequencing.
Moon, Hui-Sung; Je, Kwanghwi; Min, Jae-Woong; Park, Donghyun; Han, Kyung-Yeon; Shin, Seung-Ho; Park, Woong-Yang; Yoo, Chang Eun; Kim, Shin-Hyun
2018-02-27
Single-cell RNA-seq reveals the cellular heterogeneity inherent in a population of cells, which is very important in many clinical and research applications. Recent advances in droplet microfluidics have achieved the automatic isolation, lysis, and labeling of single cells in droplet compartments without complex instrumentation. However, barcoding errors caused by multiple beads in a droplet during cell encapsulation, and the insufficient throughput that results from keeping the bead concentration low to avoid such multiple-bead encapsulation, remain important challenges for precise and efficient expression profiling of single cells. In this study, we developed a new droplet-based microfluidic platform that significantly improves throughput while reducing barcoding errors through deterministic encapsulation of inertially ordered beads. Highly concentrated beads containing oligonucleotide barcodes were spontaneously ordered in a spiral channel by an inertial effect and were encapsulated in droplets one-by-one, while cells were simultaneously encapsulated in the droplets. The deterministic encapsulation of beads resulted in a high fraction of single-bead droplets and rare multiple-bead droplets even though the bead concentration was increased to 1000 μl⁻¹, which diminished barcoding errors and enabled accurate high-throughput barcoding. We successfully validated our device with single-cell RNA-seq. In addition, we found that multiple beads in a droplet, generated using a normal Drop-Seq device with a high concentration of beads, underestimated transcript numbers and overestimated cell numbers. This accurate high-throughput platform can expand the capability and practicality of Drop-Seq in single-cell analysis.
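The advantage of deterministic, inertially ordered bead loading over random encapsulation can be seen from simple Poisson statistics, which govern random loading. The sketch below computes the expected fractions of empty, single-bead and multiple-bead droplets for a few assumed mean occupancies; it is an illustration of the general argument, not the authors' model or data.

```python
# Poisson occupancy statistics for random bead encapsulation (illustration only).
import math

def occupancy_fractions(lam):
    """lam: assumed mean number of beads per droplet under random loading."""
    p0 = math.exp(-lam)           # empty droplets
    p1 = lam * math.exp(-lam)     # exactly one bead
    return p0, p1, 1.0 - p0 - p1  # remainder: two or more beads (barcoding errors)

for lam in (0.1, 0.3, 1.0):
    p0, p1, pm = occupancy_fractions(lam)
    print(f"lambda={lam:3.1f}  empty={p0:5.2f}  single={p1:5.2f}  multiple={pm:5.2f}")
```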
NASA Technical Reports Server (NTRS)
Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.
2012-01-01
A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep-space optical communication link capability, and is unique in its scalable design to achieve real-time SCPPM decoding at the aggregate data rate.
SCTP as scalable video coding transport
NASA Astrophysics Data System (ADS)
Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.
2013-12-01
This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of scalable video coding (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC permits the bitstream to be split easily into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies built on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
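Peak signal-to-noise ratio, one of the performance measures listed above, is computed from the mean squared error between decoded and original frames; the definition below is the standard one for 8-bit video and is included only as a reminder of the metric, not as a detail of the cited evaluation:

\[
\mathrm{PSNR} \;=\; 10\,\log_{10}\!\left(\frac{255^{2}}{\mathrm{MSE}}\right),
\qquad
\mathrm{MSE} \;=\; \frac{1}{N}\sum_{i=1}^{N}\left(x_i-\hat{x}_i\right)^{2}.
\]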
Yeo, David C; Wiraja, Christian; Zhou, Yingying; Tay, Hui Min; Xu, Chenjie; Hou, Han Wei
2015-09-23
Engineering cells with active-ingredient-loaded micro/nanoparticles is becoming increasingly popular for imaging and therapeutic applications. A critical yet inadequately addressed issue during its implementation concerns the significant number of particles that remain unbound following the engineering process, which inadvertently generate signals and impart transformative effects onto neighboring nontarget cells. Here we demonstrate that those unbound micro/nanoparticles remaining in solution can be efficiently separated from the particle-labeled cells by implementing a fast, continuous, and high-throughput Dean flow fractionation (DFF) microfluidic device. As proof-of-concept, we applied the DFF microfluidic device for buffer exchange to sort labeled suspension cells (THP-1) from unbound fluorescent dye and dye-loaded micro/nanoparticles. Compared to conventional centrifugation, the depletion efficiency of free dyes or particles was improved 20-fold and the mislabeling of nontarget bystander cells by free particles was minimized. The microfluidic device was adapted to further accommodate heterogeneous-sized mesenchymal stem cells (MSCs). Complete removal of unbound nanoparticles using DFF led to the usage of engineered MSCs without exerting off-target transformative effects on the functional properties of neighboring endothelial cells. Apart from its effectiveness in removing free particles, this strategy is also efficient and scalable. It could continuously process cell solutions with concentrations up to 10⁷ cells·mL⁻¹ (cell densities commonly encountered during cell therapy) without observable loss of performance. Successful implementation of this technology is expected to pave the way for interference-free clinical application of micro/nanoparticle engineered cells.
2014-01-01
Background: Next-generation DNA sequencing (NGS) technologies have made huge impacts in many fields of biological research, but especially in evolutionary biology. One area where NGS has shown potential is for high-throughput sequencing of complete mtDNA genomes (of humans and other animals). Despite the increasing use of NGS technologies and a better appreciation of their importance in answering biological questions, there remain significant obstacles to the successful implementation of NGS-based projects, especially for new users. Results: Here we present an ‘A to Z’ protocol for obtaining complete human mitochondrial (mtDNA) genomes – from DNA extraction to consensus sequence. Although designed for use on humans, this protocol could also be used to sequence small, organellar genomes from other species, and also nuclear loci. This protocol includes DNA extraction, PCR amplification, fragmentation of PCR products, barcoding of fragments, sequencing using the 454 GS FLX platform, and a complete bioinformatics pipeline (primer removal, reference-based mapping, output of coverage plots and SNP calling). Conclusions: All steps in this protocol are designed to be straightforward to implement, especially for researchers who are undertaking next-generation sequencing for the first time. The molecular steps are scalable to large numbers (hundreds) of individuals and all steps post-DNA extraction can be carried out in 96-well plate format. Also, the protocol has been assembled so that individual ‘modules’ can be swapped out to suit available resources. PMID:24460871
Rameez, Shahid; Mostafa, Sigma S; Miller, Christopher; Shukla, Abhinav A
2014-01-01
Decreasing the timeframe for cell culture process development has been a key goal toward accelerating biopharmaceutical development. Advanced Microscale Bioreactors (ambr™) is an automated micro-bioreactor system with miniature single-use bioreactors with a 10-15 mL working volume controlled by an automated workstation. This system was compared to conventional bioreactor systems in terms of its performance for the production of a monoclonal antibody in a recombinant Chinese Hamster Ovary cell line. The miniaturized bioreactor system was found to produce cell culture profiles that matched across scales to 3 L, 15 L, and 200 L stirred tank bioreactors. The processes used in this article involve complex feed formulations, perturbations, and strict process control within the design space, which are in-line with processes used for commercial scale manufacturing of biopharmaceuticals. Changes to important process parameters in ambr™ resulted in predictable cell growth, viability and titer changes, which were in good agreement to data from the conventional larger scale bioreactors. ambr™ was found to successfully reproduce variations in temperature, dissolved oxygen (DO), and pH conditions similar to the larger bioreactor systems. Additionally, the miniature bioreactors were found to react well to perturbations in pH and DO through adjustments to the Proportional and Integral control loop. The data presented here demonstrates the utility of the ambr™ system as a high throughput system for cell culture process development.
Radio-over-fiber using an optical antenna based on Rydberg states of atoms
NASA Astrophysics Data System (ADS)
Deb, A. B.; Kjærgaard, N.
2018-05-01
We provide an experimental demonstration of a direct fiber-optic link for RF transmission ("radio-over-fiber") using a sensitive optical antenna based on a rubidium vapor cell. The scheme relies on measuring the transmission of laser light at an electromagnetically induced transparency resonance that involves highly excited Rydberg states. By dressing pairs of Rydberg states using microwave fields that act as local oscillators, we encoded RF signals in the optical frequency domain. The light carrying the information is linked via a virtually lossless optical fiber to a photodetector where the signal is retrieved. We demonstrate a signal bandwidth in excess of 1 MHz limited by the available coupling laser power and atomic optical density. Our sensitive, non-metallic and readily scalable optical antenna for microwaves allows extremely low levels of optical power (~1 μW) throughput in the fiber-optic link. It offers a promising future platform for emerging wireless network infrastructures.
Tanaka, Yukinori; Kasahara, Ken; Hirose, Yutaka; Murakami, Kiriko; Kugimiya, Rie; Ochi, Kozo
2013-07-01
A subset of rifampin resistance (rpoB) mutations result in the overproduction of antibiotics in various actinomycetes, including Streptomyces, Saccharopolyspora, and Amycolatopsis, with H437Y and H437R rpoB mutations effective most frequently. Moreover, the rpoB mutations markedly activate (up to 70-fold at the transcriptional level) the cryptic/silent secondary metabolite biosynthetic gene clusters of these actinomycetes, which are not activated under general stressful conditions, with the exception of treatment with rare earth elements. Analysis of the metabolite profile demonstrated that the rpoB mutants produced many metabolites, which were not detected in the wild-type strains. This approach utilizing rifampin resistance mutations is characterized by its feasibility and potential scalability to high-throughput studies and would be useful to activate and to enhance the yields of metabolites for discovery and biochemical characterization.
Pharmaceutical spray drying: solid-dose process technology platform for the 21st century.
Snyder, Herman E
2012-07-01
Requirements for precise control of solid-dosage particle properties created with a scalable process technology are continuing to expand in the pharmaceutical industry. Alternate methods of drug delivery, limited active drug substance solubility and the need to improve drug product stability under room-temperature conditions are some of the pharmaceutical applications that can benefit from spray-drying technology. Used widely for decades in other industries with production rates up to several tons per hour, spray drying is now expanding in pharmaceutical use beyond excipient production and solvent removal from crystalline material. Creation of active pharmaceutical-ingredient particles with combinations of unique target properties is now more common. This review of spray-drying technology fundamentals provides a brief perspective on the internal process 'mechanics', which combine with both the liquid and solid properties of a formulation to enable high-throughput, continuous manufacturing of precision powder properties.
Functional CAR models for large spatially correlated functional datasets.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S
2016-01-01
We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.
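For orientation, the conditional autoregressive building block underlying the model has the familiar form in which the value at one areal unit, given its neighbours, is normally distributed around a weighted neighbourhood mean. The specification below is the standard CAR prior, not the authors' exact functional extension, in which these spatial parameters are additionally allowed to vary across the functional domain:

\[
\phi_i \mid \phi_{-i} \;\sim\; \mathcal{N}\!\left(\rho\,\frac{\sum_{j} w_{ij}\,\phi_j}{w_{i+}},\; \frac{\tau^{2}}{w_{i+}}\right),
\qquad
w_{i+} \;=\; \sum_{j} w_{ij},
\]

where \(w_{ij}\) are adjacency weights on the lattice, \(\rho\) controls the strength of spatial dependence and \(\tau^{2}\) is a conditional variance.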
Low-cost fabrication technologies for nanostructures: state-of-the-art and potential
NASA Astrophysics Data System (ADS)
Santos, A.; Deen, M. J.; Marsal, L. F.
2015-01-01
In the last decade, some low-cost nanofabrication technologies used in several disciplines of nanotechnology have demonstrated promising results in terms of versatility and scalability for producing innovative nanostructures. While conventional nanofabrication technologies such as photolithography are and will be an important part of nanofabrication, some low-cost nanofabrication technologies have demonstrated outstanding capabilities for large-scale production, providing high throughputs with acceptable resolution and broad versatility. Some of these nanotechnological approaches are reviewed in this article, providing information about the fundamentals, limitations and potential future developments towards nanofabrication processes capable of producing a broad range of nanostructures. Furthermore, in many cases, these low-cost nanofabrication approaches can be combined with traditional nanofabrication technologies. This combination is considered a promising way of generating innovative nanostructures suitable for a broad range of applications such as in opto-electronics, nano-electronics, photonics, sensing, biotechnology or medicine.
Protein footprinting by pyrite shrink-wrap laminate.
Leser, Micheal; Pegan, Jonathan; El Makkaoui, Mohammed; Schlatterer, Joerg C; Khine, Michelle; Law, Matt; Brenowitz, Michael
2015-04-07
The structure of macromolecules and their complexes dictate their biological function. In "footprinting", the solvent accessibility of the residues that constitute proteins, DNA and RNA can be determined from their reactivity to an exogenous reagent such as the hydroxyl radical (·OH). While ·OH generation for protein footprinting is achieved by radiolysis, photolysis and electrochemistry, we present a simpler solution. A thin film of pyrite (cubic FeS2) nanocrystals deposited onto a shape memory polymer (commodity shrink-wrap film) generates sufficient ·OH via Fenton chemistry for oxidative footprinting analysis of proteins. We demonstrate that varying either time or H2O2 concentration yields the required ·OH dose-oxidation response relationship. A simple and scalable sample handling protocol is enabled by thermoforming the "pyrite shrink-wrap laminate" into a standard microtiter plate format. The low cost and malleability of the laminate facilitates its integration into high throughput screening and microfluidic devices.
Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs
Archibald, R.; Evans, K. J.; Salinger, A.
2015-06-01
The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable timestepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase the computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitate the performance improvements.
ATLAST and JWST Segmented Telescope Design Considerations
NASA Technical Reports Server (NTRS)
Feinberg, Lee
2016-01-01
To the extent it makes sense, leverage JWST (James Webb Space Telescope) knowledge, designs, architectures. GSE (Ground Support Equipment) good starting point. Develop a full end-to-end architecture that closes. Try to avoid recreating the wheel except where needed. Optimize from there (mainly for stability and coronagraphy). Develop a scalable design reference mission (9.2 meters). Do just enough work to understand launch break points in aperture size. Demonstrate 10 pm (picometer) stability is achievable on a design reference mission. A really key design driver is the most robust stability possible! Make design compatible with starshades. While segmented coronagraphs with high throughput and large bandpasses are important, make the system serviceable so you can evolve the instruments. Keep it room temperature to minimize the costs associated with cryo. Focus resources on the contrast problem. Start with the architecture and connect it to the technology needs.
Self-Configuration and Self-Optimization Process in Heterogeneous Wireless Networks
Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo
2011-01-01
Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing Protocol (OLSR) and the ad hoc on-demand distance vector (AODV) routing protocol, as well as on the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease HELLO-message traffic as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed. PMID:22346584
Self-configuration and self-optimization process in heterogeneous wireless networks.
Guardalben, Lucas; Villalba, Luis Javier García; Buiati, Fábio; Sobral, João Bosco Mangueira; Camponogara, Eduardo
2011-01-01
Self-organization in Wireless Mesh Networks (WMN) is an emergent research area, which is becoming important due to the increasing number of nodes in a network. Consequently, the manual configuration of nodes is either impossible or highly costly. So it is desirable for the nodes to be able to configure themselves. In this paper, we propose an alternative architecture for self-organization of WMN based on the Optimized Link State Routing Protocol (OLSR) and the ad hoc on-demand distance vector (AODV) routing protocol, as well as on the technology of software agents. We argue that the proposed self-optimization and self-configuration modules increase network throughput, reduce transmission delay and network load, and decrease HELLO-message traffic as the network scales. By simulation analysis, we conclude that the self-optimization and self-configuration mechanisms can significantly improve the performance of the OLSR and AODV protocols in comparison to the baseline protocols analyzed.
Myneni, Sahiti; Cobb, Nathan K; Cohen, Trevor
2016-01-01
Analysis of user interactions in online communities could improve our understanding of health-related behaviors and inform the design of technological solutions that support behavior change. However, to achieve this we would need methods that provide granular perspective, yet are scalable. In this paper, we present a methodology for high-throughput semantic and network analysis of large social media datasets, combining semi-automated text categorization with social network analytics. We apply this method to derive content-specific network visualizations of 16,492 user interactions in an online community for smoking cessation. Performance of the categorization system was reasonable (average F-measure of 0.74, with system-rater reliability approaching rater-rater reliability). The resulting semantically specific network analysis of user interactions reveals content- and behavior-specific network topologies. Implications for socio-behavioral health and wellness platforms are also discussed.
Porous microwells for geometry-selective, large-scale microparticle arrays
NASA Astrophysics Data System (ADS)
Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.
2017-01-01
Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.
NASA Astrophysics Data System (ADS)
Tang, Nicholas C.; Chilkoti, Ashutosh
2016-04-01
Most genes are synthesized using seamless assembly methods that rely on the polymerase chain reaction (PCR). However, PCR of genes encoding repetitive proteins either fails or generates nonspecific products. Motivated by the need to efficiently generate new protein polymers through high-throughput gene synthesis, here we report a codon-scrambling algorithm that enables the PCR-based gene synthesis of repetitive proteins by exploiting the codon redundancy of amino acids and finding the least-repetitive synonymous gene sequence. We also show that the codon-scrambling problem is analogous to the well-known travelling salesman problem, and obtain an exact solution to it by using De Bruijn graphs and a modern mixed integer linear programme solver. As experimental proof of the utility of this approach, we use it to optimize the synthetic genes for 19 repetitive proteins, and show that the gene fragments are amenable to PCR-based gene assembly and recombinant expression.
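The paper's exact method relies on De Bruijn graphs and a mixed integer linear program. As a much simpler illustration of the underlying idea, choosing synonymous codons so that the DNA sequence is as non-repetitive as possible, the sketch below greedily picks, at each position, the codon whose trailing k-mer has occurred least often so far. The truncated codon table, the k-mer scoring rule and the example repeat are hypothetical stand-ins, not the algorithm from the article.

```python
# Greedy codon-scrambling sketch (illustrative heuristic only; the article
# describes an exact De Bruijn graph / MILP formulation instead).
CODONS = {                       # truncated synonymous-codon table (RNA codons)
    "G": ["GGU", "GGC", "GGA", "GGG"],
    "V": ["GUU", "GUC", "GUA", "GUG"],
    "P": ["CCU", "CCC", "CCA", "CCG"],
    "A": ["GCU", "GCC", "GCA", "GCG"],
}

def scramble(protein, k=6):
    """Pick synonymous codons so that trailing k-mers repeat as little as possible."""
    seq = ""
    for aa in protein:
        best, best_score = None, None
        for codon in CODONS[aa]:
            cand = seq + codon
            kmer = cand[-k:]
            score = cand[:-1].count(kmer) if len(cand) >= k else 0
            if best_score is None or score < best_score:
                best, best_score = codon, score
        seq += best
    return seq

print(scramble("GVGVP" * 4))   # hypothetical elastin-like pentapeptide repeat
```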
Bioinformatics and Microarray Data Analysis on the Cloud.
Calabrese, Barbara; Cannataro, Mario
2016-01-01
High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that needs large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, on-demand anytime and anywhere access to resources and applications, and thus it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patient data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patient data.
NASA Astrophysics Data System (ADS)
Hegde, Ganapathi; Vaya, Pukhraj
2013-10-01
This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9, 7) filter bank. This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
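The architecture above is built on the 1-D lifting scheme, whose split/predict/update structure is what keeps the storage requirement low. As a minimal illustration, the sketch below performs one level of a Haar-style lifting transform; it is deliberately simpler than the four-step Daubechies (9, 7) lifting factorization actually implemented in the article.

```python
# One level of a 1-D lifting wavelet transform with Haar-style predict/update
# steps (illustration only; the article implements the (9, 7) filter bank,
# which uses four lifting steps with different coefficients).
def haar_lifting_forward(x):
    even = x[0::2]
    odd = x[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict step
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return approx, detail

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_lifting_forward(signal)
print("approximation:", approx)
print("detail:       ", detail)
```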
Membrane-less microfiltration using inertial microfluidics
Warkiani, Majid Ebrahimi; Tay, Andy Kah Ping; Guan, Guofeng; Han, Jongyoon
2015-01-01
Microfiltration is a ubiquitous and often crucial part of many industrial processes, including biopharmaceutical manufacturing. Yet, all existing filtration systems suffer from the issue of membrane clogging, which fundamentally limits the efficiency and reliability of the filtration process. Herein, we report the development of a membrane-less microfiltration system that massively parallelizes inertial microfluidics to achieve macroscopic volume processing rates (~500 mL/min). We demonstrate systems engineered for the filtration of CHO (10–20 μm) and yeast (3–5 μm) cells, two of the main cell types used in large-scale bioreactors. Our proposed system can replace existing filtration membranes and provides passive (no external force fields), continuous filtration, thus eliminating the need for membrane replacement. This platform has the desirable combination of high throughput, low cost, and scalability, making it suitable for a myriad of microfiltration applications and industrial purposes. PMID:26154774
High-performance, scalable optical network-on-chip architectures
NASA Astrophysics Data System (ADS)
Tan, Xianfang
The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) became a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, and a method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes advantage of GWOR for optical communication and of BFT for non-uniform traffic and three-dimensional (3D) implementation. 5. A cycle-accurate NoC simulator is developed to evaluate the performance of the proposed HONoC architectures. It is a comprehensive platform that can simulate both electronic and optical NoCs. HONoC architectures of different sizes are evaluated in terms of throughput, latency and energy dissipation. Simulation results confirm that HONoC achieves good network performance with lower power consumption.
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when the input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
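As a toy illustration of the multiple-kernel idea, combining heterogeneous data sources through a weighted sum of kernels and then scoring candidate genes by their similarity to known disease genes, the sketch below uses random positive semi-definite matrices as stand-in kernels. The weights, scoring rule and data are hypothetical and far simpler than Scuba's semi-supervised margin-distribution optimization.

```python
# Toy multiple-kernel gene-prioritization sketch (hypothetical data; not the
# Scuba algorithm described above).
import numpy as np

rng = np.random.default_rng(1)
n_genes = 200
known_disease = np.arange(10)            # indices of assumed known disease genes

def random_psd_kernel(n):
    A = rng.normal(size=(n, n))
    return A @ A.T / n                   # symmetric positive semi-definite matrix

kernels = [random_psd_kernel(n_genes) for _ in range(3)]  # three data sources
weights = np.array([0.5, 0.3, 0.2])                       # assumed kernel weights

K = sum(w * k for w, k in zip(weights, kernels))          # combined kernel
scores = K[:, known_disease].mean(axis=1)                 # similarity to seed genes
print("top 10 candidates:", np.argsort(-scores)[:10])
```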
Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang
2017-04-01
Among all of the high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions. This detection throughput cannot meet the demands of high-throughput detection, such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, which showed that ME-qPCR sensitivity is higher than that of the original qPCR. The absolute limit of detection for ME-qPCR could reach levels as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods. The more stable amplification results, compared to qPCR, showed that ME-qPCR was suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression analysis of food or agricultural samples. Graphical abstract: For the first-step amplification, four primers (A, B, C, and D) are added to the reaction volume, generating four kinds of amplicons. All four amplicons can be regarded as targets of the second-step PCR. For the second-step amplification, three parallel reactions are run for the final evaluation, from which the final amplification curves and melting curves are obtained.
USDA-ARS?s Scientific Manuscript database
A scalable and modular LED illumination dome for microscopic scientific photography is described and illustrated, and methods for constructing such a dome are detailed. Dome illumination for insect specimens has become standard practice across the field of insect systematics, but many dome designs ...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Management Practices § 63.11118 Requirements for facilities with monthly throughput of 100,000 gallons of...)(1) or paragraph (b)(2) of this section. (1) Each management practice in Table 1 to this subpart that...) Operates using management practices at least as stringent as those in Table 1 to this subpart. (ii) Your...
A scalable self-priming fractal branching microchannel net chip for digital PCR.
Zhu, Qiangyuan; Xu, Yanan; Qiu, Lin; Ma, Congcong; Yu, Bingwen; Song, Qi; Jin, Wei; Jin, Qinhan; Liu, Jinyu; Mu, Ying
2017-05-02
As an absolute quantification method at the single-molecule level, digital PCR has been widely used in many bioresearch fields, such as next generation sequencing, single cell analysis, gene editing detection and so on. However, existing digital PCR methods still have some disadvantages, including high cost, sample loss, and complicated operation. In this work, we develop an exquisite scalable self-priming fractal branching microchannel net digital PCR chip. This chip with a special design inspired by natural fractal-tree systems has an even distribution and 100% compartmentalization of the sample without any sample loss, which is not available in existing chip-based digital PCR methods. A special 10 nm nano-waterproof layer was created to prevent the solution from evaporating. A vacuum pre-packaging method called self-priming reagent introduction is used to passively drive the reagent flow into the microchannel nets, so that this chip can realize sequential reagent loading and isolation within a couple of minutes, which is very suitable for point-of-care detection. When the number of positive microwells stays in the range of 100 to 4000, the relative uncertainty is below 5%, which means that one panel can detect an average of 101 to 15 374 molecules by the Poisson distribution. This chip is proved to have an excellent ability for single molecule detection and quantification of low expression of hHF-MSC stem cell markers. Due to its potential for high throughput, high density, low cost, lack of sample and reagent loss, self-priming even compartmentalization and simple operation, we envision that this device will significantly expand and extend the application range of digital PCR involving rare samples, liquid biopsy detection and point-of-care detection with higher sensitivity and accuracy.
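Digital PCR quantification rests on Poisson statistics: the fraction of positive partitions gives the mean number of target molecules per partition and hence the total copy number. The sketch below implements this standard calculation; assuming, for illustration, a panel of 4096 partitions, it roughly reproduces the 101 to 15 374 molecule range quoted above for 100 to 4000 positive microwells.

```python
# Standard digital PCR Poisson quantification (panel size assumed for illustration).
import math

def dpcr_copies(n_positive, n_partitions):
    """Estimate total target copies from the number of positive partitions."""
    p = n_positive / n_partitions
    lam = -math.log(1.0 - p)      # mean copies per partition (Poisson)
    return lam * n_partitions     # estimated total copies loaded

for positives in (100, 1000, 4000):
    total = dpcr_copies(positives, 4096)   # assumed 4096-partition panel
    print(f"{positives:5d} positive wells  ->  ~{total:8.0f} molecules")
```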
Hyperspectral Cubesat Constellation for Rapid Natural Hazard Response
NASA Astrophysics Data System (ADS)
Mandl, D.; Huemmrich, K. F.; Ly, V. T.; Handy, M.; Ong, L.; Crum, G.
2015-12-01
With the advent of high performance space networks that provide total coverage for Cubesats, the paradigm of low-cost, high-temporal-coverage hyperspectral instruments becomes more feasible. The combination of ground cloud computing resources, high-performance low-power onboard processing, total coverage for the cubesats, and social media provides an opportunity for an architecture that delivers cost-effective hyperspectral data products for natural hazard response and decision support. This paper describes a series of pathfinder efforts to create a scalable Intelligent Payload Module (IPM) that has flown on a variety of airborne vehicles, including Cessna airplanes, Citation jets, and a helicopter, and will fly on an Unmanned Aerial System (UAS) hexacopter to monitor natural phenomena. The IPMs developed thus far were built on platforms that emulate a satellite environment, using real satellite flight software and real ground software. In addition, science processing software has been developed that performs hyperspectral processing onboard using various parallel processing techniques, enabling the creation of onboard hyperspectral data products while consuming little power. A cubesat design was developed that is low cost and scalable to larger constellations, and can thus provide daily hyperspectral observations for any spot on Earth. The design was based on the existing IPM prototypes and metrics developed over the past few years, and on a shrunken IPM that can sustain up to 800 Mbps throughput. This constellation of hyperspectral cubesats could constantly monitor spectra with spectral angle mappers after Level 0, Level 1 radiometric correction, and atmospheric correction processing. This provides the opportunity for daily monitoring of any spot on Earth at 30 meter resolution, which is not available today.
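The onboard screening step mentioned above relies on the spectral angle mapper (SAM), which scores how closely a measured pixel spectrum matches a reference spectrum by the angle between the two vectors. The sketch below is a generic illustration of that comparison; the cube dimensions, reference spectrum, and detection threshold are assumptions for the example, not values from the paper.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference spectrum.

    Smaller angles mean a closer spectral match; the measure is
    insensitive to overall illumination scaling.
    """
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference)
    )
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative use: flag pixels whose angle to a hypothetical reference
# spectrum falls below a chosen threshold.
rng = np.random.default_rng(0)
cube = rng.random((100, 100, 60))   # toy hyperspectral cube (rows, cols, bands)
reference = rng.random(60)          # hypothetical reference spectrum
angles = np.apply_along_axis(spectral_angle, 2, cube, reference)
mask = angles < 0.1                 # assumed detection threshold
print(mask.sum(), "pixels flagged")
```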
A Robust Scalable Transportation System Concept
NASA Technical Reports Server (NTRS)
Hahn, Andrew; DeLaurentis, Daniel
2006-01-01
This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration not only of the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows ways to optimize the SoS at the network level, determining the best topology (i.e., configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of agent-based modeling and network theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.
The ATLAS Event Service: A new approach to event processing
NASA Astrophysics Data System (ADS)
Calafiura, P.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.
2015-12-01
The ATLAS Event Service (ES) implements a new fine grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine grained outputs that give ES based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
A frequency and sensitivity tunable microresonator array for high-speed quantum processor readout
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whittaker, J. D., E-mail: jwhittaker@dwavesys.com; Swenson, L. J.; Volkmann, M. H.
Superconducting microresonators have been successfully utilized as detection elements for a wide variety of applications. With multiplexing factors exceeding 1000 detectors per transmission line, they are the most scalable low-temperature detector technology demonstrated to date. For high-throughput applications, fewer detectors can be coupled to a single wire but utilize a larger per-detector bandwidth. For all existing designs, fluctuations in fabrication tolerances result in a non-uniform shift in resonance frequency and sensitivity, which ultimately limits the efficiency of bandwidth utilization. Here, we present the design, implementation, and initial characterization of a superconducting microresonator readout integrating two tunable inductances per detector. We demonstrate that these tuning elements provide independent control of both the detector frequency and sensitivity, allowing us to maximize the transmission line bandwidth utilization. Finally, we discuss the integration of these detectors in a multilayer fabrication stack for high-speed readout of the D-Wave quantum processor, highlighting the use of control and routing circuitry composed of single-flux-quantum loops to minimize the number of control wires at the lowest temperature stage.
Self-leveling 2D DPN probe arrays
NASA Astrophysics Data System (ADS)
Haaheim, Jason R.; Val, Vadim; Solheim, Ed; Bussan, John; Fragala, J.; Nelson, Mike
2010-02-01
Dip Pen Nanolithography® (DPN®) is a direct-write scanning probe-based technique which operates under ambient conditions, making it suitable for depositing a wide range of biological and inorganic materials. Precision nanoscale deposition is a fundamental requirement for advancing nanoscale technology in commercial applications, and tailoring chemical composition and surface structure on the sub-100 nm scale benefits researchers in areas ranging from cell adhesion to cell signaling and biomimetic membranes. These capabilities naturally suggest a "Desktop Nanofab" concept - a turnkey system that allows a non-expert user to rapidly create high-resolution, scalable nanostructures drawing upon well-characterized ink and substrate pairings. In turn, this system is fundamentally supported by a portfolio of MEMS devices tailored for microfluidic ink delivery, directed placement of nanoscale materials, and cm^2 tip arrays for high-throughput nanofabrication. Massively parallel two-dimensional nanopatterning is now commercially available via NanoInk's 2D nano PrintArray™, making DPN a high-throughput (>3×10^7 μm^2 per hour), flexible and versatile method for precision nanoscale pattern formation. However, cm^2 arrays of nanoscopic tips introduce the nontrivial problem of getting them all evenly touching the surface to ensure homogeneous deposition; this requires extremely precise leveling of the array. Herein, we describe how we have made the process simple by way of a self-leveling gimbal attachment, coupled with semi-automated software leveling routines which bring the cm^2 chip to within 0.002 degrees of co-planarity. This excellent co-planarity yields highly homogeneous features across a square centimeter, with <6% feature size standard deviation. We have engineered the devices to be easy to use, wire-free, and fully integrated with both of our patterning tools: the DPN 5000 and the NLP 2000.
High-throughput ultraviolet photoacoustic microscopy with multifocal excitation
NASA Astrophysics Data System (ADS)
Imai, Toru; Shi, Junhui; Wong, Terence T. W.; Li, Lei; Zhu, Liren; Wang, Lihong V.
2018-03-01
Ultraviolet photoacoustic microscopy (UV-PAM) is a promising intraoperative tool for surgical margin assessment (SMA), one that can provide label-free histology-like images with high resolution. In this study, using a microlens array and a one-dimensional (1-D) array ultrasonic transducer, we developed a high-throughput multifocal UV-PAM (MF-UV-PAM). Our new system achieved a 1.6 ± 0.2 μm lateral resolution and produced images 40 times faster than the previously developed point-by-point scanning UV-PAM. MF-UV-PAM provided a readily comprehensible photoacoustic image of a mouse brain slice with specific absorption contrast in ~16 min, highlighting cell nuclei. Individual cell nuclei could be clearly resolved, showing its practical potential for intraoperative SMA.
Projecting 2D gene expression data into 3D and 4D space.
Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D
2007-04-01
Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize OpenGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
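At the pixel level, the UV mapping described above reduces to looking up a color in a 2D expression image at fractional (u, v) coordinates attached to each model vertex. The sketch below shows that lookup with bilinear interpolation; the array shapes and sample coordinates are illustrative assumptions, not data from the database itself.

```python
import numpy as np

def sample_uv(texture: np.ndarray, u: float, v: float) -> np.ndarray:
    """Bilinearly sample a texture image at normalized (u, v) in [0, 1].

    u maps to columns and v to rows, as in typical UV conventions;
    the four surrounding texels are blended by their fractional weights.
    """
    h, w = texture.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

# Toy example: a 4x4 grayscale "expression image" sampled at one vertex's UV.
tex = np.arange(16, dtype=float).reshape(4, 4)
print(sample_uv(tex, u=0.25, v=0.75))
```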
ECHO: A reference-free short-read error correction algorithm
Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.
2011-01-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need for a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
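The selection problem described above can be illustrated with a simple greedy variant: repeatedly add the candidate time point that most reduces the error of reconstructing the densely sampled pilot genes by interpolation through the chosen points. This is only a sketch of the general idea under that assumption; the published TPS objective and optimization may differ in detail.

```python
import numpy as np

def greedy_time_points(times, expr, k):
    """Pick k time points so that linear interpolation through them
    best reconstructs the densely sampled expression matrix.

    times : (T,) sampled time grid; expr : (genes, T) pilot expression.
    The endpoints are always kept so interpolation is well defined.
    """
    chosen = {0, len(times) - 1}
    while len(chosen) < k:
        best, best_err = None, np.inf
        for cand in range(len(times)):
            if cand in chosen:
                continue
            pts = sorted(chosen | {cand})
            recon = np.vstack([
                np.interp(times, times[pts], row[pts]) for row in expr
            ])
            err = np.mean((recon - expr) ** 2)
            if err < best_err:
                best, best_err = cand, err
        chosen.add(best)
    return sorted(chosen)

# Toy pilot data: 20 genes sampled at 40 time points; keep 6 points.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 40)
genes = np.sin(2 * np.pi * np.outer(rng.uniform(0.5, 2, 20), t))
print(greedy_time_points(t, genes, k=6))
```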
High temperature semiconductor diode laser pumps for high energy laser applications
NASA Astrophysics Data System (ADS)
Campbell, Jenna; Semenic, Tadej; Guinn, Keith; Leisher, Paul O.; Bhunia, Avijit; Mashanovitch, Milan; Renner, Daniel
2018-02-01
Existing thermal management technologies for diode laser pumps place a significant load on the size, weight, and power consumption of high-power solid-state and fiber laser systems, thus making current laser systems very large, heavy, and inefficient in many important practical applications. To mitigate this thermal management burden, it is desirable for diode pumps to operate efficiently at high heat sink temperatures. In this work, we have developed a scalable cooling architecture, based on jet-impingement technology with industrial coolant, for efficient cooling of diode laser bars. We have demonstrated 60% electrical-to-optical efficiency from a 9xx nm two-bar laser stack operating with propylene-glycol-water coolant at a 50 °C coolant temperature. To our knowledge, this is the highest efficiency achieved from a diode stack using 50 °C industrial fluid coolant. The output power is greater than 100 W per bar. Stacks with additional laser bars are currently in development, as this cooler architecture is scalable to a 1 kW system. This work will enable compact and robust fiber-coupled diode pump modules for high energy laser applications.
Systems 2020: Strategic Initiative
2010-08-29
research areas that enable agile, assured, efficient, and scalable systems engineering approaches to support the development of these systems. This...To increase development efficiency and ensure flexible solutions in the field, systems engineers need powerful, agile, interoperable, and scalable...design and development will be transformed as a result of Systems 2020, along with complementary enabling acquisition practice improvements initiated in
Shen, Daozhi; Zou, Guisheng; Liu, Lei; Zhao, Wenzheng; Wu, Aiping; Duley, Walter W; Zhou, Y Norman
2018-02-14
Miniaturization of energy storage devices can significantly decrease the overall size of electronic systems. However, this miniaturization is limited by the reduction of electrode dimensions and the reproducible transfer of small electrolyte drops. This paper first reports a simple, scalable direct writing method for the production of ultraminiature microsupercapacitor (MSC) electrodes, based on femtosecond laser reduced graphene oxide (fsrGO) interlaced pads. These pads, separated by 2 μm spacing, are 100 μm long and 8 μm wide. A second stage involves the accurate transfer of an electrolyte microdroplet on top of each individual electrode, which avoids any interference of the electrolyte with other electronic components. Abundant in-plane mesopores in fsrGO induced by the fs laser, together with the ultrashort interelectrode spacing, enable the MSCs to exhibit a high specific capacitance (6.3 mF cm^-2 and 105 F cm^-3) and ∼100% retention after 1000 cycles. An all-graphene resistor-capacitor (RC) filter is also constructed by combining the MSC and a fsrGO resistor, which is confirmed to exhibit highly enhanced performance characteristics. This new hybrid technique combining fs laser direct writing and precise microdroplet transfer easily enables scalable production of ultraminiature MSCs, which is believed to be significant for the practical application of microsupercapacitors in microelectronic systems.
Wang, Lanfang; Song, Chuang; Shi, Yi; Dang, Liyun; Jin, Ying; Jiang, Hong; Lu, Qingyi; Gao, Feng
2016-04-11
Two-dimensional nanosheets with high specific surface areas and fascinating physical and chemical properties have attracted tremendous interest because of their promising potential in both fundamental research and practical applications. However, the problem of developing a universal strategy with a facile and cost-effective synthesis process for multi-type ultrathin 2D nanostructures remains unresolved. Herein, we report a generalized low-temperature fabrication of scalable multi-type 2D nanosheets including metal hydroxides (such as Ni(OH)2, Co(OH)2, Cd(OH)2, and Mg(OH)2), metal oxides (such as ZnO and Mn3O4), and layered mixed transition-metal hydroxides (Ni-Co LDH, Ni-Fe LDH, Co-Fe LDH, and Ni-Co-Fe layered ternary hydroxides) through the rational employment of a green soft template. The synthesized crystalline inorganic nanosheets possess confined thickness, resulting in ultrahigh surface atom ratios and chemically reactive facets. Upon evaluation as electrode materials for pseudocapacitors, the Ni-Co LDH nanosheets exhibit a high specific capacitance of 1087 F g^-1 at a current density of 1 A g^-1, and excellent stability, with 103% retention after 500 cycles. This strategy is facile and scalable for the production of high-quality ultrathin crystalline inorganic nanosheets, with the possibility of extension to the preparation of other complex nanosheets. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Predicting organ toxicity using in vitro bioactivity data and chemical structure
Animal testing alone cannot practically evaluate the health hazard posed by tens of thousands of environmental chemicals. Computational approaches together with high-throughput experimental data may provide more efficient means to predict chemical toxicity. Here, we use a superv...
NASA Astrophysics Data System (ADS)
Hu, Huan; Siu, Vince S.; Gifford, Stacey M.; Kim, Sungcheol; Lu, Minhua; Meyer, Pablo; Stolovitzky, Gustavo A.
2017-12-01
The recently discovered bactericidal properties of nanostructures on wings of insects such as cicadas and dragonflies have inspired the development of similar nanostructured surfaces for antibacterial applications. Since most antibacterial applications require nanostructures covering a considerable amount of area, a practical fabrication method needs to be cost-effective and scalable. However, most reported nanofabrication methods require either expensive equipment or a high temperature process, limiting cost efficiency and scalability. Here, we report a simple, fast, low-cost, and scalable antibacterial surface nanofabrication methodology. Our method is based on metal-assisted chemical etching that only requires etching a single crystal silicon substrate in a mixture of silver nitrate and hydrofluoric acid for several minutes. We experimentally studied the effects of etching time on the morphology of the silicon nanospikes and the bactericidal properties of the resulting surface. We discovered that 6 minutes of etching results in a surface containing silicon nanospikes with optimal geometry. The bactericidal properties of the silicon nanospikes were supported by bacterial plating results, fluorescence images, and scanning electron microscopy images.
Phonon-based scalable platform for chip-scale quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reinke, Charles M.; El-Kady, Ihab
Here, we present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.
Phonon-based scalable platform for chip-scale quantum computing
Reinke, Charles M.; El-Kady, Ihab
2016-12-19
Here, we present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.
The TOTEM DAQ based on the Scalable Readout System (SRS)
NASA Astrophysics Data System (ADS)
Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio
2018-02-01
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the data recorded. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance measured during the commissioning phase at the LHC Interaction Point.
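As a rough consistency check of the figures above, the quoted 2 GB/s aggregate throughput and ~24 kHz trigger rate imply an average event size on the order of 80 kB. The snippet below only makes that back-of-the-envelope arithmetic explicit; the 1 GB = 10^9 bytes convention is an assumption, and the resulting event size is not a number stated in the abstract.

```python
# Back-of-the-envelope event-size estimate from the quoted DAQ figures.
throughput_bytes_per_s = 2e9   # 2 GB/s aggregate throughput (assuming 1 GB = 1e9 B)
trigger_rate_hz = 24e3         # ~24 kHz maximum sustained trigger rate

avg_event_size_kb = throughput_bytes_per_s / trigger_rate_hz / 1e3
print(f"average event size ~ {avg_event_size_kb:.0f} kB")  # ~83 kB per event
```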
Scalable and sustainable electrochemical allylic C-H oxidation
NASA Astrophysics Data System (ADS)
Horn, Evan J.; Rosen, Brandon R.; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.
2016-05-01
New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”. Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.
Jiang, Liren
2017-01-01
Objective: The aim was to develop scalable Whole Slide Imaging (sWSI), a WSI system based on mainstream smartphones coupled with regular optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, supporting both oil and dry objective lenses of different magnifications, and reasonably high throughput. These performance metrics should be evaluated by expert pathologists and match those of high-end scanners. Methods: In the sWSI design, the digitization process is split asynchronously between light-weight clients on smartphones and powerful cloud servers. The client apps automatically capture FoVs at up to 12-megapixel resolution and process them in real time to track the operation of users, then give instant feedback and guidance. The servers first restitch each pair of FoVs, then automatically correct the unknown nonlinear distortion introduced by the lens of the smartphone on the fly, based on pair-wise stitching, before finally combining all FoVs into one gigapixel VS for each scan. These VSs can be viewed using Internet browsers anywhere. In the evaluation experiment, 100 frozen section slides from patients randomly selected among in-patients of the participating hospital were scanned by both a high-end Leica scanner and sWSI. All VSs were examined by senior pathologists whose diagnoses were compared against those made using optical microscopy as ground truth to evaluate the image quality. Results: The sWSI system was developed for both Android and iPhone smartphones and is currently being offered to the public. The image quality is reliable and throughput is approximately 1 FoV per second, yielding a 15-by-15 mm slide under a 20X objective lens in approximately 30-35 minutes, with little training required for the operator. The expected cost for setup is approximately US $100 and scanning each slide costs between US $1 and $10, making sWSI highly cost-effective for infrequent or low-throughput usage. In the clinical evaluation of sample-wise diagnostic reliability, average accuracy scores achieved by sWSI-scan-based diagnoses were as follows: 0.78 for breast, 0.88 for uterine corpus, 0.68 for thyroid, and 0.50 for lung samples. The respective low-sensitivity rates were 0.05, 0.05, 0.13, and 0.25, while the respective low-specificity rates were 0.18, 0.08, 0.20, and 0.25. The participating pathologists agreed that the overall quality of sWSI was generally on par with that produced by high-end scanners and did not affect diagnosis in most cases. Pathologists confirmed that sWSI is reliable enough for standard diagnoses of most tissue categories, while it can be used for quick screening of difficult cases. Conclusions: As an ultra-low-cost alternative to whole slide scanners, diagnosis-ready VS quality and robustness for commercial usage is achieved in the sWSI solution. Operated on mainstream smartphones installed on normal optical microscopes, sWSI readily offers affordable and reliable WSI to resource-limited or infrequent clinical users. PMID:28916508
Analysis and Testing of Mobile Wireless Networks
NASA Technical Reports Server (NTRS)
Alena, Richard; Evenson, Darin; Rundquist, Victor; Clancy, Daniel (Technical Monitor)
2002-01-01
Wireless networks are being used to connect mobile computing elements in more applications as the technology matures. There are now many products (such as 802.11 and 802.11b) which run in the ISM frequency band and comply with wireless network standards. They are being used increasingly to link mobile intranets into wired networks. Standard methods of analyzing and testing their performance and compatibility are needed to determine the limits of the technology. This paper presents analytical and experimental methods of determining network throughput, range and coverage, and interference sources. Both radio frequency (RF) domain and network domain analysis have been applied to determine wireless network throughput and range in the outdoor environment. Comparison of field test data taken under optimal conditions with performance predicted from RF analysis yielded quantitative results applicable to future designs. Layering multiple wireless networks can increase performance. Wireless network components can be set to different radio frequency-hopping sequences or spreading functions, allowing more than one network to coexist. Therefore, we ran multiple 802.11-compliant systems concurrently in the same geographical area to determine interference effects and scalability. The results can be used to design more robust networks which have multiple layers of wireless data communication paths and provide increased throughput overall.
Progress Report: Transportable Gasifier for On-Farm Disposal ...
A prototype transportable gasifier intended to process a minimum of 25 tons per day of animal mortalities (scalable to 200 tons per day) was built as part of an interagency effort involving the U.S. Environmental Protection Agency, the Department of Homeland Security, the U.S. Department of Agriculture, and the Department of Defense, as well as the State of North Carolina. This effort is intended to demonstrate the feasibility of gasification for disposal of contaminated carcasses and to identify technical challenges and improvements that will simplify, improve, and enhance the gasifier system as a mobile response tool. Initial testing of the prototype in 2008 and 2010 demonstrated partial success by meeting the transportability and rapid deployment requirements. However, the throughput of animal carcasses was approximately one third of the intended design capacity. Modifications have been made to the fuel system, burner system, feed system, control system, power distribution, and ash handling system to increase its operating capacity to the rated design throughput. Further testing will be performed to demonstrate the throughput as well as the ability of the unit to operate around the clock for an extended period of time. This report gives a status update on the progress of the project; its purpose is to give an update on the Transportable Animal Carcass Gasifier.
Grandjean, Geoffrey; Graham, Ryan; Bartholomeusz, Geoffrey
2011-11-01
In recent years high throughput screening operations have become a critical application in functional and translational research. Although a seemingly unmanageable amount of data is generated by these high-throughput, large-scale techniques, through careful planning, an effective Laboratory Information Management System (LIMS) can be developed and implemented in order to streamline all phases of a workflow. Just as important as data mining and analysis procedures at the end of complex processes is the tracking of individual steps of applications that generate such data. Ultimately, the use of a customized LIMS will enable users to extract meaningful results from large datasets while trusting the robustness of their assays. To illustrate the design of a custom LIMS, this practical example is provided to highlight the important aspects of the design of a LIMS to effectively modulate all aspects of an siRNA screening service. This system incorporates inventory management, control of workflow, data handling and interaction with investigators, statisticians and administrators. All these modules are regulated in a synchronous manner within the LIMS. © 2011 Bentham Science Publishers
High dimensional biological data retrieval optimization with NoSQL technology.
Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike
2014-01-01
High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, querying relational databases for hundreds of different patient gene expression records is slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance over MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.
High dimensional biological data retrieval optimization with NoSQL technology
2014-01-01
Background: High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, querying relational databases for hundreds of different patient gene expression records is slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results: In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance over MongoDB. Conclusions: The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data. PMID:25435347
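As a concrete illustration of the kind of key-value layout described above, the sketch below stores one expression value per (trial, patient, probe) row in HBase through the happybase Python client. The table name, row-key layout, column family, and Thrift host are illustrative assumptions for the sketch, not the schema or deployment used by the authors, and the table with its "e" column family is assumed to already exist.

```python
import happybase  # thin Python client for HBase's Thrift gateway

# Connect to a hypothetical HBase Thrift server fronting the cluster.
connection = happybase.Connection("hbase-thrift.example.org")
table = connection.table("expression")  # assumed pre-created with family "e"

# Assumed row-key layout: "<trial>|<patient>|<probe>", so that all values
# for one patient sort together and can be fetched with a prefix scan.
def put_expression(trial: str, patient: str, probe: str, value: float) -> None:
    row_key = f"{trial}|{patient}|{probe}".encode()
    table.put(row_key, {b"e:value": str(value).encode()})

def patient_expression(trial: str, patient: str):
    """Prefix scan returning (probe, value) pairs for one patient."""
    prefix = f"{trial}|{patient}|".encode()
    for key, data in table.scan(row_prefix=prefix):
        probe = key.decode().rsplit("|", 1)[-1]
        yield probe, float(data[b"e:value"])

put_expression("GSE2658", "patient42", "201292_at", 8.73)
print(list(patient_expression("GSE2658", "patient42")))
```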
Evaluation of a New Remote Handling Design for High Throughput Annular Centrifugal Contactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
David H. Meikrantz; Troy G. Garn; Jack D. Law
2009-09-01
Advanced designs of nuclear fuel recycling plants are expected to include more ambitious goals for aqueous based separations, including higher separations efficiency, high-level waste minimization, and a greater focus on continuous processes to minimize cost and footprint. Therefore, Annular Centrifugal Contactors (ACCs) are destined to play a more important role for such future processing schemes. Previous efforts defined and characterized the performance of commercial 5 cm and 12.5 cm single-stage ACCs in a “cold” environment. The next logical step, the design and evaluation of remote capable pilot scale ACCs in a “hot” or radioactive environment, was reported earlier. This report includes the development of remote designs for ACCs that can process the large throughput rates needed in future nuclear fuel recycling plants. Novel designs were developed for the remote interconnection of contactor units, clean-in-place and drain connections, and a new solids removal collection chamber. A three stage, 12.5 cm diameter rotor module has been constructed and evaluated for operational function and remote handling in highly radioactive environments. This design is scalable to commercial CINC ACC models from V-05 to V-20 with total throughput rates ranging from 20 to 650 liters per minute. The V-05R three stage prototype was manufactured by the commercial vendor for ACCs in the U.S., CINC mfg. It employs three standard V-05 clean-in-place (CIP) units modified for remote service and replacement via new methods of connection for solution inlets, outlets, drain and CIP. Hydraulic testing and functional checks were successfully conducted and then the prototype was evaluated for remote handling and maintenance suitability. Removal and replacement of the center position V-05R ACC unit in the three stage prototype was demonstrated using an overhead rail mounted PaR manipulator. This evaluation confirmed the efficacy of this innovative design for interconnecting and cleaning individual stages while retaining the benefits of commercially reliable ACC equipment for remote applications in the nuclear industry. Minor modifications and suggestions for improved manual remote servicing by the remote handling specialists were provided, but successful removal and replacement was demonstrated in the first prototype.
Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System.
Conway, Michael K; Gerger, Michael J; Balay, Erin E; O'Connell, Rachel; Hanson, Seth; Daily, Neil J; Wakatsuki, Tetsuro
2015-05-14
Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceutics and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large scale cell production. Yet that can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and scale-up becomes prohibitively expensive. To enable high-throughput, large-scale stem cell production and remove operator influence, novel stem cell culture protocols using a bench-top multi-channel liquid handling robot were developed that require minimal technician involvement or experience. With these protocols, human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. Then the necessary reagents are prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulations designed for 96-well plates. This technology will reduce the labor and technical burden to produce large numbers of identical stem cells for a myriad of applications.
ACTS 118x Final Report High-Speed TCP Interoperability Testing
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Zernic, Mike; Hoder, Douglas J.; Brooks, David E.; Beering, Dave R.; Welch, Arun
1999-01-01
With the recent explosion of the Internet and the enormous business opportunities available to communication system providers, great interest has developed in improving the efficiency of data transfer using the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite. Satellite system providers are interested in solving TCP efficiency problems associated with long delays and error-prone links. Similarly, the terrestrial community is interested in solving TCP problems over high-bandwidth links, whereas the wireless community is interested in improving TCP performance over bandwidth-constrained, error-prone links. NASA realized that solutions had already been proposed for most of the problems associated with efficient data transfer over large bandwidth-delay links (which include satellite links). The solutions are detailed in various Internet Engineering Task Force (IETF) Requests for Comments (RFCs). Unfortunately, most of these solutions had not been tested at high speed (155+ Mbps). Therefore, NASA's ACTS experiments program initiated a series of TCP experiments to demonstrate the scalability of TCP/IP and determine how far the protocol can be optimized over a 622 Mbps satellite link. These experiments were known as the 118i and 118j experiments. During the 118i and 118j experiments, NASA worked closely with Sun Microsystems and FORE Systems to improve the operating system, TCP stacks, and network interface cards and drivers. We were able to obtain instantaneous data throughput rates of greater than 520 Mbps and average throughput rates of 470 Mbps using TCP over Asynchronous Transfer Mode (ATM) over a 622 Mbps Synchronous Optical Network (SONET) OC12 link. Following the success of these experiments and the successful government/industry collaboration, a new series of experiments, the 118x experiments, was developed.
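A worked number behind these large bandwidth-delay experiments: the TCP window needed to keep a link full is the bandwidth-delay product. The sketch below computes it for a 622 Mbps geostationary satellite path; the ~550 ms round-trip time is an assumed typical GEO value for illustration, not a figure from the report.

```python
# Bandwidth-delay product for a GEO satellite TCP path (illustrative numbers).
link_rate_bps = 622e6   # OC12 / 622 Mbps link, as in the experiments
rtt_s = 0.550           # assumed GEO round-trip time (~550 ms), not from the report

bdp_bytes = link_rate_bps * rtt_s / 8
print(f"window needed to fill the pipe: {bdp_bytes / 1e6:.1f} MB")
# ~42.8 MB, far beyond TCP's classic 64 kB window -- hence the interest in
# RFC 1323-style window scaling for links like this one.
```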
Development of a central nervous system axonal myelination assay for high throughput screening.
Lariosa-Willingham, Karen D; Rosler, Elen S; Tung, Jay S; Dugas, Jason C; Collins, Tassie L; Leonoudakis, Dmitri
2016-04-22
Regeneration of new myelin is impaired in persistent multiple sclerosis (MS) lesions, leaving neurons unable to function properly and subject to further degeneration. Current MS therapies attempt to ameliorate autoimmune-mediated demyelination, but none directly promote the regeneration of lost and damaged myelin of the central nervous system (CNS). Development of new drugs that stimulate remyelination has been hampered by the inability to evaluate axonal myelination in a rapid CNS culture system. We established a high throughput cell-based assay to identify compounds that promote myelination. Culture methods were developed for initiating myelination in vitro using primary embryonic rat cortical cells. We developed an immunofluorescent phenotypic image analysis method to quantify the morphological alignment of myelin characteristic of the initiation of myelination. Using γ-secretase inhibitors as promoters of myelination, the optimal growth, time course and compound treatment conditions were established in a 96 well plate format. We have characterized the cortical myelination assay by evaluating the cellular composition of the cultures and expression of markers of differentiation over the time course of the assay. We have validated the assay scalability and consistency by screening the NIH clinical collection library of 727 compounds and identified ten compounds that promote myelination. Half maximal effective concentration (EC50) values for these compounds were determined to rank them according to potency. We have designed the first high capacity in vitro assay that assesses myelination of live axons. This assay will be ideal for screening large compound libraries to identify new drugs that stimulate myelination. Identification of agents capable of promoting the myelination of axons will likely lead to the development of new therapeutics for MS patients.
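The EC50 ranking mentioned above is typically obtained by fitting a four-parameter logistic (Hill) dose-response curve to each hit's data. The sketch below shows such a fit with SciPy on synthetic data as a generic illustration; the concentrations, responses, and parameter names are assumptions, not the assay's actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

# Synthetic dose-response data for one hypothetical compound (myelination
# score vs. concentration in uM); real assay data would replace this.
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
rng = np.random.default_rng(0)
response = four_pl(doses, 5, 95, 0.4, 1.2) + rng.normal(0, 3, doses.size)

params, _ = curve_fit(four_pl, doses, response,
                      p0=[response.min(), response.max(), 1.0, 1.0])
print(f"fitted EC50 ~ {params[2]:.2f} uM")  # compounds are ranked by this value
```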
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.
Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena
2017-10-17
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines.
Standardized Method for High-throughput Sterilization of Arabidopsis Seeds
Calhoun, Chistopher S.; Grotewold, Erich; Brkljacic, Jelena
2017-01-01
Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and inexpensive seed sterilization for a large number of Arabidopsis lines. PMID:29155739
Astronomy In The Cloud: Using Mapreduce For Image Coaddition
NASA Astrophysics Data System (ADS)
Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-01-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these datastreams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.
Astronomy in the Cloud: Using MapReduce for Image Co-Addition
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-03-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
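A minimal sketch of how image co-addition maps onto the MapReduce model described above: the mapper emits each registered pixel keyed by its output sky coordinate, and the reducer combines all contributions landing on the same output pixel into a weighted mean. This is a toy illustration of the pattern, not the SDSS/Hadoop pipeline itself; a runner such as Hadoop Streaming is assumed to wire the two functions together, and the tiny in-memory shuffle below only stands in for that machinery.

```python
from collections import defaultdict

def map_image(image_id, pixels):
    """Mapper: emit (sky_pixel, (value, weight)) for every registered pixel.

    `pixels` is an iterable of (sky_x, sky_y, value) tuples already
    astrometrically registered to the common output grid.
    """
    for sky_x, sky_y, value in pixels:
        yield (sky_x, sky_y), (value, 1.0)

def reduce_pixel(sky_pixel, contributions):
    """Reducer: weighted mean of all exposures covering one output pixel."""
    total = weight = 0.0
    for value, w in contributions:
        total += value * w
        weight += w
    return sky_pixel, total / weight

# Tiny in-memory run of the same pattern (a real job would use Hadoop).
shuffle = defaultdict(list)
exposures = {"img1": [(0, 0, 1.0), (0, 1, 2.0)],
             "img2": [(0, 0, 3.0)]}
for image_id, pixels in exposures.items():
    for key, val in map_image(image_id, pixels):
        shuffle[key].append(val)
coadd = dict(reduce_pixel(k, v) for k, v in shuffle.items())
print(coadd)   # {(0, 0): 2.0, (0, 1): 2.0}
```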
Robust DNA Isolation and High-throughput Sequencing Library Construction for Herbarium Specimens.
Saeidi, Saman; McKain, Michael R; Kellogg, Elizabeth A
2018-03-08
Herbaria are an invaluable source of plant material that can be used in a variety of biological studies. The use of herbarium specimens is associated with a number of challenges including sample preservation quality, degraded DNA, and destructive sampling of rare specimens. In order to more effectively use herbarium material in large sequencing projects, a dependable and scalable method of DNA isolation and library preparation is needed. This paper demonstrates a robust, beginning-to-end protocol for DNA isolation and high-throughput library construction from herbarium specimens that does not require modification for individual samples. This protocol is tailored for low quality dried plant material and takes advantage of existing methods by optimizing tissue grinding, modifying library size selection, and introducing an optional reamplification step for low-yield libraries. Reamplification of low-yield DNA libraries can rescue samples derived from irreplaceable and potentially valuable herbarium specimens, negating the need for additional destructive sampling without introducing discernible sequencing bias for common phylogenetic applications. The protocol has been tested on hundreds of grass species, but is expected to be adaptable for use in other plant lineages after verification. This protocol can be limited by extremely degraded DNA, where fragments do not exist in the desired size range, and by secondary metabolites present in some plant material that inhibit clean DNA isolation. Overall, this protocol introduces a fast and comprehensive method that allows for DNA isolation and library preparation of 24 samples in less than 13 h, with only 8 h of active hands-on time and minimal modifications.
All solid state mid-infrared dual-comb spectroscopy platform based on QCL technology
NASA Astrophysics Data System (ADS)
Hugi, Andreas; Geiser, Markus; Villares, Gustavo; Cappelli, Francesco; Blaser, Stephane; Faist, Jérôme
2015-01-01
We develop a spectroscopy platform for industrial applications based on semiconductor quantum cascade laser (QCL) frequency combs. The platform's key features will be an unmatched combination of a bandwidth of 100 cm-1, a resolution of 100 kHz, acquisition speeds of tens to hundreds of μs, as well as compact size and robustness, opening doors to previously unreachable markets. The sensor can be built extremely compact and robust since the laser source is an all-electrically pumped semiconductor optical frequency comb and no mechanical elements are required. However, the parallel acquisition of dual-comb spectrometers comes at the price of enormous data rates. For system scalability, robustness, and optical simplicity we use free-running QCL combs, so no complicated optical locking mechanisms are required. To reach high signal-to-noise ratios, we develop an algorithm based on a combination of coherent and non-coherent averaging. This algorithm is specifically optimized for free-running, small-footprint (and therefore high-repetition-rate) comb sources. As a consequence, our system generates data rates of up to 3.2 GB/sec. These data rates need to be reduced by several orders of magnitude in real time in order to be useful for spectral fitting algorithms. We present the development of a data-treatment solution, which reaches a single-channel throughput of 22% using a standard laptop computer. Using a state-of-the-art desktop computer, the throughput is increased to 43%. Combined with a data-acquisition board, this forms a stand-alone data-processing unit, allowing real-time industrial process observation and continuous averaging to achieve the highest signal fidelity.
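A rough sketch of the kind of mixed averaging described above, under the assumption that interferogram records have already been corrected and resampled onto a common time grid; the block size, test signal, and the exact split between the coherent and non-coherent stages are illustrative choices, not the paper's algorithm:

```python
# Simplified mixed coherent/non-coherent averaging of dual-comb interferograms
# (illustrative assumption: records are already corrected and resampled onto a
# common time grid; the paper's actual phase-correction scheme is omitted).
import numpy as np

def averaged_spectrum(records, block_size):
    """Coherently (time-domain) average records within each block, then
    non-coherently average the magnitude spectra of the blocks."""
    n = (len(records) // block_size) * block_size
    blocks = records[:n].reshape(-1, block_size, records.shape[1])
    coherent = blocks.mean(axis=1)                   # coherent average per block
    spectra = np.abs(np.fft.rfft(coherent, axis=1))  # magnitude spectra
    return spectra.mean(axis=0)                      # non-coherent average

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048, endpoint=False)
records = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal((64, t.size))
print(averaged_spectrum(records, block_size=8).shape)   # (1025,)
```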
Competitive Genomic Screens of Barcoded Yeast Libraries
Urbanus, Malene; Proctor, Michael; Heisler, Lawrence E.; Giaever, Guri; Nislow, Corey
2011-01-01
By virtue of advances in next generation sequencing technologies, we have access to new genome sequences almost daily. The tempo of these advances is accelerating, promising greater depth and breadth. In light of these extraordinary advances, the need for fast, parallel methods to define gene function becomes ever more important. Collections of genome-wide deletion mutants in yeasts and E. coli have served as workhorses for functional characterization of gene function, but this approach is not scalable: current gene-deletion approaches require each of the thousands of genes that comprise a genome to be deleted and verified. Only after this work is complete can we pursue high-throughput phenotyping. Over the past decade, our laboratory has refined a portfolio of competitive, miniaturized, high-throughput genome-wide assays that can be performed in parallel. This parallelization is possible because of the inclusion of DNA 'tags', or 'barcodes,' into each mutant; the barcode serves as a proxy for the mutation, and its abundance can be measured to assess mutant fitness. In this study, we seek to fill the gap between DNA sequence and barcoded mutant collections. To accomplish this, we introduce a combined transposon disruption-barcoding approach that opens up parallel barcode assays to newly sequenced, but poorly characterized microbes. To illustrate this approach we present a new Candida albicans barcoded disruption collection and describe how both microarray-based and next generation sequencing-based platforms can be used to collect 10,000 - 1,000,000 gene-gene and drug-gene interactions in a single experiment. PMID:21860376
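To illustrate why barcode abundance works as a fitness proxy, here is a minimal sketch of the usual scoring pattern (a simple log2 fold-change of relative abundance between time points; the barcode names are hypothetical and this is not the authors' specific pipeline):

```python
# Minimal sketch of scoring mutant fitness from barcode counts (a common
# analysis pattern; not the specific pipeline used in the paper).
import math

def fitness_scores(counts_t0, counts_t1, pseudocount=1.0):
    """Log2 change in relative barcode abundance between two time points."""
    total0 = sum(counts_t0.values())
    total1 = sum(counts_t1.values())
    scores = {}
    for barcode in counts_t0:
        f0 = (counts_t0[barcode] + pseudocount) / total0
        f1 = (counts_t1.get(barcode, 0) + pseudocount) / total1
        scores[barcode] = math.log2(f1 / f0)
    return scores

print(fitness_scores({"bc_geneA": 500, "bc_geneB": 500},
                     {"bc_geneA": 900, "bc_geneB": 100}))
# geneA is enriched (positive score), geneB is depleted (negative score)
```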
Genome amplification of single sperm using multiple displacement amplification.
Jiang, Zhengwen; Zhang, Xingqi; Deka, Ranjan; Jin, Li
2005-06-07
Sperm typing is an effective way to study recombination rate on a fine scale in regions of interest. There are two strategies for the amplification of single meiotic recombinants: repulsion-phase allele-specific PCR and whole genome amplification (WGA). The former can selectively amplify single recombinant molecules from a batch of sperm but is not scalable for high-throughput operation. Currently, primer extension pre-amplification is the only method used in WGA of single sperm, but it has limited capacity to produce high-coverage products sufficient for the analysis of local recombination rate in multiple large regions. Here, we applied for the first time a recently developed WGA method, multiple displacement amplification (MDA), to amplify single sperm DNA, and demonstrated its great potential for producing high-yield and high-coverage products. In a 50 μl reaction, 76 or 93% of loci can be amplified at least 2500- or 250-fold, respectively, from single sperm DNA, and second-round MDA can further offer >200-fold amplification. The MDA products are usable for a variety of genetic applications, including sequencing and microsatellite marker and single nucleotide polymorphism (SNP) analysis. The use of MDA in single sperm amplification may open a new era for studies on local recombination rates.
CRISPR/Cas9-mediated knock-in of an optimized TetO repeat for live cell imaging of endogenous loci.
Tasan, Ipek; Sustackova, Gabriela; Zhang, Liguo; Kim, Jiah; Sivaguru, Mayandi; HamediRad, Mohammad; Wang, Yuchuan; Genova, Justin; Ma, Jian; Belmont, Andrew S; Zhao, Huimin
2018-06-15
Nuclear organization has an important role in determining genome function; however, it is not clear how spatiotemporal organization of the genome relates to functionality. To elucidate this relationship, a method for tracking any locus of interest is desirable. Recently clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) or transcription activator-like effectors were adapted for imaging endogenous loci; however, they are mostly limited to visualization of repetitive regions. Here, we report an efficient and scalable method named SHACKTeR (Short Homology and CRISPR/Cas9-mediated Knock-in of a TetO Repeat) for live cell imaging of specific chromosomal regions without the need for a pre-existing repetitive sequence. SHACKTeR requires only two modifications to the genome: CRISPR/Cas9-mediated knock-in of an optimized TetO repeat and its visualization by TetR-EGFP expression. Our simplified knock-in protocol, utilizing short homology arms integrated by polymerase chain reaction, was successful at labeling 10 different loci in HCT116 cells. We also showed the feasibility of knock-in into lamina-associated, heterochromatin regions, demonstrating that these regions prefer non-homologous end joining for knock-in. Using SHACKTeR, we were able to observe DNA replication at a specific locus by long-term live cell imaging. We anticipate the general applicability and scalability of our method will enhance causative analyses between gene function and compartmentalization in a high-throughput manner.
Parallel processing architecture for H.264 deblocking filter on multi-core platforms
NASA Astrophysics Data System (ADS)
Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao
2012-03-01
Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in the H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder can be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit depths and richer chroma subsampling patterns such as YUV 4:2:2 or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing at the independent macroblock, sub-block, and pixel row levels are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). Multiple DFU instances can be used, catering to different performance needs; the DFM serves the data required by the configured number of DFUs and also manages all the neighboring data required for future processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
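The macroblock-level parallelism mentioned above follows the usual left/top dependency pattern of deblocking. The short sketch below only illustrates how anti-diagonal wavefronts of mutually independent macroblocks can be formed for parallel scheduling; it is not the HyperX DFU/DFM design:

```python
# Sketch of wavefront scheduling for macroblock-level parallelism
# (illustrates the dependency pattern only; not the HyperX DFU/DFM design).
def wavefronts(mb_cols, mb_rows):
    """Deblocking a macroblock typically depends on its left and top neighbours,
    so all macroblocks on the same anti-diagonal can be filtered in parallel."""
    for d in range(mb_cols + mb_rows - 1):
        yield [(x, d - x) for x in range(mb_cols) if 0 <= d - x < mb_rows]

for batch in wavefronts(4, 3):
    print(batch)   # each printed batch is an independently schedulable group
```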
Identifying chemicals that provide a specific function within a product, yet have minimal impact on the human body or environment, is the goal of most formulation chemists and engineers practicing green chemistry. We present a methodology to identify potential chemical functional...
Hao, Xiaoming; Zhu, Jian; Jiang, Xiong; Wu, Haitao; Qiao, Jinshuo; Sun, Wang; Wang, Zhenhua; Sun, Kening
2016-05-11
Polymeric nanomaterials emerge as key building blocks for engineering materials in a variety of applications. In particular, high modulus polymeric nanofibers are suitable for preparing flexible yet strong membrane separators that prevent the growth and penetration of lithium dendrites for safe and reliable high energy lithium metal-based batteries. High ionic conductance, scalability, and low cost are other attributes of the separator required for practical implementation. Available materials so far struggle to comply with such stringent criteria. Here, we demonstrate a high-yield exfoliation of ultrastrong poly(p-phenylene benzobisoxazole) nanofibers from Zylon microfibers. A highly scalable blade casting process is used to assemble these nanofibers into nanoporous membranes. These membranes possess ultimate strengths of 525 MPa, Young's moduli of 20 GPa, thermal stability up to 600 °C, and impressively low ionic resistance, enabling their use as dendrite-suppressing membrane separators in electrochemical cells. With such high-performance separators, reliable lithium metal-based batteries operated at 150 °C are also demonstrated. These polybenzoxazole nanofibers would enrich the existing library of strong nanomaterials and serve as a promising material for large-scale and cost-effective safe energy storage.
Askar, Khalid; Leo, Sin-Yen; Xu, Can; Liu, Danielle; Jiang, Peng
2016-11-15
Here we report a rapid and scalable bottom-up technique for layer-by-layer (LBL) assembling near-infrared-active colloidal photonic crystals consisting of large (⩾1μm) silica microspheres. By combining a new electrostatics-assisted colloidal transferring approach with spontaneous colloidal crystallization at an air/water interface, we have demonstrated that the crystal transfer speed of traditional Langmuir-Blodgett-based colloidal assembly technologies can be enhanced by nearly 2 orders of magnitude. Importantly, the crystalline quality of the resultant photonic crystals is not compromised by this rapid colloidal assembly approach. They exhibit thickness-dependent near-infrared stop bands and well-defined Fabry-Perot fringes in the specular transmission and reflection spectra, which match well with the theoretical calculations using a scalar-wave approximation model and Fabry-Perot analysis. This simple yet scalable bottom-up technology can significantly improve the throughput in assembling large-area, multilayer colloidal crystals, which are of great technological importance in a variety of optical and non-optical applications ranging from all-optical integrated circuits to tissue engineering. Copyright © 2016 Elsevier Inc. All rights reserved.
Efficient data management tools for the heterogeneous big data warehouse
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.
2016-09-01
The traditional RDBMS is designed around normalized data structures. RDBMSs have served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil-gas industry, experiments at the Large Hadron Collider, etc. Several challenges have recently been raised concerning the scalability of data-warehouse-like workloads against the transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied, and large amounts of data. The evaluation is based on the performance, throughput, and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as a description of the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
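As a generic illustration of the row-to-document step in such a migration (a sketch only; the table, column names, and target collection are invented, and the paper's back-end application is not reproduced here):

```python
# Generic sketch of flattening relational rows into JSON-like documents before
# loading them into a NoSQL store (table and collection names are invented;
# this is not the back-end application described in the paper).
import sqlite3

def rows_to_documents(conn, table):
    """Return one dict ("document") per row of the given table."""
    conn.row_factory = sqlite3.Row
    return [dict(row) for row in conn.execute(f"SELECT * FROM {table}")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (id INTEGER, detector TEXT, events INTEGER)")
conn.executemany("INSERT INTO runs VALUES (?, ?, ?)", [(1, "A", 120), (2, "B", 340)])
print(rows_to_documents(conn, "runs"))
# With a running MongoDB instance, the documents could then be bulk-loaded, e.g.:
#   from pymongo import MongoClient
#   MongoClient()["warehouse"]["runs"].insert_many(rows_to_documents(conn, "runs"))
```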
Kortz, Linda; Helmschrodt, Christin; Ceglarek, Uta
2011-03-01
In the last decade various analytical strategies have been established to enhance separation speed and efficiency in high performance liquid chromatography applications. Chromatographic supports based on monolithic material, small porous particles, and porous layer beads have been developed and commercialized to improve throughput and separation efficiency. This paper provides an overview of current developments in fast chromatography combined with mass spectrometry for the analysis of metabolites and proteins in clinical applications. Advances and limitations of fast chromatography for the combination with mass spectrometry are discussed. Practical aspects of, recent developments in, and the present status of high-throughput analysis of human body fluids for therapeutic drug monitoring, toxicology, clinical metabolomics, and proteomics are presented.
Schuberth, Christian; Won, Hong-Hee; Blattmann, Peter; Joggerst-Thomalla, Brigitte; Theiss, Susanne; Asselta, Rosanna; Duga, Stefano; Merlini, Pier Angelica; Ardissino, Diego; Lander, Eric S.; Gabriel, Stacey; Rader, Daniel J.; Peloso, Gina M.; Kathiresan, Sekar; Runz, Heiko
2015-01-01
A fundamental challenge to contemporary genetics is to distinguish rare missense alleles that disrupt protein functions from the majority of alleles that are neutral with respect to protein activity. High-throughput experimental tools to reliably discriminate between disruptive and non-disruptive missense alleles are currently missing. Here we establish a scalable cell-based strategy to profile the biological effects and likely disease relevance of rare missense variants in vitro. We apply this strategy to systematically characterize missense alleles in the low-density lipoprotein receptor (LDLR) gene identified through exome sequencing of 3,235 individuals and exome-chip profiling of 39,186 individuals. Our strategy reliably identifies disruptive missense alleles, and disruptive-allele carriers have higher plasma LDL-cholesterol (LDL-C). Importantly, considering experimental data refined the risk estimate for rare LDLR allele carriers from 4.5- to 25.3-fold for high LDL-C, and from 2.1- to 20-fold for early-onset myocardial infarction. Our study generates proof-of-concept that systematic functional variant profiling may empower rare variant-association studies by orders of magnitude. PMID:25647241
Using the iPlant collaborative discovery environment.
Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J
2013-06-01
The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables users at all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.
Tier 3 batch system data locality via managed caches
NASA Astrophysics Data System (ADS)
Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter
2015-05-01
Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of the data is accessed regularly and is thus the deciding factor for overall throughput. Second, data access may fall back to non-local sources, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system, while their automated batch processes are presented with local replications of data whenever possible.
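A toy sketch of the cache-aware placement idea, assuming the batch system can see per-node cache contents; the node names and file paths are hypothetical and the real scheduler integration is far richer:

```python
# Toy sketch of cache-aware job placement (illustrative; the actual HPDA
# batch-system integration is more involved).
def best_node(job_inputs, node_caches):
    """Prefer the worker whose local cache already holds the largest share
    of the job's input files; non-cached files would be read remotely."""
    def cached_fraction(node):
        cache = node_caches[node]
        return sum(1 for f in job_inputs if f in cache) / len(job_inputs)
    return max(node_caches, key=cached_fraction)

caches = {"wn01": {"/data/a.root", "/data/b.root"}, "wn02": {"/data/c.root"}}
print(best_node(["/data/a.root", "/data/b.root", "/data/x.root"], caches))  # wn01
```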
Globus Nexus: A Platform-as-a-Service Provider of Research Identity, Profile, and Group Management.
Chard, Kyle; Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian
2016-03-01
Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.
Globus Nexus: A Platform-as-a-Service provider of research identity, profile, and group management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; Lidman, Mattias; McCollam, Brendan
Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.
Globus Nexus: A Platform-as-a-Service Provider of Research Identity, Profile, and Group Management
Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian
2015-01-01
Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability. PMID:26688598
NASA Astrophysics Data System (ADS)
Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.
2017-12-01
Since the establishment of data archive centers and the standardization of file formats, scientists are required to search metadata catalogs for the data they need and download the data files to their local machines to carry out data analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach. Combined with generally constrained local compute/storage resources, this limits the extent of scientists' studies and deprives them of timely outcomes. Thus, this conventional approach is not scalable with respect to both the volume and variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and broaden the use of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, thereby optimizing bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage, giving the engines direct access to a detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.
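The partition-location map mentioned above can be pictured as a simple lookup that routes each request to the node already holding the relevant chunk; the sketch below uses an invented lat/lon chunking scheme and node names purely for illustration:

```python
# Sketch of partition-aware task routing: send the computation to the node
# that already stores the requested array chunk (hypothetical partition map).
def chunk_id(lat, lon, chunk_deg=10):
    """Bucket a coordinate into a coarse spatial chunk."""
    return (int(lat // chunk_deg), int(lon // chunk_deg))

partition_map = {(3, 12): "node-07", (3, 13): "node-02"}   # chunk -> storage node

def route(lat, lon):
    """Return the node that should run the analysis for this coordinate."""
    return partition_map.get(chunk_id(lat, lon), "any-node")

print(route(35.2, 124.9))   # node-07
```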
Barone, Umberto; Merletti, Roberto
2013-08-01
A compact and portable system for real-time, multichannel HD-sEMG acquisition is presented. The device is based on a modular, multiboard approach for scalability and to optimize power consumption in battery operation. The proposed modular approach allows us to configure the number of sEMG channels from 64 to 424. A plastic-optical-fiber-based 10/100 Ethernet link is implemented on a field-programmable gate array (FPGA)-based board for real-time, safe data transmission toward a personal computer or laptop for data storage and offline analysis. The high-performance A/D conversion stage, based on 24-bit ADCs, allows us to automatically serialize the samples and transmit them on a single SPI bus connecting a sequence of up to 14 ADC chips in chain mode. The prototype is configured to work with 64 channels and a sample frequency of 2.441 ksps (derived from a 25-MHz clock source), corresponding to a real data throughput of 3 Mbps. The prototype was assembled to demonstrate the available features (e.g., scalability) and evaluate the expected performance. The analog front-end board can be dynamically configured to acquire sEMG signals in monopolar or single differential mode by means of the FPGA I/O interface. The system can continuously acquire 64 channels for up to 5 h with a lightweight battery pack of 7.5 Vdc/2200 mAh. A PC-based application was also developed, by means of the open source Qt Development Kit from Nokia, for prototype characterization, sEMG measurements, and real-time visualization of 2-D maps.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
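For intuition about the predictor-plus-Golomb structure of such compressors, here is a small Golomb-Rice sketch over prediction residuals. It uses a fixed coding parameter k and a trivial previous-sample predictor, so it is illustrative only and is not the FL algorithm or the FPGA implementation:

```python
# Golomb-Rice coding of prediction residuals, shown only to illustrate the
# predictor + entropy-coder structure (fixed k here; this is not the FL
# algorithm or the flight FPGA implementation).
def zigzag(r):
    """Map signed residual to non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (r << 1) if r >= 0 else ((-r << 1) - 1)

def rice_encode(value, k):
    """Unary-coded quotient, a '0' separator, then a k-bit remainder."""
    q, rem = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(rem, f"0{k}b")

def encode_residuals(samples, k=2):
    bits, prev = [], samples[0]            # first sample would be sent verbatim
    for s in samples[1:]:
        bits.append(rice_encode(zigzag(s - prev), k))   # previous-sample predictor
        prev = s
    return "".join(bits)

print(encode_residuals([100, 101, 99, 99], k=2))
# residuals +1, -2, 0 -> zigzag 2, 3, 0 -> "010" + "011" + "000"
```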
Heterobimetallic catalysis in asymmetric 1,4-addition of O-alkylhydroxylamine to enones.
Yamagiwa, Noriyuki; Matsunaga, Shigeki; Shibasaki, Masakatsu
2003-12-31
A heterobimetallic YLi3tris(binaphthoxide) catalyst (YLB) promoted the 1,4-addition of O-methylhydroxylamine to enones in high enantiomeric excess (up to 97% ee). Catalyst loading was reduced to as little as 0.5 mol %, still affording the 1,4-adduct in 96% yield and 96% ee. The high concentration of substrates and the scalability of the present system are also practically useful. The results suggested that the heterobimetallic catalyst was not deactivated even in the presence of excess amine under highly concentrated conditions. A Y and Li bimetallic cooperative function was essential for a high catalyst turnover number.
Zhou, Haiying; Purdie, Jennifer; Wang, Tongtong; Ouyang, Anli
2010-01-01
The number of therapeutic proteins produced by cell culture in the pharmaceutical industry continues to increase. During the early stages of manufacturing process development, hundreds of clones and various cell culture conditions are evaluated to develop a robust process to identify and select cell lines with high productivity. It is highly desirable to establish a high throughput system to accelerate process development and reduce cost. Multiwell plates and shake flasks are widely used in the industry as scale-down models for large-scale bioreactors. However, one of the limitations of these two systems is the inability to measure and control pH in a high throughput manner. As pH is an important process parameter for cell culture, this could limit the applications of these scale-down model vessels. An economical, rapid, and robust pH measurement method was developed at Eli Lilly and Company by employing SNARF-4F 5-(and-6)-carboxylic acid. The method demonstrated the ability to measure the pH values of cell culture samples in a high throughput manner. Based upon the chemical equilibrium of CO2, HCO3-, and the buffer system, i.e., HEPES, we established a mathematical model to regulate pH in multiwell plates and shake flasks. The model calculates the required %CO2 from the incubator and the amount of sodium bicarbonate to be added to adjust pH to a preset value. The model was validated by experimental data, and pH was accurately regulated by this method. The feasibility of studying the pH effect on cell culture in 96-well plates and shake flasks was also demonstrated in this study. This work shed light on mini-bioreactor scale-down model construction and paved the way for cell culture process development to improve productivity or product quality using high throughput systems. Copyright 2009 American Institute of Chemical Engineers
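The CO2/bicarbonate part of such a model can be approximated with the textbook Henderson-Hasselbalch relation. The sketch below uses standard constants for 37 °C and ignores the HEPES contribution, so it is only a rough stand-in for the authors' validated model:

```python
# Generic bicarbonate-buffer (Henderson-Hasselbalch) estimate relating medium pH
# to incubator %CO2 and bicarbonate concentration. Constants are textbook values
# for 37 degC; this is not the authors' full model (which also includes HEPES).
import math

PKA_CARBONIC = 6.1          # apparent pKa of the CO2/HCO3- system at 37 degC
SOLUBILITY = 0.0307         # mM dissolved CO2 per mmHg pCO2
P_ATM, P_H2O = 760.0, 47.0  # mmHg; water-saturated incubator air at 37 degC

def medium_ph(hco3_mM, co2_percent):
    pco2 = (co2_percent / 100.0) * (P_ATM - P_H2O)
    return PKA_CARBONIC + math.log10(hco3_mM / (SOLUBILITY * pco2))

def required_co2_percent(target_ph, hco3_mM):
    pco2 = hco3_mM / (SOLUBILITY * 10 ** (target_ph - PKA_CARBONIC))
    return 100.0 * pco2 / (P_ATM - P_H2O)

print(round(medium_ph(24.0, 5.0), 2))             # ~7.4 with 24 mM bicarbonate, 5% CO2
print(round(required_co2_percent(7.0, 24.0), 1))  # ~14% CO2 needed to hold pH 7.0
```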
Background: Adverse outcome pathways (AOPs) link adverse effects in individuals or populations to a molecular initiating event (MIE) that can be quantified using in vitro methods. Practical application of AOPs in chemical-specific risk assessment requires incorporation of knowled...
Scalable and Sustainable Electrochemical Allylic C–H Oxidation
Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.
2016-01-01
New methods and strategies for the direct functionalization of C–H bonds are beginning to reshape the fabric of retrosynthetic analysis, impacting the synthesis of natural products, medicines, and even materials1. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C–H functionalization due to the utility of enones and allylic alcohols as versatile intermediates, along with their prevalence in natural and unnatural materials2. Allylic oxidations have been featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”3. Despite many attempts to improve the efficiency and practicality of this powerful transformation, the vast majority of conditions still employ highly toxic reagents (based around toxic elements such as chromium, selenium, etc.) or expensive catalysts (palladium, rhodium, etc.)2. These requirements are highly problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. As such, this oxidation strategy is rarely embraced for large-scale synthetic applications, limiting the adoption of this important retrosynthetic strategy by industrial scientists. In this manuscript, we describe an electrochemical solution to this problem that exhibits broad substrate scope, operational simplicity, and high chemoselectivity. This method employs inexpensive and readily available materials, representing the first example of a scalable allylic C–H oxidation (demonstrated on 100 grams), finally opening the door for the adoption of this C–H oxidation strategy in large-scale industrial settings without significant environmental impact. PMID:27096371
Selection and Presentation of Commercially Available Electronic Resources: Issues and Practices.
ERIC Educational Resources Information Center
Jewell, Timothy D.
This report focuses on practices related to the selection and presentation of commercially available electronic resources. As part of the Digital Library Federation's Collection Practices Initiative, the report also shares the goal of identifying and propagating practices that support the growth of sustainable and scalable collections. It looks in…
Closha: bioinformatics workflow system for the analysis of massive sequencing data.
Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook
2018-02-19
While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in biomedical care and genomic research. The rapidly increasing amounts of data available from the new high-throughput methods have made data processing infeasible without automated pipelines. The integration of data and analytic resources into workflow systems provides a solution to the problem by simplifying the task of data analysis. To address this challenge, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag and drop functionality and to modify the parameters of pipeline tools. Users can also import Galaxy pipelines into Closha. Closha is a hybrid system that enables users to use both analysis programs providing traditional tools and MapReduce-based big data analysis programs simultaneously in a single pipeline. Thus, the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit a large amount of data at a fast rate. KoDS has a file transfer speed of up to 10 times that of normal FTP and HTTP. The computer hardware for Closha is 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. Closha supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Closha provides a user-friendly interface that allows genomic scientists to derive accurate results from NGS platform data. The Closha cloud server is freely available for use at http://closha.kobic.re.kr/.
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
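One common way to make a very large peptide index scalable is to hash-shard it across nodes so each query only touches a single shard. The sketch below shows that pattern with invented peptides and shard counts; it is not the actual ComPIL/Blazmass schema:

```python
# Sketch of hash-sharding a large peptide index so lookups can be spread over
# many database nodes (generic pattern; not the ComPIL/Blazmass schema).
import hashlib

N_SHARDS = 16

def shard_of(peptide):
    """Stable shard assignment from a hash of the peptide sequence."""
    digest = hashlib.md5(peptide.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_SHARDS

shards = [{} for _ in range(N_SHARDS)]
for peptide, proteins in {"PEPTIDEK": ["P1"], "SAMPLER": ["P2", "P3"]}.items():
    shards[shard_of(peptide)][peptide] = proteins      # build the sharded index

query = "PEPTIDEK"
print(shards[shard_of(query)].get(query))   # only one shard consulted per query
```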
Platinum clusters with precise numbers of atoms for preparative-scale catalysis.
Imaoka, Takane; Akanuma, Yuki; Haruta, Naoki; Tsuchiya, Shogo; Ishihara, Kentaro; Okayasu, Takeshi; Chun, Wang-Jae; Takahashi, Masaki; Yamamoto, Kimihisa
2017-09-25
Subnanometer noble metal clusters have enormous potential, mainly for catalytic applications. Because a difference of only one atom may cause significant changes in their reactivity, a preparation method with atomic-level precision is essential. Although such precision has been achieved by gas-phase synthesis, large-scale preparation with comparable precision is still at the frontier, hampering practical applications. We now show the atom-precise and fully scalable synthesis of platinum clusters on a milligram scale from tiara-like platinum complexes with various ring numbers (n = 5-13). Low-temperature calcination of the complexes on a carbon support under a hydrogen stream affords monodispersed platinum clusters whose atomicity is equivalent to that of the precursor complex. One of the clusters (Pt10) exhibits high catalytic activity in the hydrogenation of styrene compared to that of the other clusters. This method opens an avenue for the application of these clusters to preparative-scale catalysis. The catalytic activity of a noble metal nanocluster is tied to its atomicity. Here, the authors report an atom-precise, fully scalable synthesis of platinum clusters from molecular ring precursors, and show that a variation of only one atom can dramatically change a cluster's reactivity.
Aphinyanaphongs, Yin; Fu, Lawrence D; Aliferis, Constantin F
2013-01-01
Building machine learning models that identify unproven cancer treatments on the Health Web is a promising approach for dealing with the dissemination of false and dangerous information to vulnerable health consumers. Aside from the obvious requirement of accuracy, two issues are of practical importance in deploying these models in real-world applications. (a) Generalizability: the models must generalize to all treatments (not just the ones used in training the models). (b) Scalability: the models must be applicable efficiently to billions of documents on the Health Web. First, we provide methods and related empirical data demonstrating strong accuracy and generalizability. Second, by combining the MapReduce distributed architecture and high-dimensionality compression via Markov Boundary feature selection, we show how to scale the application of the models to WWW-scale corpora. The present work provides evidence that (a) a very small subset of unproven cancer treatments is sufficient to build a model to identify unproven treatments on the web; (b) unproven treatments use distinct language to market their claims and this language is learnable; and (c) through distributed parallelization and state-of-the-art feature selection, it is possible to prepare the corpora and build and apply models with large scalability.
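Scoring documents with a compact, feature-selected model inside a map step might look roughly like the sketch below; the feature weights, tokens, and threshold are invented for illustration, and the real models and Markov Boundary selection are not reproduced here:

```python
# Toy map-side scoring with a compact, feature-selected linear model
# (weights, tokens, and threshold are invented for illustration only).
from collections import Counter

WEIGHTS = {"miracle": 1.4, "cure": 1.1, "evidence": -0.9}   # hypothetical features
BIAS = -0.5

def map_score(doc_id, text):
    """Map step: emit (doc_id, flag) where the flag marks a suspect document."""
    tokens = Counter(text.lower().split())
    score = BIAS + sum(w * tokens[f] for f, w in WEIGHTS.items())
    yield doc_id, score > 0

print(list(map_score("doc1", "a miracle cure they do not want you to know")))
# [('doc1', True)]
```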
NASA Astrophysics Data System (ADS)
Park, Jae Yong; Lee, Illhwan; Ham, Juyoung; Gim, Seungo; Lee, Jong-Lam
2017-06-01
Implementing nanostructures on plastic film is indispensable for highly efficient flexible optoelectronic devices. However, due to the thermal and chemical fragility of plastic, nanostructuring approaches are limited to indirect transfer with low throughput. Here, we fabricate single-crystal AgCl nanorods by using a Cl2 plasma on Ag-coated polyimide. Cl radicals react with Ag to form AgCl nanorods. The AgCl is subjected to compressive strain at its interface with the Ag film because of the larger lattice constant of AgCl compared to Ag. To minimize strain energy, the AgCl nanorods grow in the [200] direction. The epitaxial relationship between AgCl (200) and Ag (111) induces a strain, which leads to a strain gradient at the periphery of the AgCl nanorods. The gradient causes strain-induced diffusion of Ag atoms that accelerates nanorod growth. Nanorods grown for 45 s exhibit superior haze of up to 100% and increase the luminance of an optical device by up to 33%.
High-throughput automated home-cage mesoscopic functional imaging of mouse cortex
Murphy, Timothy H.; Boyd, Jamie D.; Bolaños, Federico; Vanni, Matthieu P.; Silasi, Gergely; Haupt, Dirk; LeDue, Jeff M.
2016-01-01
Mouse head-fixed behaviour coupled with functional imaging has become a powerful technique in rodent systems neuroscience. However, training mice can be time consuming and is potentially stressful for animals. Here we report a fully automated, open source, self-initiated head-fixation system for mesoscopic functional imaging in mice. The system supports five mice at a time and requires minimal investigator intervention. Using genetically encoded calcium indicator transgenic mice, we longitudinally monitor cortical functional connectivity up to 24 h per day in >7,000 self-initiated and unsupervised imaging sessions up to 90 days. The procedure provides robust assessment of functional cortical maps on the basis of both spontaneous activity and brief sensory stimuli such as light flashes. The approach is scalable to a number of remotely controlled cages that can be assessed within the controlled conditions of dedicated animal facilities. We anticipate that home-cage brain imaging will permit flexible and chronic assessment of mesoscale cortical function. PMID:27291514
Photogenerated Lectin Sensors Produced by Thiol-Ene/Yne Photo-Click Chemistry in Aqueous Solution
Norberg, Oscar; Lee, Irene H.; Aastrup, Teodor; Yan, Mingdi; Ramström, Olof
2012-01-01
The photoinitiated radical reactions between thiols and alkenes/alkynes (thiol-ene and thiol-yne chemistry) have been applied to a functionalization methodology to produce carbohydrate-presenting surfaces for analyses of biomolecular interactions. Polymer-coated quartz surfaces were functionalized with alkenes or alkynes in a straightforward photochemical procedure utilizing perfluorophenylazide (PFPA) chemistry. The alkene/alkyne surfaces were subsequently allowed to react with carbohydrate thiols in water under UV-irradiation. The reaction can be carried out in a drop of water directly on the surface without photoinitiator and any disulfide side products were easily washed away after the functionalization process. The resulting carbohydrate-presenting surfaces were evaluated in real-time studies of protein-carbohydrate interactions using a quartz crystal microbalance flow-through system with recurring injections of selected lectins with intermediate regeneration steps using low pH buffer. The resulting methodology proved fast, efficient and scalable to high-throughput analysis formats, and the produced surfaces showed significant protein binding with expected selectivities of the lectins used in the study. PMID:22341757
FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.
Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver
2014-06-14
Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse large amounts of sequence data quickly and accurately. For end-users FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
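For context, a bare-bones FASTA structural check can be written in a few lines; the Python sketch below only illustrates the kind of rules involved (headers, allowed characters, no orphan records) and is unrelated to the FastaValidator Java API:

```python
# Minimal FASTA sanity check (illustrative only; not the FastaValidator API).
def validate_fasta(lines, alphabet=set("ACGTNacgtn")):
    """Yield (line_number, message) for structural problems."""
    has_header, seen_seq = False, False
    for i, line in enumerate(lines, 1):
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith(">"):
            if has_header and not seen_seq:
                yield i, "header without any sequence lines"
            has_header, seen_seq = True, False
        elif not has_header:
            yield i, "sequence data before first header"
        elif set(line) - alphabet:
            yield i, f"unexpected characters: {set(line) - alphabet}"
            seen_seq = True
        else:
            seen_seq = True
    if has_header and not seen_seq:
        yield len(lines), "trailing header without sequence"

record = [">seq1", "ACGTACGT", ">seq2", "ACGTXX"]
print(list(validate_fasta(record)))   # flags the X characters in seq2
```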
Convolutional networks for fast, energy-efficient neuromorphic computing
Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.
2016-01-01
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489
Convolutional networks for fast, energy-efficient neuromorphic computing.
Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S
2016-10-11
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Software for the Integration of Multiomics Experiments in Bioconductor.
Ramos, Marcel; Schiffer, Lucas; Re, Angela; Azhar, Rimsha; Basunia, Azfar; Rodriguez, Carmen; Chan, Tiffany; Chapman, Phil; Davis, Sean R; Gomez-Cabrero, David; Culhane, Aedin C; Haibe-Kains, Benjamin; Hansen, Kasper D; Kodali, Hanish; Louis, Marie S; Mer, Arvind S; Riester, Markus; Morgan, Martin; Carey, Vince; Waldron, Levi
2017-11-01
Multiomics experiments are increasingly commonplace in biomedical research and add layers of complexity to experimental design, data integration, and analysis. R and Bioconductor provide a generic framework for statistical analysis and visualization, as well as specialized data classes for a variety of high-throughput data types, but methods are lacking for integrative analysis of multiomics experiments. The MultiAssayExperiment software package, implemented in R and leveraging Bioconductor software and design principles, provides for the coordinated representation of, storage of, and operation on multiple diverse genomics data. We provide the unrestricted multiple 'omics data for each cancer tissue in The Cancer Genome Atlas as ready-to-analyze MultiAssayExperiment objects and demonstrate in these and other datasets how the software simplifies data representation, statistical analysis, and visualization. The MultiAssayExperiment Bioconductor package reduces major obstacles to efficient, scalable, and reproducible statistical analysis of multiomics data and enhances data science applications of multiple omics datasets. Cancer Res; 77(21); e39-42. ©2017 AACR . ©2017 American Association for Cancer Research.
Application of Nexus copy number software for CNV detection and analysis.
Darvishi, Katayoon
2010-04-01
Among human structural genomic variation, copy number variants (CNVs) are the most frequently known component, comprised of gains/losses of DNA segments that are generally 1 kb in length or longer. Array-based comparative genomic hybridization (aCGH) has emerged as a powerful tool for detecting genomic copy number variants (CNVs). With the rapid increase in the density of array technology and with the adaptation of new high-throughput technology, a reliable and computationally scalable method for accurate mapping of recurring DNA copy number aberrations has become a main focus in research. Here we introduce Nexus Copy Number software, a platform-independent tool, to analyze the output files of all types of commercial and custom-made comparative genomic hybridization (CGH) and single-nucleotide polymorphism (SNP) arrays, such as those manufactured by Affymetrix, Agilent Technologies, Illumina, and Roche NimbleGen. It also supports data generated by various array image-analysis software tools such as GenePix, ImaGene, and BlueFuse. (c) 2010 by John Wiley & Sons, Inc.
Verhaeghe, Tom; De Winter, Karel; Berland, Magali; De Vreese, Rob; D'hooghe, Matthias; Offmann, Bernard; Desmet, Tom
2016-03-04
Despite the growing importance of prebiotics in nutrition and gastroenterology, their structural variety is currently still very limited. The lack of straightforward procedures to obtain new products in sufficient amounts often hampers application testing and further development. Although the enzyme sucrose phosphorylase can be used to produce the rare disaccharide kojibiose (α-1,2-glucobiose) from the bulk sugars sucrose and glucose, the target compound is only a side product that is difficult to isolate. Accordingly, for this biocatalyst to become economically attractive, the formation of other glucobioses should be avoided, and therefore we applied semi-rational mutagenesis and low-throughput screening, which resulted in a double mutant (L341I_Q345S) with a selectivity of 95% for kojibiose. That way, an efficient and scalable production process with a yield of 74% could be established, and with a simple yeast treatment and crystallization step, over a hundred grams of highly pure kojibiose (>99.5%) was obtained.
Agelastos, Anthony; Allan, Benjamin; Brandt, Jim; ...
2016-05-18
A detailed understanding of HPC applications' resource needs and their complex interactions with each other and with HPC platform resources is critical to achieving scalability and performance. Such understanding has been difficult to achieve because typical application profiling tools do not capture the behaviors of codes under the potentially wide spectrum of actual production conditions and because typical monitoring tools do not capture system resource usage information with high enough fidelity to gain sufficient insight into application performance and demands. In this paper we present both system and application profiling results based on data obtained through synchronized system-wide monitoring on a production HPC cluster at Sandia National Laboratories (SNL). We demonstrate analytic and visualization techniques that we are using to characterize application and system resource usage under production conditions for better understanding of application resource needs. Furthermore, our goals are to improve application performance (through understanding application-to-resource mapping and system throughput) and to ensure that future system capabilities match their intended workloads.
High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing
Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting
2015-01-01
Computer-aided diagnosis of histopathological images usually requires examining all cells for an accurate diagnosis. Traditional computational methods may have efficiency issues when performing such cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate the proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method achieves promising accuracy and running time when searching among half a million cells. PMID:26599156
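The hashing component described above is only summarized in the abstract; the following sketch shows the generic locality-sensitive-hashing idea of bucketing cell feature vectors by a random-hyperplane sign code, so a query is compared against a small candidate set rather than the full half-million-cell database. The feature dimensions and data are made up, and this is not necessarily the specific hashing scheme of Zhang et al.

```python
# Generic random-hyperplane (sign) hashing for fast retrieval of similar cells.
# Not the paper's exact scheme; dimensions and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim, bits = 32, 16
planes = rng.normal(size=(bits, dim))          # random hyperplanes for sign hashing

def hash_code(x):
    """16-bit sign signature of a feature vector."""
    return bytes((planes @ x > 0).astype(np.uint8))

# mock database of cell feature vectors (stand-in for the half-million-cell set)
database = rng.normal(size=(100_000, dim))
codes = (database @ planes.T > 0).astype(np.uint8)
buckets = {}
for idx, code in enumerate(codes):
    buckets.setdefault(bytes(code), []).append(idx)

query = database[42] + 0.01 * rng.normal(size=dim)   # slightly perturbed copy of cell 42
candidates = buckets.get(hash_code(query), [])
print(len(candidates), 42 in candidates)             # small candidate set, usually contains 42
```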
Optofluidic fabrication for 3D-shaped particles
NASA Astrophysics Data System (ADS)
Paulsen, Kevin S.; di Carlo, Dino; Chung, Aram J.
2015-04-01
Complex three-dimensional (3D)-shaped particles could play unique roles in biotechnology, structural mechanics and self-assembly. Current methods of fabricating 3D-shaped particles such as 3D printing, injection moulding or photolithography are limited by low resolution, low throughput or complicated and expensive procedures. Here, we present a novel method called optofluidic fabrication for the generation of complex 3D-shaped polymer particles based on two coupled processes: inertial flow shaping and ultraviolet (UV) light polymerization. Pillars within fluidic platforms are used to deterministically deform photosensitive precursor fluid streams. The channels are then illuminated with patterned UV light to polymerize the photosensitive fluid, creating particles with multi-scale 3D geometries. The fundamental advantages of optofluidic fabrication include high resolution, multi-scalability, dynamic tunability, simple operation and great potential for bulk fabrication with full automation. Through different combinations of pillar configurations, flow rates and UV light patterns, an infinite set of 3D-shaped particles is available, and a variety are demonstrated.
High voltage fragmentation of composites from secondary raw materials - Potential and limitations.
Leißner, T; Hamann, D; Wuschke, L; Jäckel, H-G; Peuker, U A
2018-04-01
The comminution of composites for liberation of valuable components is a costly and energy-intensive process within the recycling of spent products. It is therefore continuously studied and optimized. In addition to conventional mechanical comminution, innovative new principles for size reduction have been developed. One is the use of high voltage (HV) pulses, a technology known to liberate material selectively along phase boundaries. This technology offers the advantage of targeted liberation, preventing overgrinding of the material and thus improving overall processing as well as product quality. In this study, the high voltage fragmentation of three different non-brittle composites (galvanized plastics, carbon fibre composites, electrode foils from Li-ion batteries) was investigated. The influence of pulse rate, number of pulses and filling level on the liberation and efficiency of comminution is discussed. Using the guideline VDI 2225, HV fragmentation is compared to conventional mechanical comminution with respect to numerous criteria such as cost, throughput, energy consumption, availability and scalability. It was found that at the current state of development, HV fragmentation cannot compete with mechanical comminution beyond laboratory scale. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are needed, as are efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study comparing the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
High-throughput NGL electron-beam direct-write lithography system
NASA Astrophysics Data System (ADS)
Parker, N. William; Brodie, Alan D.; McCoy, John H.
2000-07-01
Electron beam lithography systems have historically had low throughput. The only practical solution to this limitation is an approach using many beams writing simultaneously. For single-column multi-beam systems, including projection optics (SCALPEL and PREVAIL) and blanked aperture arrays, throughput and resolution are limited by space-charge effects. Multi-beam micro-column (one beam per column) systems are limited by the need for low-voltage operation, electrical connection density and fabrication complexities. In this paper, we discuss a new multi-beam concept employing multiple columns, each with multiple beams, to generate a very large total number of parallel writing beams. This overcomes the limitations of space-charge interactions and low-voltage operation. We also discuss a rationale leading to the optimum number of columns and beams per column. Using this approach we show how production throughputs of ≥60 wafers per hour can be achieved at CDs
Ganna, Andrea; Lee, Donghwan; Ingelsson, Erik; Pawitan, Yudi
2015-07-01
It is common and advised practice in biomedical research to validate experimental or observational findings in a population different from the one in which the findings were initially assessed. This practice increases the generalizability of the results and decreases the likelihood of reporting false-positive findings. Validation becomes critical when dealing with high-throughput experiments, where the large number of tests increases the chance of observing false-positive results. In this article, we review common approaches to determine statistical thresholds for validation and describe the factors influencing the proportion of significant findings from a 'training' sample that are replicated in a 'validation' sample. We refer to this proportion as the rediscovery rate (RDR). In high-throughput studies, the RDR is a function of false-positive rate and power in both the training and validation samples. We illustrate the application of the RDR using simulated data and real data examples from metabolomics experiments. We further describe an online tool to calculate the RDR using t-statistics. We foresee two main applications. First, if the validation study has not yet been collected, the RDR can be used to decide the optimal combination between the proportion of findings taken to validation and the size of the validation study. Secondly, if a validation study has already been done, the RDR estimated using the training data can be compared with the observed RDR from the validation data; hence, the success of the validation study can be assessed. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
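As a rough illustration of how the RDR depends on false-positive rate and power in both samples, the following sketch computes an expected RDR under the simplifying assumptions of independent tests and a known fraction of true signals; it is not necessarily the exact formula behind the authors' online tool.

```python
# Simplified illustration of a rediscovery-rate (RDR) calculation.
# Assumes independent tests and known power/alpha in both samples;
# this is not necessarily the exact formula used by Ganna et al.
def expected_rdr(pi1, alpha_train, power_train, alpha_valid, power_valid):
    """Expected fraction of training-significant findings that replicate."""
    true_hits = pi1 * power_train          # true signals declared significant in training
    false_hits = (1.0 - pi1) * alpha_train # false positives declared significant in training
    replicated = true_hits * power_valid + false_hits * alpha_valid
    return replicated / (true_hits + false_hits)

# e.g. 1% true signals, 5% FPR and 80% power in training, 60% power in validation
print(round(expected_rdr(0.01, 0.05, 0.80, 0.05, 0.60), 3))
```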
EUV patterning using CAR or MOX photoresist at low dose exposure for sub 36nm pitch
NASA Astrophysics Data System (ADS)
Thibaut, Sophie; Raley, Angélique; Lazarrino, Frederic; Mao, Ming; De Simone, Danilo; Piumi, Daniele; Barla, Kathy; Ko, Akiteru; Metz, Andrew; Kumar, Kaushik; Biolsi, Peter
2018-04-01
The semiconductor industry has been pushing the limits of scalability by combining 193nm immersion lithography with multi-patterning techniques for several years. These integrations have been offered in a wide variety of options to lower their cost, but they retain their inherent variability and process complexity. EUV lithography offers a much desired path that allows direct printing of lines and spaces at 36nm pitch and below, and effectively addresses issues like cycle time, intra-level overlay and the mask count costs associated with multi-patterning. However, it also brings its own set of challenges. One of the major barriers to high-volume manufacturing (HVM) implementation has been reaching the 250W exposure power required for adequate throughput [1]. Enabling patterning using a lower-dose resist could help move us closer to the HVM throughput targets, assuming the required performance for roughness and pattern transfer can be met. As plasma etching is known to reduce line edge roughness on features printed with 193nm lithography [2], we investigate in this paper the level of roughness that can be achieved on EUV photoresist exposed at a lower dose through etch process optimization into a typical back-end-of-line film stack. We will study 16nm lines printed at 32 and 34nm pitch. MOX and CAR photoresist performance will be compared. We will review step-by-step etch chemistry development to reach adequate selectivity and roughness reduction to successfully pattern the target layer.
Patil, Shilpa A; Chandrasekaran, E V; Matta, Khushi L; Parikh, Abhirath; Tzanakakis, Emmanuel S; Neelamegham, Sriram
2012-06-15
Glycosyltransferases (glycoTs) catalyze the transfer of monosaccharides from nucleotide-sugars to carbohydrate-, lipid-, and protein-based acceptors. We examined strategies to scale down and increase the throughput of glycoT enzymatic assays because traditional methods require large reaction volumes and complex chromatography. The approaches tested used (i) microarray pin printing, an appropriate method when glycoT activity was high; (ii) microwells and microcentrifuge tubes, a suitable method for studies with cell lysates when enzyme activity was moderate; and (iii) C(18) pipette tips and solvent extraction, a method that enriched the reaction product when the extent of reaction was low. In all cases, reverse-phase thin layer chromatography (RP-TLC) coupled with phosphorimaging quantified the reaction rate. Studies with mouse embryonic stem cells (mESCs) demonstrated an increase in overall β(1,3)galactosyltransferase and α(2,3)sialyltransferase activity and a decrease in α(1,3)fucosyltransferase activity when these cells differentiate toward cardiomyocytes. Enzymatic and lectin binding data suggest a transition from Lewis(x)-type structures in mESCs to sialylated Galβ1,3GalNAc-type glycans on differentiation, with more prominent changes in enzyme activity occurring at later stages when embryoid bodies differentiated toward cardiomyocytes. Overall, simple, rapid, quantitative, and scalable glycoT activity analysis methods are presented. These use a range of natural and synthetic acceptors for the analysis of complex biological specimens that have limited availability. Copyright © 2012 Elsevier Inc. All rights reserved.
Evaluation of concurrent priority queue algorithms. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q.
1991-02-01
The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues, such as the vertex cover problem and the single source shortest path problem, are tested.
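For context, the sketch below shows the baseline design that fine-grained structures such as the parallel Fibonacci heap and the concurrent priority pool aim to outperform: a binary heap serialized by a single global lock. It is a minimal illustration, not code from the thesis.

```python
# Baseline coarse-grained concurrent priority queue (single global lock),
# the simple design whose contention the thesis's finer-grained structures
# (parallel Fibonacci heap, concurrent priority pool) aim to avoid.
import heapq
import threading

class LockedHeap:
    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def insert(self, priority, item):
        with self._lock:                      # one lock serializes all access
            heapq.heappush(self._heap, (priority, item))

    def delete_min(self):
        with self._lock:
            return heapq.heappop(self._heap) if self._heap else None

q = LockedHeap()
threads = [threading.Thread(target=lambda i=i: q.insert(i, f"task{i}")) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(q.delete_min())   # -> (0, 'task0')
```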
Wang, Sibo; Ren, Zheng; Guo, Yanbing; ...
2016-03-21
We report that the scalable three-dimensional (3-D) integration of functional nanostructures into applicable platforms represents a promising technology to meet the ever-increasing demands of fabricating high-performance devices featuring cost-effectiveness, structural sophistication and multi-functionality. Such an integration process generally involves a diverse array of nanostructural entities (nano-entities) consisting of dissimilar nanoscale building blocks such as nanoparticles, nanowires, and nanofilms made of metals, ceramics, or polymers. Various synthetic strategies and integration methods have enabled the successful assembly of both structurally and functionally tailored nano-arrays into a unique class of monolithic devices. The performance of nano-array based monolithic devices is dictated by a few important factors such as substrate material selection, nanostructure composition and nano-architecture geometry. Therefore, rational material selection and nano-entity manipulation during the nano-array integration process, aiming to exploit the advantageous characteristics of nanostructures and their ensembles, are critical steps towards bridging the design of nanostructure-integrated monolithic devices with various practical applications. In this article, we highlight the latest research progress of two-dimensional (2-D) and 3-D metal and metal oxide based nanostructural integrations into prototype devices with ultrahigh efficiency, good robustness and improved functionality. Lastly, selected examples of nano-array integration, scalable nanomanufacturing and representative monolithic devices such as catalytic converters, sensors and batteries are used as connecting dots to display a roadmap from hierarchical nanostructural assembly to practical nanotechnology implications ranging from energy and environmental to chemical and biotechnology areas.
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis
2012-01-01
Background: Multiplexing has become the major limitation of next-generation sequencing (NGS) in its application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Results: Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously analyzed by combining 4 unique forward barcodes with 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured for the two clinical groups. The strong correlation across different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. Conclusions: By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be analyzed in parallel, so that high-throughput sequencing economically meets the needs of samples with low sequencing-throughput demands. PMID:22276739
Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.
Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong
2012-01-25
Multiplexing has become the major limitation of next-generation sequencing (NGS) in its application to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economic and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously analyzed by combining 4 unique forward barcodes with 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them were assigned to both barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured for the two clinical groups. The strong correlation across different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples can be analyzed in parallel, so that high-throughput sequencing economically meets the needs of samples with low sequencing-throughput demands.
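The pair-barcode idea (4 forward × 8 reverse barcodes addressing 32 libraries) can be illustrated with a small demultiplexing sketch; the barcode sequences and read layout below are hypothetical, not those used in the PBS study.

```python
# Illustration of pair-barcode demultiplexing: a read is assigned to a library
# by the combination of its forward and reverse barcode. Barcode sequences and
# read layout here are hypothetical, not those used in the PBS study.
FWD = {"ACGT": "F1", "TGCA": "F2", "GATC": "F3", "CTAG": "F4"}
REV = {"AAGG": "R1", "CCTT": "R2", "GGAA": "R3", "TTCC": "R4",
       "AGAG": "R5", "TCTC": "R6", "GCGC": "R7", "CGCG": "R8"}

def assign_library(read, fwd_len=4, rev_len=4):
    """Return the library ID (e.g. 'F2-R7') or None if either barcode is unknown."""
    f = FWD.get(read[:fwd_len])
    r = REV.get(read[-rev_len:])
    return f"{f}-{r}" if f and r else None   # 4 x 8 = 32 possible libraries

print(assign_library("TGCA" + "TTAGCCGAAGTCAGT" + "GCGC"))  # -> 'F2-R7'
```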
Shirotani, Keiro; Futakawa, Satoshi; Nara, Kiyomitsu; Hoshi, Kyoka; Saito, Toshie; Tohyama, Yuriko; Kitazume, Shinobu; Yuasa, Tatsuhiko; Miyajima, Masakazu; Arai, Hajime; Kuno, Atsushi; Narimatsu, Hisashi; Hashimoto, Yasuhiro
2011-01-01
We have established high-throughput lectin-antibody ELISAs to measure different glycans on transferrin (Tf) in cerebrospinal fluid (CSF) using lectins and an anti-transferrin antibody (TfAb). Lectin blot and precipitation analysis of CSF revealed that PVL (Psathyrella velutina lectin) bound unique N-acetylglucosamine-terminated N-glycans on “CSF-type” Tf, whereas SSA (Sambucus sieboldiana agglutinin) bound α2,6-N-acetylneuraminic acid-terminated N-glycans on “serum-type” Tf. PVL-TfAb ELISA of 0.5 μL CSF samples detected “CSF-type” Tf but not “serum-type” Tf, whereas SSA-TfAb ELISA detected “serum-type” Tf but not “CSF-type” Tf, demonstrating the specificity of the lectin-TfAb ELISAs. In idiopathic normal pressure hydrocephalus (iNPH), a senile dementia associated with ventriculomegaly, amounts of SSA-reactive Tf were significantly higher than in non-iNPH patients, indicating that Tf glycan analysis by the high-throughput lectin-TfAb ELISAs could become a practical diagnostic tool for iNPH. The lectin-antibody ELISAs of CSF proteins might also be useful for diagnosis of other neurological diseases. PMID:21876827
Breast cancer diagnosis using spatial light interference microscopy
NASA Astrophysics Data System (ADS)
Majeed, Hassaan; Kandel, Mikhail E.; Han, Kevin; Luo, Zelun; Macias, Virgilia; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel
2015-11-01
The standard practice in histopathology of breast cancers is to examine a hematoxylin and eosin (H&E) stained tissue biopsy under a microscope to diagnose whether a lesion is benign or malignant. This determination is made based on a manual, qualitative inspection, making it subject to investigator bias and resulting in low throughput. Hence, a quantitative, label-free, and high-throughput diagnosis method is highly desirable. We present here preliminary results showing the potential of quantitative phase imaging for breast cancer screening and for aiding differential diagnosis. We generated phase maps of unstained breast tissue biopsies using spatial light interference microscopy (SLIM). As a first step toward quantitative diagnosis based on SLIM, we carried out a qualitative evaluation of our label-free images. These images were shown to two pathologists, who classified each case as either benign or malignant. This diagnosis was then compared against the diagnosis of the same two pathologists on corresponding H&E stained tissue images, and the number of agreements was counted. The agreement between SLIM- and H&E-based diagnosis was 88% for the first pathologist and 87% for the second. Our results demonstrate the potential and promise of SLIM for quantitative, label-free, and high-throughput diagnosis.
Caraus, Iurie; Alsuwailem, Abdulaziz A; Nadon, Robert; Makarenkov, Vladimir
2015-11-01
Significant efforts have been made recently to improve data throughput and data quality in screening technologies related to drug design. The modern pharmaceutical industry relies heavily on high-throughput screening (HTS) and high-content screening (HCS) technologies, which include small molecule, complementary DNA (cDNA) and RNA interference (RNAi) types of screening. Data generated by these screening technologies are subject to several environmental and procedural systematic biases, which introduce errors into the hit identification process. We first review systematic biases typical of HTS and HCS screens. We highlight that study design issues and the way in which data are generated are crucial for providing unbiased screening results. Considering various data sets, including the publicly available ChemBank data, we assess the rates of systematic bias in experimental HTS by using plate-specific and assay-specific error detection tests. We describe main data normalization and correction techniques and introduce a general data preprocessing protocol. This protocol can be recommended for academic and industrial researchers involved in the analysis of current or next-generation HTS data. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
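One of the standard normalization techniques alluded to above is a robust per-plate z-score, which removes plate-level offsets before hit calling; the sketch below illustrates only this single step, not the full preprocessing protocol proposed by the authors.

```python
# One standard HTS plate normalization: robust z-score per plate (median/MAD),
# which reduces plate-to-plate systematic offsets before hit identification.
# A single common technique, not the full protocol described by Caraus et al.
import numpy as np

def robust_zscore(plate):
    """plate: 2-D array of raw well measurements for one plate."""
    med = np.median(plate)
    mad = np.median(np.abs(plate - med)) * 1.4826  # scaled MAD, consistent with sigma for normal data
    return (plate - med) / mad

rng = np.random.default_rng(0)
plate = rng.normal(1000, 50, size=(8, 12))  # mock 96-well plate readout
plate[3, 7] += 400                          # one strong "hit"
z = robust_zscore(plate)
print(np.argwhere(z > 3))                   # wells flagged as candidate hits
```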
High-throughput cell analysis and sorting technologies for clinical diagnostics and therapeutics
NASA Astrophysics Data System (ADS)
Leary, James F.; Reece, Lisa M.; Szaniszlo, Peter; Prow, Tarl W.; Wang, Nan
2001-05-01
A number of theoretical and practical limits of high-speed flow cytometry/cell sorting are important for clinical diagnostics and therapeutics. Three applications include: (1) stem cell isolation with tumor purging for minimal residual disease monitoring and treatment, (2) identification and isolation of human fetal cells from maternal blood for prenatal diagnostics and in-vitro therapeutics, and (3) high-speed library screening for recombinant vaccine production against unknown pathogens.
Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures
NASA Astrophysics Data System (ADS)
Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.
2016-12-01
The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations that boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which offers several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
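The core per-iteration computation being parallelized is the standard k-means update; the serial NumPy sketch below shows that update in vectorized form and is only an illustration, not the authors' MPI/manycore implementation.

```python
# Core k-means update in vectorized form, the per-iteration math the paper
# parallelizes across nodes and wide SIMD lanes; this serial NumPy sketch
# is only an illustration, not the authors' manycore code.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # distance of every point to every center, then nearest-center labels
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each center as the mean of its assigned points
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

X = np.random.default_rng(1).normal(size=(1000, 4))
centers, labels = kmeans(X, k=8)
print(centers.shape, np.bincount(labels))
```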
Microplate Bioassay for Determining Substrate Selectivity of "Candida rugosa" Lipase
ERIC Educational Resources Information Center
Wang, Shi-zhen; Fang, Bai-shan
2012-01-01
Substrate selectivity of "Candida rugosa" lipase was tested with "p"-nitrophenyl esters of increasing chain length (C1, C7, C15) using a high-throughput screening method. A fast and easy 96-well microplate bioassay was developed to help students learn and practice biotechnological specificity screening. The…
Bioinformatics: Current Practice and Future Challenges for Life Science Education
ERIC Educational Resources Information Center
Hack, Catherine; Kendall, Gary
2005-01-01
It is widely predicted that the application of high-throughput technologies to the quantification and identification of biological molecules will cause a paradigm shift in the life sciences. However, if the biosciences are to evolve from a predominantly descriptive discipline to an information science, practitioners will require enhanced skills in…
Efficient mouse genome engineering by CRISPR-EZ technology.
Modzelewski, Andrew J; Chen, Sean; Willis, Brandon J; Lloyd, K C Kent; Wood, Joshua A; He, Lin
2018-06-01
CRISPR/Cas9 technology has transformed mouse genome editing with unprecedented precision, efficiency, and ease; however, the current practice of microinjecting CRISPR reagents into pronuclear-stage embryos remains rate-limiting. We thus developed CRISPR ribonucleoprotein (RNP) electroporation of zygotes (CRISPR-EZ), an electroporation-based technology that outperforms pronuclear and cytoplasmic microinjection in efficiency, simplicity, cost, and throughput. In C57BL/6J and C57BL/6N mouse strains, CRISPR-EZ achieves 100% delivery of Cas9/single-guide RNA (sgRNA) RNPs, facilitating indel mutations (insertions or deletions), exon deletions, point mutations, and small insertions. In a side-by-side comparison in the high-throughput KnockOut Mouse Project (KOMP) pipeline, CRISPR-EZ consistently outperformed microinjection. Here, we provide an optimized protocol covering sgRNA synthesis, embryo collection, RNP electroporation, mouse generation, and genotyping strategies. Using CRISPR-EZ, a graduate-level researcher with basic embryo-manipulation skills can obtain genetically modified mice in 6 weeks. Altogether, CRISPR-EZ is a simple, economic, efficient, and high-throughput technology that is potentially applicable to other mammalian species.
Feasibility of Using Distributed Wireless Mesh Networks for Medical Emergency Response
Braunstein, Brian; Trimble, Troy; Mishra, Rajesh; Manoj, B. S.; Rao, Ramesh; Lenert, Leslie
2006-01-01
Achieving reliable, efficient data communications networks at a disaster site is a difficult task. Network paradigms such as Wireless Mesh Network (WMN) architectures are one exemplar for providing high-bandwidth, scalable data communication for medical emergency response activity. WMNs are created by self-organized wireless nodes that use multi-hop wireless relaying for data transfer. In this paper, we describe our experience using a mesh network architecture we developed for homeland security and medical emergency applications. We briefly discuss the architecture and present traffic behavioral observations made with a client-server medical emergency application tested during a large-scale homeland security drill. We present our traffic measurements, describe lessons learned, and offer functional requirements (based on field testing) for practical 802.11 mesh medical emergency response networks. With certain caveats, the results suggest that 802.11 mesh networks are feasible and scalable systems for field communications in disaster settings. PMID:17238308
Facile approach to prepare Pt decorated SWNT/graphene hybrid catalytic ink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayavan, Sundar (E-mail: sundarmayavan@cecri.res.in; Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 305-701); Mandalam, Aditya
Highlights: • Pt NPs were in situ synthesized onto a CNT–graphene support in aqueous solution. • The as-prepared material was used directly as a catalyst ink without further treatment. • The catalyst ink is active toward methanol oxidation. • This approach realizes both scalable and greener production of hybrid catalysts. - Abstract: Platinum nanoparticles were in situ synthesized onto a hybrid support involving graphene and single-walled carbon nanotubes in aqueous solution. We investigate the reduction of graphene oxide and the platinum nanoparticle functionalization on the hybrid support by X-ray photoelectron spectroscopy, Raman spectroscopy, X-ray diffraction, scanning electron microscopy and transmission electron microscopy. The as-prepared platinum on the hybrid support was used directly as a catalyst ink without further treatment and is active toward methanol oxidation. This work realizes both scalable and greener production of highly efficient hybrid catalysts, and would be valuable for practical applications of graphene-based fuel cell catalysts.
NASA Astrophysics Data System (ADS)
Nam, Young Jin; Oh, Dae Yang; Jung, Sung Hoo; Jung, Yoon Seok
2018-01-01
Owing to their potential for greater safety, higher energy density, and scalable fabrication, bulk-type all-solid-state lithium-ion batteries (ASLBs) employing deformable sulfide superionic conductors are considered highly promising for applications in battery electric vehicles. While fabrication of sheet-type electrodes is imperative from a practical point of view, reports on relevant research are scarce. This might be attributable to issues that complicate the slurry-based fabrication process and/or issues with ionic contacts and percolation. In this work, we systematically investigate the electrochemical performance of conventional dry-mixed electrodes and wet-slurry fabricated electrodes for ASLBs, varying the fractions of solid electrolytes and the mass loading. These results highlight the need to develop well-designed electrodes with better ionic contacts and to improve the ionic conductivity of solid electrolytes. As a scalable proof-of-concept to achieve better ionic contacts, a premixing process for active materials and solid electrolytes is demonstrated to significantly improve electrochemical performance. Pouch-type 80 × 60 mm2 all-solid-state LiNi0·6Co0·2Mn0·2O2/graphite full-cells fabricated by the slurry process show high cell-based energy density (184 W h kg-1 and 432 W h L-1). For the first time, their excellent safety is also demonstrated by simple tests (cutting with scissors and heating at 110 °C).
OptoDyCE: Automated system for high-throughput all-optical dynamic cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Klimas, Aleksandra; Yu, Jinzhu; Ambrosi, Christina M.; Williams, John C.; Bien, Harold; Entcheva, Emilia
2016-02-01
In the last two decades, <30% of drug withdrawals from the market were due to cardiac toxicity, where unintended interactions with ion channels disrupt the heart's normal electrical function. Consequently, all new drugs must undergo preclinical testing for cardiac liability, adding to an already expensive and lengthy process. Recognition that proarrhythmic effects often result from drug action on multiple ion channels demonstrates a need for integrative and comprehensive measurements. Additionally, patient-specific therapies relying on emerging technologies employing stem-cell-derived cardiomyocytes (e.g. induced pluripotent stem-cell-derived cardiomyocytes, iPSC-CMs) require better screening methods to become practical. However, a high-throughput, cost-effective approach for cellular cardiac electrophysiology has not been feasible. Optical techniques for manipulation and recording provide a contactless means of dynamic, high-throughput testing of cells and tissues. Here, we consider the requirements for all-optical electrophysiology for drug testing, and we implement and validate OptoDyCE, a fully automated system for all-optical cardiac electrophysiology. We demonstrate its high-throughput capabilities using multicellular samples in 96-well format by combining optogenetic actuation with simultaneous fast high-resolution optical sensing of voltage or intracellular calcium. The system can also be implemented using iPSC-CMs and other cell types by delivery of optogenetic drivers, or through the modular use of dedicated light-sensitive somatic cells in conjunction with non-modified cells. OptoDyCE provides a truly modular and dynamic screening system, capable of fully automated acquisition of high-content information integral for improved discovery and development of new drugs and biologics, as well as providing a means of better understanding of electrical disturbances in the heart.
A Centrifugal Contactor Design to Facilitate Remote Replacement
DOE Office of Scientific and Technical Information (OSTI.GOV)
David H. Meikrantz; Jack. D. Law; Troy G. Garn
2011-03-01
Advanced designs of nuclear fuel recycling and radioactive waste treatment plants are expected to include more ambitious goals for solvent extraction based separations, including higher separation efficiency, high-level waste minimization, and a greater focus on continuous processes to minimize cost and footprint. Therefore, Annular Centrifugal Contactors (ACCs) are destined to play a more important role in such future processing schemes. This work continues the development of remote designs for ACCs that can process the large throughputs needed for future nuclear fuel recycling and radioactive waste treatment plants. A three-stage, 12.5 cm diameter rotor module has been constructed and is being evaluated for use in highly radioactive environments. This prototype assembly employs three standard CINC V-05 clean-in-place (CIP) units modified for remote service and replacement via new methods of connection for solution inlets, outlets, drain and CIP. Hydraulic testing and functional checks were successfully conducted, and the prototype was then evaluated for remote handling and maintenance. Removal and replacement of the center-position V-05R contactor in the three-stage assembly was demonstrated using an overhead rail-mounted PaR manipulator. Initial evaluation indicates a viable new design for interconnecting and cleaning individual stages while retaining the benefits of commercially reliable ACC equipment. Replacement of a single stage via remote manipulators and tools is estimated to take about 30 minutes, perhaps fast enough to support a contactor change without loss of process equilibrium. The design presented in this work is scalable to commercial ACC models from V-05 to V-20 with total throughput rates ranging from 20 to 650 liters per minute.
Bhambure, R; Rathore, A S
2013-01-01
This article describes the development of a high-throughput process development (HTPD) platform for developing chromatography steps. An assessment of the platform as a tool for establishing the "characterization space" for an ion exchange chromatography step has been performed using design of experiments. Case studies involving the use of a biotech therapeutic, granulocyte colony-stimulating factor, have been used to demonstrate the performance of the platform. We discuss the various challenges that arise when working at such small volumes, along with the solutions that we propose to alleviate them and make the HTPD data suitable for empirical modeling. Further, we have also validated the scalability of this platform by comparing the results from the HTPD platform (2 and 6 μL resin volumes) against those obtained at the traditional laboratory scale (resin volume, 0.5 mL). We find that, after integration of the proposed correction factors, the HTPD platform is capable of performing process optimization studies at 170-fold higher productivity. The platform is capable of providing a semi-quantitative assessment of the effects of the various input parameters under consideration. We think that a platform such as the one presented is an excellent tool for examining the "characterization space" and reducing the extensive experimentation at the traditional lab scale that is otherwise required for establishing the "design space." Thus, this platform will specifically aid in the successful implementation of quality by design in biotech process development. This is especially significant in view of the constraints with respect to time and resources that the biopharma industry faces today. Copyright © 2013 American Institute of Chemical Engineers.
Integration of multi-interface conversion channel using FPGA for modular photonic network
NASA Astrophysics Data System (ADS)
Janicki, Tomasz; Pozniak, Krzysztof T.; Romaniuk, Ryszard S.
2010-09-01
The article discusses the integration of different types of interfaces with FPGA circuits using a reconfigurable communication platform. The solution has been implemented in practice in a single node of a distributed measurement system. The construction of the communication platform is presented, with selected hardware modules described in VHDL and implemented in FPGA circuits. The graphical user interface (GUI) that allows a user to control the operation of the system is also described. In the final part of the article, selected practical solutions are presented. The whole measurement system resides on a multi-gigabit optical network. The optical network construction is highly modular, reconfigurable and scalable.
Choi, Gihoon; Hassett, Daniel J; Choi, Seokheun
2015-06-21
There is a large global effort to improve microbial fuel cell (MFC) techniques and advance their translational potential toward practical, real-world applications. Significant boosts in MFC performance can be achieved with the development of new techniques in synthetic biology that can regulate microbial metabolic pathways or control their gene expression. For these new directions, a high-throughput and rapid screening tool for microbial biopower production is needed. In this work, a 48-well, paper-based sensing platform was developed for the high-throughput and rapid characterization of the electricity-producing capability of microbes. 48 spatially distinct wells of a sensor array were prepared by patterning 48 hydrophilic reservoirs on paper with hydrophobic wax boundaries. This paper-based platform exploited the ability of paper to quickly wick fluid and promoted bacterial attachment to the anode pads, resulting in instant current generation upon loading of the bacterial inoculum. We validated the utility of our MFC array by studying how strategic genetic modifications impacted the electrochemical activity of various Pseudomonas aeruginosa mutant strains. Within just 20 minutes, we successfully determined the electricity generation capacity of eight isogenic mutants of P. aeruginosa. These efforts demonstrate that our MFC array displays highly comparable performance characteristics and identifies genes in P. aeruginosa that can trigger a higher power density.
Oh, Kwang Seok; Woo, Seong Ihl
2011-01-01
A chemiluminescence-based analyzer of NOx gas species has been applied for high-throughput screening of a library of catalytic materials. The applicability of the commercial NOx analyzer as a rapid screening tool was evaluated using the selective catalytic reduction of NO gas. A library of 60 binary alloys composed of Pt and Co, Zr, La, Ce, Fe or W on an Al2O3 substrate was tested for the efficiency of NOx removal using a home-built 64-channel parallel and sequential tubular reactor. The NOx concentrations measured by the NOx analyzer agreed well with the results obtained using micro gas chromatography for a reference catalyst consisting of 1 wt% Pt on γ-Al2O3. Most alloys showed high efficiency at 275 °C, which is typical of Pt-based catalysts for selective catalytic reduction of NO. The screening with the NOx analyzer allowed us to select Pt-Ce(X) (X=1–3) and Pt–Fe(2) as the optimal catalysts for NOx removal: 73% NOx conversion was achieved with the Pt–Fe(2) alloy, which was much better than the results for the reference catalyst and the other library alloys. This study demonstrates a sequential high-throughput method for the practical evaluation of catalysts for the selective reduction of NO. PMID:27877438
High-throughput Molecular Simulations of MOFs for CO2 Separation: Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Erucar, Ilknur; Keskin, Seda
2018-02-01
Metal organic frameworks (MOFs) have emerged as great alternatives to traditional nanoporous materials for CO2 separation applications. MOFs are porous materials that are formed by self-assembly of transition metals and organic ligands. The most important advantage of MOFs over well-known porous materials is the possibility of generating multiple materials with varying structural properties and chemical functionalities by changing the combination of metal centers and organic linkers during synthesis. This leads to a large diversity of materials with various pore sizes and shapes that can be efficiently used for CO2 separations. Since the number of synthesized MOFs has already reached several thousand, experimental investigation of each MOF at the lab scale is not practical. High-throughput computational screening of MOFs is a great opportunity to identify the best materials for CO2 separation and to gain molecular-level insights into structure-performance relationships. This type of knowledge can be used to design new materials with the desired structural features that can lead to extraordinarily high CO2 selectivities. In this mini-review, we focus on developments in high-throughput molecular simulations of MOFs for CO2 separations. After reviewing the current studies on this topic, we discuss the opportunities and challenges in the field and address potential future developments.
Lin, Haishuang; Li, Qiang; Wang, Ou; Rauch, Jack; Harm, Braden; Viljoen, Hendrik J; Zhang, Chi; Van Wyk, Erika; Zhang, Chi; Lei, Yuguo
2018-05-11
Adoptive immunotherapy is a highly effective strategy for treating many human cancers, such as melanoma, cervical cancer, lymphoma, and leukemia. Here, a novel cell culture technology is reported for expanding primary human T cells for adoptive immunotherapy. T cells are suspended and cultured in microscale alginate hydrogel tubes (AlgTubes) that are themselves suspended in the cell culture medium in a culture vessel. The hydrogel tubes protect cells from hydrodynamic stresses and confine the cell mass to less than 400 µm in radial diameter to ensure efficient mass transport, creating a cell-friendly microenvironment for growing T cells. This system is simple, scalable, highly efficient, defined, cost-effective, and compatible with current good manufacturing practices. Under optimized culture conditions, the AlgTubes enable culturing T cells with high cell viability, low DNA damage, high growth rate (≈320-fold expansion over 14 days), high purity (≈98% CD3+), and high yield (≈3.2 × 10⁸ cells mL⁻¹ hydrogel). All offer considerable advantages compared to current T cell culturing approaches. This new culture technology can significantly reduce the culture volume, time, and cost, while increasing production. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Transfer of Online Professional Learning to Teachers' Classroom Practice
ERIC Educational Resources Information Center
Herrington, Anthony; Herrington, Jan; Hoban, Garry; Reid, Doug
2009-01-01
Professional learning is an important process in enabling teachers to update their pedagogical knowledge and practices. The use of online technologies to support professional learning has a number of benefits in terms of flexibility and scalability. However, it is not clear how well the approach impacts on teachers' classroom practices. This…
Yamamoto, Toshio; Nagasaki, Hideki; Yonemaru, Jun-ichi; Ebana, Kaworu; Nakajima, Maiko; Shibaya, Taeko; Yano, Masahiro
2010-04-27
To create useful gene combinations in crop breeding, it is necessary to clarify the dynamics of the genome composition created by breeding practices. A large quantity of single-nucleotide polymorphism (SNP) data is required to permit discrimination of chromosome segments among modern cultivars, which are genetically related. Here, we used a high-throughput sequencer to conduct whole-genome sequencing of an elite Japanese rice cultivar, Koshihikari, which is closely related to Nipponbare, whose genome sequencing has been completed. We then designed a high-throughput typing array based on the SNP information obtained by comparing the two sequences. Finally, we applied this array to analyze historical representative rice cultivars to understand the dynamics of their genome composition. The total 5.89-Gb sequence for Koshihikari, equivalent to 15.7× the entire rice genome, was mapped using the Pseudomolecules 4.0 database for Nipponbare. The resultant Koshihikari genome sequence corresponded to 80.1% of the Nipponbare sequence and led to the identification of 67,051 SNPs. A high-throughput typing array consisting of 1917 SNP sites distributed throughout the genome was designed to genotype 151 representative Japanese cultivars that have been grown during the past 150 years. We could identify the ancestral origin of the pedigree haplotypes in 60.9% of the Koshihikari genome, as well as 18 consensus haplotype blocks inherited from traditional landraces to current improved varieties. Moreover, it was predicted that modern breeding practices have generally decreased genetic diversity. Detection of genome-wide SNPs by both a high-throughput sequencer and a typing array made it possible to evaluate the genomic composition of genetically related rice varieties. With the aid of their pedigree information, we clarified the dynamics of chromosome recombination during the historical rice breeding process. We also found several genomic regions with decreased genetic diversity, which might be caused by recent human selection in rice breeding. The definition of pedigree haplotypes by means of genome-wide SNPs will facilitate next-generation breeding of rice and other crops.
Microchip amplifier for in vitro, in vivo, and automated whole cell patch-clamp recording
Kolb, Ilya; Kodandaramaiah, Suhasa B.; Chubykin, Alexander A.; Yang, Aimei; Bear, Mark F.; Boyden, Edward S.; Forest, Craig R.
2014-01-01
Patch clamping is a gold-standard electrophysiology technique that has the temporal resolution and signal-to-noise ratio needed to report single ion channel currents, as well as the electrical activity of excitable single cells. Despite its usefulness and decades of development, the amplifiers required for patch clamping are expensive and bulky. This has limited the scalability and throughput of patch clamping for single-ion-channel and single-cell analyses. In this work, we have developed a custom patch-clamp amplifier microchip, fabricated using standard commercial silicon processes, that is capable of performing both voltage- and current-clamp measurements. A key innovation is the use of nonlinear feedback elements in the voltage-clamp amplifier circuit to convert measured currents into logarithmically encoded voltages, thereby eliminating the need for large, high-valued resistors, a factor that has limited previous attempts at integration. Benchtop characterization of the chip shows low levels of current noise [1.1 pA root mean square (rms) over 5 kHz] during voltage-clamp measurements and low levels of voltage noise (8.2 μV rms over 10 kHz) during current-clamp measurements. We demonstrate the ability of the chip to perform both current- and voltage-clamp measurements in vitro in HEK293FT cells and cultured neurons. We also demonstrate its ability to perform in vivo recordings as part of a robotic patch-clamping system. The performance of the patch-clamp amplifier microchip compares favorably with much larger commercial instrumentation, enabling benchtop commoditization, miniaturization, and scalable patch-clamp instrumentation. PMID:25429119
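The logarithmic current encoding can be illustrated numerically: a diode-like feedback element compresses several decades of current into a small voltage swing, removing the need for a giga-ohm feedback resistor. The transfer law and parameter values below are assumptions for illustration, not the actual characteristics of the Kolb et al. chip.

```python
# Numeric illustration of logarithmic current-to-voltage encoding, the idea
# behind replacing a huge feedback resistor with a nonlinear element.
# The diode-like law and parameter values are assumptions for illustration,
# not the actual transfer function of the microchip described above.
import math

V_T = 0.0257      # thermal voltage at room temperature, volts
I0 = 1e-12        # assumed saturation current, amps

def encode(i):
    """Map a measured current (A) to a logarithmically compressed voltage (V)."""
    return math.copysign(V_T * math.log1p(abs(i) / I0), i)

def decode(v):
    """Invert the encoding to recover the current."""
    return math.copysign(I0 * math.expm1(abs(v) / V_T), v)

for i in (1e-12, 1e-9, 1e-6):              # six decades of current...
    v = encode(i)
    print(f"{i:.0e} A -> {v*1e3:6.1f} mV -> {decode(v):.1e} A")
    # ...span only a few hundred mV, so no giga-ohm resistor is needed
```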
Towards High-Throughput, Simultaneous Characterization of Thermal and Thermoelectric Properties
NASA Astrophysics Data System (ADS)
Miers, Collier Stephen
The extension of thermoelectric generators to more general markets requires that the devices be affordable and practical (low $/Watt) to implement. A key challenge in this pursuit is the quick and accurate characterization of thermoelectric materials, which will allow researchers to tune and modify material properties quickly. The goal of this thesis is to design and fabricate a high-throughput characterization system for the simultaneous characterization of thermal, electrical, and thermoelectric properties of device-scale material samples. The measurement methodology presented in this thesis combines a custom-designed measurement system created specifically for high-throughput testing with a novel device structure that permits simultaneous characterization of the material properties. The measurement system is based upon the 3ω method for thermal conductivity measurements, with the addition of electrodes and voltage probes to measure the electrical conductivity and Seebeck coefficient. A device designed and optimized to permit the rapid characterization of thermoelectric materials is also presented. This structure is optimized to ensure 1D heat transfer within the sample, thus permitting rapid data analysis and fitting using a MATLAB script. Verification of the thermal portion of the system is presented using fused silica and sapphire materials for benchmarking. The fused silica samples yielded a thermal conductivity of 1.21 W/(m K), while a thermal conductivity of 31.2 W/(m K) was measured for the sapphire samples. The device and measurement system designed and developed in this thesis provide insight and serve as a foundation for the development of high-throughput, simultaneous measurement platforms.
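In the slope form of the 3ω analysis, the heater's in-phase temperature oscillation varies linearly with ln(2ω), and the thermal conductivity follows from that slope; the sketch below fits synthetic data this way. The geometry, power, data, and fitting constant are made-up values for illustration and are not taken from the thesis.

```python
# Sketch of the slope form of the 3-omega analysis: the in-phase temperature
# oscillation of a line heater varies linearly with ln(2*omega), and the thermal
# conductivity follows from that slope as k = -P / (2*pi*L*slope).
# Geometry, power, and data below are made-up values for illustration only.
import numpy as np

P = 25e-3          # heater power (W), assumed
L = 2e-3           # heater line length (m), assumed
k_true = 1.2       # pretend sample conductivity (W/(m K)) used to fake data

omega = np.logspace(2, 4, 12)                                      # angular drive frequencies
dT = (P / (np.pi * L * k_true)) * (8.0 - 0.5 * np.log(2 * omega))  # synthetic in-phase ΔT

slope = np.polyfit(np.log(2 * omega), dT, 1)[0]                    # dΔT / d ln(2ω)
k_fit = -P / (2 * np.pi * L * slope)
print(f"recovered k = {k_fit:.2f} W/(m K)")                        # ~1.20
```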
Lin, Xiaotong; Liu, Mei; Chen, Xue-wen
2009-04-29
Protein-protein interactions play vital roles in nearly all cellular processes and are involved in the construction of biological pathways such as metabolic and signal transduction pathways. Although large-scale experiments have enabled the discovery of thousands of previously unknown linkages among proteins in many organisms, high-throughput interaction data are often associated with high error rates. Since protein interaction networks have been utilized in numerous biological inferences, the included experimental errors inevitably affect the quality of such predictions. Thus, it is essential to assess the quality of the protein interaction data. In this paper, a novel Bayesian network-based integrative framework is proposed to assess the reliability of protein-protein interactions. We develop a cross-species in silico model that assigns likelihood scores to individual protein pairs based on information entirely extracted from model organisms. Our proposed approach integrates multiple microarray datasets and novel features derived from gene ontology. Furthermore, the confidence scores for cross-species protein mappings are explicitly incorporated into our model. Applying our model to predict protein interactions in the human genome, we are able to achieve 80% sensitivity and 70% specificity. Finally, we assess the overall quality of the experimentally determined yeast protein-protein interaction dataset. We observe that the more high-throughput experiments confirm an interaction, the higher its likelihood score, which confirms the effectiveness of our approach. This study demonstrates that model organisms certainly provide important information for protein-protein interaction inference and assessment. The proposed method is able to assess not only the overall quality of an interaction dataset, but also the quality of individual protein-protein interactions. We expect the method to continually improve as more high-quality interaction data from more model organisms become available, and it is readily scalable to a genome-wide application.
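A common way to combine heterogeneous evidence into a per-pair likelihood score is a naive-Bayes-style sum of log-likelihood ratios; the toy sketch below uses that scheme with invented features and probabilities and is not the trained Bayesian network of this study.

```python
# Toy naive-Bayes-style likelihood scoring for a candidate protein pair:
# each evidence feature contributes a likelihood ratio P(f|interact)/P(f|random).
# Feature names and probabilities are illustrative, not the trained model of Lin et al.
import math

# assumed conditional probabilities for binary evidence features
LR = {
    "coexpressed":        (0.60, 0.10),   # (P(f=1 | interact), P(f=1 | random pair))
    "shared_go_term":     (0.70, 0.20),
    "ortholog_interacts": (0.40, 0.02),
}

def log_likelihood_score(evidence):
    """evidence: dict feature -> bool; returns summed log-likelihood ratio."""
    score = 0.0
    for feat, present in evidence.items():
        p1, p0 = LR[feat]
        score += math.log((p1 if present else 1 - p1) / (p0 if present else 1 - p0))
    return score

print(round(log_likelihood_score(
    {"coexpressed": True, "shared_go_term": True, "ortholog_interacts": False}), 2))
```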
qPortal: A platform for data-driven biomedical research.
Mohr, Christopher; Friedrich, Andreas; Wojnar, David; Kenar, Erhan; Polatkan, Aydin Can; Codrea, Marius Cosmin; Czemmel, Stefan; Kohlbacher, Oliver; Nahnsen, Sven
2018-01-01
Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce large amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management, and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place presents clear advantages over existing solutions.
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest-based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
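The core principle of bitplane-based incremental computation can be shown with a small sketch: because convolution is linear, processing bitplanes from most to least significant refines the result monotonically and becomes exact once all planes are consumed. This is only an illustration of that principle, not the authors' released designs.

```python
import numpy as np
from scipy.signal import convolve2d

def incremental_convolution(image_u8, kernel, bitplanes_to_use):
    """Approximate convolution of an 8-bit image using only the most
    significant `bitplanes_to_use` bitplanes; with all 8 planes the result
    is exact, and each additional plane refines the output monotonically."""
    result = np.zeros(image_u8.shape, dtype=float)
    for k in range(7, 7 - bitplanes_to_use, -1):          # MSB first
        plane = (image_u8 >> k) & 1                        # binary bitplane
        result += (2 ** k) * convolve2d(plane.astype(float), kernel, mode="same")
    return result

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
box = np.ones((3, 3)) / 9.0
exact = convolve2d(img.astype(float), box, mode="same")
for planes in (2, 4, 8):
    approx = incremental_convolution(img, box, planes)
    print(f"{planes} bitplanes: max abs error = {np.max(np.abs(exact - approx)):.2f}")
```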
Strategic and Operational Plan for Integrating Transcriptomics ...
Plans for incorporating high-throughput transcriptomics into the current high-throughput screening activities at NCCT; the details are in the attached slide presentation, given at the OECD meeting on June 23, 2016.
High-Throughput Experimental Approach Capabilities | Materials Science | NREL
[Garbled NREL web-page excerpt; the recoverable fragments mention high-throughput experimental sputtering capabilities for (...,Te) and oxysulfide materials and Combi-5 nitride and oxynitride sputtering; the remainder of the page text is not recoverable.]
Many commercial and environmental chemicals lack toxicity data necessary for users and risk assessors to make fully informed decisions about potential health effects. Generating these data using high throughput in vitro cell- or biochemical-based tests would be faster and less e...
Open release of the DCA++ project
NASA Astrophysics Data System (ADS)
Haehner, Urs; Solca, Raffaele; Staar, Peter; Alvarez, Gonzalo; Maier, Thomas; Summers, Michael; Schulthess, Thomas
We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse, but more importantly, they are necessary for correctness, credibility and reproducibility of scientific results. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing
NASA Astrophysics Data System (ADS)
Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline
2017-11-01
Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
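To make the problem setting concrete, the toy script below serially emulates two "processing elements" advancing the 1-D heat equation while occasionally using a one-step-stale halo value from the neighbour. It only illustrates the kind of perturbation that communication asynchrony introduces; the asynchrony-tolerant schemes studied in the paper modify the finite-difference stencils themselves so that such delays do not degrade accuracy.

```python
import numpy as np

# Toy serial emulation of halo-exchange asynchrony for u_t = alpha * u_xx on a
# periodic domain split across two blocks ("PEs"). With probability 0.3 a block
# uses its neighbour's edge value from the previous step (a stale halo).
def step(u, halo_left, halo_right, alpha, dt, dx):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    lap[0] = u[1] - 2 * u[0] + halo_left
    lap[-1] = halo_right - 2 * u[-1] + u[-2]
    return u + alpha * dt / dx**2 * lap

rng = np.random.default_rng(1)
nx, alpha, dx = 64, 1.0, 1.0 / 64
dt = 0.2 * dx**2 / alpha                       # stable explicit step
x = np.linspace(0.0, 1.0, nx, endpoint=False)
left, right = np.sin(2 * np.pi * x[:nx // 2]), np.sin(2 * np.pi * x[nx // 2:])
prev_edges = [left[0], left[-1], right[0], right[-1]]
for _ in range(200):
    stale = rng.random() < 0.3                 # halo exchange arrives late
    l_left = prev_edges[3] if stale else right[-1]   # periodic neighbours
    l_right = prev_edges[2] if stale else right[0]
    r_left = prev_edges[1] if stale else left[-1]
    r_right = prev_edges[0] if stale else left[0]
    prev_edges = [left[0], left[-1], right[0], right[-1]]
    left = step(left, l_left, l_right, alpha, dt, dx)
    right = step(right, r_left, r_right, alpha, dt, dx)
print("max |u| after 200 steps:",
      float(np.max(np.abs(np.concatenate([left, right])))))
```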
NASA Astrophysics Data System (ADS)
Xiao, Fei; Yang, Shengxiong; Zhang, Zheye; Liu, Hongfang; Xiao, Junwu; Wan, Lian; Luo, Jun; Wang, Shuai; Liu, Yunqi
2015-03-01
We reported a scalable and modular method to prepare a new type of sandwich-structured graphene-based nanohybrid paper and explore its practical application as a high-performance electrode in flexible supercapacitors. The freestanding and flexible graphene paper was first fabricated by a highly reproducible printing technique and a bubbling delamination method, by which the area and thickness of the graphene paper can be freely adjusted in a wide range. The as-prepared graphene paper combines high electrical conductivity (340 S cm-1), light weight (1 mg cm-2) and excellent mechanical properties. In order to improve its supercapacitive properties, we have prepared a unique sandwich-structured graphene/polyaniline/graphene paper by in situ electropolymerization of porous polyaniline nanomaterials on graphene paper, followed by wrapping an ultrathin graphene layer on its surface. This unique design strategy not only circumvents the low energy storage capacity resulting from the double-layer capacitor of graphene paper, but also enhances the rate performance and cycling stability of porous polyaniline. The as-obtained all-solid-state symmetric supercapacitor exhibits high energy density, high power density, excellent cycling stability and exceptional mechanical flexibility, demonstrative of its extensive potential applications for flexible energy-related devices and wearable electronics.
Xiao, Fei; Yang, Shengxiong; Zhang, Zheye; Liu, Hongfang; Xiao, Junwu; Wan, Lian; Luo, Jun; Wang, Shuai; Liu, Yunqi
2015-01-01
We reported a scalable and modular method to prepare a new type of sandwich-structured graphene-based nanohybrid paper and explore its practical application as a high-performance electrode in flexible supercapacitors. The freestanding and flexible graphene paper was first fabricated by a highly reproducible printing technique and a bubbling delamination method, by which the area and thickness of the graphene paper can be freely adjusted in a wide range. The as-prepared graphene paper combines high electrical conductivity (340 S cm−1), light weight (1 mg cm−2) and excellent mechanical properties. In order to improve its supercapacitive properties, we have prepared a unique sandwich-structured graphene/polyaniline/graphene paper by in situ electropolymerization of porous polyaniline nanomaterials on graphene paper, followed by wrapping an ultrathin graphene layer on its surface. This unique design strategy not only circumvents the low energy storage capacity resulting from the double-layer capacitor of graphene paper, but also enhances the rate performance and cycling stability of porous polyaniline. The as-obtained all-solid-state symmetric supercapacitor exhibits high energy density, high power density, excellent cycling stability and exceptional mechanical flexibility, demonstrative of its extensive potential applications for flexible energy-related devices and wearable electronics. PMID:25797022
Xiao, Fei; Yang, Shengxiong; Zhang, Zheye; Liu, Hongfang; Xiao, Junwu; Wan, Lian; Luo, Jun; Wang, Shuai; Liu, Yunqi
2015-03-23
We reported a scalable and modular method to prepare a new type of sandwich-structured graphene-based nanohybrid paper and explore its practical application as a high-performance electrode in flexible supercapacitors. The freestanding and flexible graphene paper was first fabricated by a highly reproducible printing technique and a bubbling delamination method, by which the area and thickness of the graphene paper can be freely adjusted in a wide range. The as-prepared graphene paper combines high electrical conductivity (340 S cm(-1)), light weight (1 mg cm(-2)) and excellent mechanical properties. In order to improve its supercapacitive properties, we have prepared a unique sandwich-structured graphene/polyaniline/graphene paper by in situ electropolymerization of porous polyaniline nanomaterials on graphene paper, followed by wrapping an ultrathin graphene layer on its surface. This unique design strategy not only circumvents the low energy storage capacity resulting from the double-layer capacitor of graphene paper, but also enhances the rate performance and cycling stability of porous polyaniline. The as-obtained all-solid-state symmetric supercapacitor exhibits high energy density, high power density, excellent cycling stability and exceptional mechanical flexibility, demonstrative of its extensive potential applications for flexible energy-related devices and wearable electronics.
The main challenges that remain in applying high-throughput sequencing to clinical diagnostics.
Loeffelholz, Michael; Fofanov, Yuriy
2015-01-01
Over the last 10 years, the quality, price and availability of high-throughput sequencing instruments have improved to the point that this technology may be close to becoming a routine tool in the diagnostic microbiology laboratory. Two groups of challenges, however, have to be resolved in order to move this powerful research technology into routine use in the clinical microbiology laboratory. The computational/bioinformatics challenges include data storage cost and privacy concerns, requiring analysis to be performed without access to cloud storage or expensive computational infrastructure. The logistical challenges include interpretation of complex results and acceptance and understanding of the advantages and limitations of this technology by the medical community. This article focuses on the approaches to address these challenges, such as file formats, algorithms, data collection, reporting and good laboratory practices.
Mapping specificity landscapes of RNA-protein interactions by high throughput sequencing.
Jankowsky, Eckhard; Harris, Michael E
2017-04-15
To function in a biological setting, RNA binding proteins (RBPs) have to discriminate between alternative binding sites in RNAs. This discrimination can occur in the ground state of an RNA-protein binding reaction, in its transition state, or in both. The extent to which RBPs discriminate at these reaction states defines RBP specificity landscapes. Here, we describe the HiTS-Kin and HiTS-EQ techniques, which combine kinetic and equilibrium binding experiments with high-throughput sequencing to quantitatively assess substrate discrimination for large numbers of substrate variants at ground and transition states of RNA-protein binding reactions. We discuss experimental design, practical considerations and data analysis and outline how a combination of HiTS-Kin and HiTS-EQ allows the mapping of RBP specificity landscapes. Copyright © 2017 Elsevier Inc. All rights reserved.
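As a heavily simplified illustration of equilibrium-style analysis of sequencing counts (in the spirit of, but not identical to, the HiTS-EQ fitting described above), the sketch below estimates relative association constants of substrate variants from read counts in an input library and a protein-bound library, under the approximation that the bound fraction of each variant is proportional to its K_A at sub-saturating protein. All read counts are invented.

```python
# Toy relative-affinity estimate from high-throughput sequencing read counts.
# Under sub-saturating protein,
#   K_A(variant) / K_A(reference) ~ (bound_i / input_i) / (bound_ref / input_ref).
input_reads = {"WT": 12000, "A5U": 11000, "G7C": 9800}   # invented counts
bound_reads = {"WT": 3000, "A5U": 1400, "G7C": 250}

def relative_affinities(input_counts, bound_counts, reference="WT"):
    ref_ratio = bound_counts[reference] / input_counts[reference]
    return {v: (bound_counts[v] / input_counts[v]) / ref_ratio
            for v in input_counts}

for variant, rel_ka in relative_affinities(input_reads, bound_reads).items():
    print(f"{variant}: K_A relative to WT = {rel_ka:.2f}")
```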
Field evaluation of a blood based test for active tuberculosis in endemic settings
Hussainy, Syed Fahadulla; Krishnan, Viwanathan V.; Ambreen, Atiqa; Yusuf, Noshin Wasim; Irum, Shagufta; Rashid, Abdul; Jamil, Muhammad; Zaffar, Fareed; Chaudhry, Muhammad Nawaz; Gupta, Puneet K.; Akhtar, Muhammad Waheed; Khan, Imran H.
2017-01-01
Over 9 million new active tuberculosis (TB) cases emerge each year from an enormous pool of 2 billion individuals latently infected with Mycobacterium tuberculosis (M. tb.) worldwide. About 3 million new TB cases per year are unaccounted for, and 1.5 million die. TB, however, is generally curable if diagnosed correctly and in a timely manner. The current diagnostic methods for TB, including state-of-the-art molecular tests, have failed in delivering the capacity needed in endemic countries to curtail this ongoing pandemic. Efficient, cost-effective and scalable diagnostic approaches are critically needed. We report a multiplex TB serology panel using a microbead suspension array containing a combination of 11 M.tb. antigens that demonstrated overall sensitivity of 91% in serum/plasma samples from TB patients confirmed by culture. Group-wise sensitivities for sputum smear-positive and smear-negative patients were 95% and 88%, respectively. Specificity of the test was 96% in untreated COPD patients and 91% in the general healthy population. The sensitivity of this test is superior to that of the frontline sputum smear test (30–70%), with comparable specificity (93–99%). The multiplex serology test can be performed with scalability from 1 to 360 patients per day, and is amenable to automation for higher (1000s per day) throughput, thus enabling a scalable clinical workflow model for TB endemic countries. Taken together, the above results suggest that well-defined antibody profiles in blood, analyzed by an appropriate technology platform, offer a valuable approach to TB diagnostics in endemic countries. PMID:28380055
Photoacoustic microscopy and computed tomography: from bench to bedside
Wang, Lihong V.; Gao, Liang
2014-01-01
Photoacoustic imaging (PAI) of biological tissue has seen immense growth in the past decade, providing unprecedented spatial resolution and functional information at depths in the optical diffusive regime. PAI uniquely combines the advantages of optical excitation and acoustic detection. The hybrid imaging modality features high sensitivity to optical absorption and wide scalability of spatial resolution with the desired imaging depth. Here we first summarize the fundamental principles underpinning the technology, then highlight its practical implementation, and finally discuss recent advances towards clinical translation. PMID:24905877
Cryogenic infrared filter made of alumina for use at millimeter wavelength.
Inoue, Yuki; Matsumura, Tomotake; Hazumi, Masashi; Lee, Adrian T; Okamura, Takahiro; Suzuki, Aritoki; Tomaru, Takayuki; Yamaguchi, Hiroshi
2014-03-20
We propose a high-thermal-conductivity infrared filter using alumina for millimeter-wave detection systems. We constructed a prototype two-layer antireflection-coated alumina filter with a diameter of 100 mm and a thickness of 2 mm and characterized its thermal and optical properties. The transmittance of this filter at 95 and 150 GHz is 97% and 95%, respectively, while the estimated 3 dB cut-off frequency is at 450 GHz. The high thermal conductivity of alumina minimizes thermal gradients. We measure a differential temperature of only 0.21 K between the center and the edge of the filter when it is mounted on a thermal anchor of 77 K. We also constructed a thermal model based on the prototype filter and analyzed the scalability of the filter diameter. We conclude that the temperature increase at the center of the alumina IR filter is less than 6 K, even with a large diameter of 500 mm, when the temperature at the edge of the filter is 50 K. This is suitable for an application to a large-throughput next-generation cosmic-microwave-background polarization experiment such as POLARBEAR-2.
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
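Since the abstract notes that NeuroFlow is configured from PyNN descriptions, a minimal PyNN network of the kind that could be compiled for it is sketched below. The NeuroFlow backend module name is not given in the abstract, so the sketch targets the generic pyNN.nest software backend; in principle only the import would change.

```python
# Minimal PyNN description: Poisson input driving Izhikevich neurons through
# plastic (STDP) synapses. Backend choice (pyNN.nest) is an assumption made
# only so the sketch is runnable on commodity software simulators.
import pyNN.nest as sim

sim.setup(timestep=1.0)

stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
excitatory = sim.Population(400, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))

stdp = sim.STDPMechanism(
    timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                        A_plus=0.01, A_minus=0.012),
    weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.1),
    weight=0.05, delay=1.0)

sim.Projection(stimulus, excitatory,
               sim.FixedProbabilityConnector(p_connect=0.1),
               synapse_type=stdp, receptor_type="excitatory")

excitatory.record("spikes")
sim.run(1000.0)                      # 1 s of biological time
spikes = excitatory.get_data("spikes")
sim.end()
```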
Stepanauskas, Ramunas; Fergusson, Elizabeth A; Brown, Joseph; Poulton, Nicole J; Tupper, Ben; Labonté, Jessica M; Becraft, Eric D; Brown, Julia M; Pachiadaki, Maria G; Povilaitis, Tadas; Thompson, Brian P; Mascena, Corianna J; Bellows, Wendy K; Lubys, Arvydas
2017-07-20
Microbial single-cell genomics can be used to provide insights into the metabolic potential, interactions, and evolution of uncultured microorganisms. Here we present WGA-X, a method based on multiple displacement amplification of DNA that utilizes a thermostable mutant of the phi29 polymerase. WGA-X enhances genome recovery from individual microbial cells and viral particles while maintaining ease of use and scalability. The greatest improvements are observed when amplifying high G+C content templates, such as those belonging to the predominant bacteria in agricultural soils. By integrating WGA-X with calibrated index-cell sorting and high-throughput genomic sequencing, we are able to analyze genomic sequences and cell sizes of hundreds of individual, uncultured bacteria, archaea, protists, and viral particles, obtained directly from marine and soil samples, in a single experiment. This approach may find diverse applications in microbiology and in biomedical and forensic studies of humans and other multicellular organisms. Single-cell genomics can be used to study uncultured microorganisms. Here, Stepanauskas et al. present a method combining improved multiple displacement amplification and FACS, to obtain genomic sequences and cell size information from uncultivated microbial cells and viral particles in environmental samples.
Cheung, Kit; Schultz, Simon R.; Luk, Wayne
2016-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542
Oono, Ryoko
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
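The kind of confidence-interval analysis described here can be sketched with a small bootstrap: rarefy two (here simulated) communities to a chosen sequencing depth, compute a Bray-Curtis dissimilarity, and repeat to obtain an interval. The abundance distributions, depths, and resampling scheme are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

# Bootstrap confidence interval for Bray-Curtis dissimilarity between two
# communities rarefied to a fixed sequencing depth. Counts are simulated here;
# with real data they would come from an OTU/ASV table.
rng = np.random.default_rng(42)

def bray_curtis(x, y):
    return 1.0 - 2.0 * np.minimum(x, y).sum() / (x.sum() + y.sum())

def subsample(counts, depth, rng):
    """Rarefy a count vector to a fixed sequencing depth without replacement."""
    reads = np.repeat(np.arange(counts.size), counts)
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

# Two hypothetical communities with lognormal-like species-abundance patterns
true_a = rng.poisson(rng.lognormal(3.0, 1.0, 500))
true_b = rng.poisson(rng.lognormal(3.0, 1.0, 500))

for depth in (1000, 10000):
    boot = [bray_curtis(subsample(true_a, depth, rng),
                        subsample(true_b, depth, rng))
            for _ in range(200)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"depth {depth}: Bray-Curtis 95% CI = ({lo:.3f}, {hi:.3f})")
```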
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh those of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions ‘how and why are communities different?’ This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences. PMID:29253889
GPU Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.
2014-01-01
Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.
Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S
2015-07-21
The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.
Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H
2014-05-28
Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N^3) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well-documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores for problem sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.
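For readers unfamiliar with the problem ELPA distributes over MPI ranks, a single-node version of the same dense symmetric eigenproblem can be run with scipy.linalg.eigh, which wraps the serial LAPACK route (tridiagonalization plus back-transformation) with the same O(N^3) structure; this is only a point of reference, not ELPA itself.

```python
import numpy as np
from scipy.linalg import eigh   # subset_by_index requires SciPy >= 1.5

# Dense symmetric eigenproblem on one node, for comparison with the
# distributed ELPA/ScaLAPACK setting described in the abstract.
n = 2000
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
h = (a + a.T) / 2.0                      # symmetric "Hamiltonian-like" matrix

w, v = eigh(h)                           # full spectrum, O(n^3)
w_low, v_low = eigh(h, subset_by_index=[0, 99])   # only the lowest 100 pairs

print(w[:3], w_low[:3])                  # identical lowest eigenvalues
assert np.allclose(w[:100], w_low)
```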
Chapter 1: Biomedical knowledge integration.
Payne, Philip R O
2012-01-01
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems.
Cosmological Particle Data Compression in Practice
NASA Astrophysics Data System (ADS)
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
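The trade-off the study measures (compression rate versus throughput) can be reproduced at toy scale with two lossless codecs from the Python standard library; the particle array, codec choices, and settings below are illustrative stand-ins for the Blosc/LZMA/FPZIP/ZFP benchmarks performed in the paper.

```python
import lzma
import time
import zlib
import numpy as np

# Compression ratio and throughput of two stdlib lossless codecs on synthetic
# float32 "particle" positions. Note that random data is nearly incompressible;
# real simulation output typically compresses much better.
rng = np.random.default_rng(7)
particles = rng.random((250_000, 3), dtype=np.float32)   # x, y, z positions
raw = particles.tobytes()

for name, compress in (("zlib", lambda b: zlib.compress(b, level=6)),
                       ("lzma", lambda b: lzma.compress(b, preset=6))):
    t0 = time.perf_counter()
    packed = compress(raw)
    dt = time.perf_counter() - t0
    ratio = len(raw) / len(packed)
    throughput = len(raw) / dt / 2**20
    print(f"{name}: ratio {ratio:.2f}x, throughput {throughput:.1f} MiB/s")
```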
Chapter 1: Biomedical Knowledge Integration
Payne, Philip R. O.
2012-01-01
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems. PMID:23300416
Yu, Hong; Gao, Feng; Jiang, Liren; Ma, Shuoxin
2017-09-15
The aim was to develop scalable Whole Slide Imaging (sWSI), a WSI system based on mainstream smartphones coupled with regular optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, support both oil and dry objective lenses of different magnifications, and provide reasonably high throughput. These performance metrics should be evaluated by expert pathologists and match those of high-end scanners. In the sWSI design, the digitization process is split asynchronously between light-weight clients on smartphones and powerful cloud servers. The client apps automatically capture fields of view (FoVs) at up to 12-megapixel resolution and process them in real time to track the operation of users, then give instant guidance as feedback. The servers first restitch each pair of FoVs, then automatically correct the unknown nonlinear distortion introduced by the lens of the smartphone on the fly, based on pair-wise stitching, before finally combining all FoVs into one gigapixel virtual slide (VS) for each scan. These VSs can be viewed using Internet browsers anywhere. In the evaluation experiment, 100 frozen section slides from patients randomly selected among in-patients of the participating hospital were scanned by both a high-end Leica scanner and sWSI. All VSs were examined by senior pathologists, whose diagnoses were compared against those made using optical microscopy as ground truth to evaluate the image quality. The sWSI system is developed for both Android and iPhone smartphones and is currently being offered to the public. The image quality is reliable and throughput is approximately 1 FoV per second, yielding a 15-by-15 mm slide under a 20X objective lens in approximately 30-35 minutes, with little training required for the operator. The expected cost for setup is approximately US $100 and scanning each slide costs between US $1 and $10, making sWSI highly cost-effective for infrequent or low-throughput usage. In the clinical evaluation of sample-wise diagnostic reliability, average accuracy scores achieved by sWSI-scan-based diagnoses were as follows: 0.78 for breast, 0.88 for uterine corpus, 0.68 for thyroid, and 0.50 for lung samples. The respective low-sensitivity rates were 0.05, 0.05, 0.13, and 0.25, while the respective low-specificity rates were 0.18, 0.08, 0.20, and 0.25. The participating pathologists agreed that the overall quality of sWSI was generally on par with that produced by high-end scanners and did not affect diagnosis in most cases. Pathologists confirmed that sWSI is reliable enough for standard diagnoses of most tissue categories, while it can be used for quick screening of difficult cases. As an ultra-low-cost alternative to whole slide scanners, diagnosis-ready VS quality and robustness for commercial usage are achieved in the sWSI solution. Operated on mainstream smartphones installed on normal optical microscopes, sWSI readily offers affordable and reliable WSI to resource-limited or infrequent clinical users. ©Hong Yu, Feng Gao, Liren Jiang, Shuoxin Ma. 
Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 15.09.2017.
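The server-side FoV restitching step has a generic computer-vision analogue that can be sketched with OpenCV: match features between two overlapping captures and warp one into the other's frame with a RANSAC homography. This is only an illustration of pairwise stitching; sWSI's production pipeline (including its lens-distortion self-calibration) is more involved, and the file names are placeholders.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Warp FoV B into FoV A's frame using ORB features and a RANSAC homography."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))  # map FoV B into A's frame
    canvas[:h, :w] = img_a                              # overlay FoV A
    return canvas

fov_a = cv2.imread("fov_0001.jpg")   # hypothetical smartphone FoV captures
fov_b = cv2.imread("fov_0002.jpg")
cv2.imwrite("stitched_pair.jpg", stitch_pair(fov_a, fov_b))
```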
ERIC Educational Resources Information Center
Hoogenboom, Richard; Meier, Michael A. R.; Schubert, Ulrich S.
2005-01-01
A laboratory project permits for the discussion of the reaction mechanism of the Suzuki-Miyaura coupling reaction. The practical part of the project makes the students familiar with working under inert atmosphere but if the appropriate equipment for working under inert atmosphere is not available in a laboratory, novel catalysts that do not…
Tan, Xin; Tahini, Hassan A; Smith, Sean C
2016-12-07
Electrocatalytic, switchable hydrogen storage promises both tunable kinetics and facile reversibility without the need for specific catalysts. The feasibility of this approach relies on having materials that are easy to synthesize and possess good electrical conductivity. Graphitic carbon nitride (g-C4N3) has been predicted to display charge-responsive binding with molecular hydrogen, the only such conductive sorbent material discovered to date. As yet, however, this conductive variant of graphitic carbon nitride is not readily synthesized by scalable methods. Here, we examine the possibility of conductive and easily synthesized boron-doped graphene nanosheets (B-doped graphene) as sorbent materials for practical applications of electrocatalytically switchable hydrogen storage. Using first-principles calculations, we find that the adsorption energy of H2 molecules on B-doped graphene can be dramatically enhanced by removing electrons from, and thereby positively charging, the adsorbent. Thus, by controlling the charge injected into or depleted from the adsorbent, one can effectively tune the storage/release processes, which occur spontaneously without any energy barriers. At full hydrogen coverage, positively charged BC5 achieves high storage capacities of up to 5.3 wt %. Importantly, B-doped graphenes such as BC49, BC7, and BC5 have good electrical conductivity and can be easily synthesized by scalable methods, which positions this class of materials as a very good candidate for charge injection/release. These predictions pave the route for practical implementation of electrocatalytic systems with switchable storage/release capacities that offer high capacity for hydrogen storage.
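Two back-of-the-envelope quantities used in this kind of first-principles study are the adsorption energy per H2 (from DFT total energies) and the gravimetric storage capacity in wt %. The sketch below computes both; the total energies and the assumed coverage of 2 H2 per BC5 formula unit are placeholders for illustration, not values taken from the paper.

```python
# Adsorption energy per H2 from (placeholder) DFT total energies and the
# gravimetric H2 storage capacity of a BC5 host at an assumed coverage.
def adsorption_energy(e_host_plus_nh2, e_host, e_h2, n_h2):
    """E_ads per H2 (eV): (E[host + n*H2] - E[host] - n*E[H2]) / n."""
    return (e_host_plus_nh2 - e_host - n_h2 * e_h2) / n_h2

def gravimetric_capacity_wt_percent(n_h2, host_formula_mass):
    """H2 storage capacity in wt% for n_h2 molecules per host formula unit."""
    m_h2 = 2.016
    return 100.0 * n_h2 * m_h2 / (host_formula_mass + n_h2 * m_h2)

bc5_mass = 10.81 + 5 * 12.011            # one B plus five C (g/mol)
print(f"BC5 with 2 H2 per formula unit: "
      f"{gravimetric_capacity_wt_percent(2, bc5_mass):.1f} wt%")
print(f"E_ads = {adsorption_energy(-624.09, -610.05, -6.77, 2):+.2f} eV per H2")
```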
Genetic Rescue of Functional Senescence in Synaptic and Behavioral Plasticity
Donlea, Jeffrey M.; Ramanan, Narendrakumar; Silverman, Neal; Shaw, Paul J.
2014-01-01
Study Objectives: Aging has been linked with decreased neural plasticity and memory formation in humans and in laboratory model species such as the fruit fly, Drosophila melanogaster. Here, we examine plastic responses following social experience in Drosophila as a high-throughput method to identify interventions that prevent these impairments. Patients or Participants: Wild-type and transgenic Drosophila melanogaster. Design and Interventions: Young (5-day old) or aged (20-day old) adult female Drosophila were housed in socially enriched (n = 35-40) or isolated environments, then assayed for changes in sleep and for structural markers of synaptic terminal growth in the ventral lateral neurons (LNVs) of the circadian clock. Measurements and Results: When young flies are housed in a socially enriched environment, they exhibit synaptic elaboration within a component of the circadian circuitry, the LNVs, which is followed by increased sleep. Aged flies, however, no longer exhibit either of these plastic changes. Because of the tight correlation between neural plasticity and ensuing increases in sleep, we use sleep after enrichment as a high-throughput marker for neural plasticity to identify interventions that prolong youthful plasticity in aged flies. To validate this strategy, we find three independent genetic manipulations that delay age-related losses in plasticity: (1) elevation of dopaminergic signaling, (2) over-expression of the transcription factor blistered (bs) in the LNVs, and (3) reduction of the Imd immune signaling pathway. These findings provide proof-of-principle evidence that measuring changes in sleep in flies after social enrichment may provide a highly scalable assay for the study of age-related deficits in synaptic plasticity. Conclusions: These studies demonstrate that Drosophila provides a promising model for the study of age-related loss of neural plasticity and begin to identify genes that might be manipulated to delay the onset of functional senescence. Citation: Donlea JM, Ramanan N, Silverman N, Shaw PJ. Genetic rescue of functional senescence in synaptic and behavioral plasticity. SLEEP 2014;37(9):1427-1437. PMID:25142573
Use of nanoscale mechanical stimulation for control and manipulation of cell behaviour.
Childs, Peter G; Boyle, Christina A; Pemberton, Gabriel D; Nikukar, Habib; Curtis, Adam S G; Henriquez, Fiona L; Dalby, Matthew J; Reid, Stuart
2016-04-01
The ability to control cell behaviour and cell fate and to simulate reliable tissue models in vitro remains a significant challenge, yet it is crucial for various applications of high-throughput screening, e.g. drug discovery. Mechanotransduction (the ability of cells to convert mechanical forces in their environment into biochemical signalling) represents an alternative route to attaining this control, with such studies developing techniques to reproducibly control the mechanical environment in ways that have the potential to be scaled. In this review, the use of techniques such as finite element modelling and precision interferometric measurement is examined to provide context for a novel technique based on nanoscale vibration, also known as "nanokicking". Studies have shown this stimulus to alter cellular responses in both endothelial cells and mesenchymal stem cells (MSCs), particularly increased proliferation rate and induced osteogenesis, respectively. Endothelial cell lines were exposed to nanoscale vibration amplitudes across a frequency range of 1-100 Hz, and MSCs primarily at 1 kHz. This technique provides significant potential benefits over existing technologies, as cellular responses can be initiated without the use of expensive engineering techniques and/or chemical induction factors. Due to the reproducible and scalable nature of the apparatus, it is conceivable that nanokicking could be used for controlling cell behaviour within a wide array of high-throughput procedures in the research environment, within drug discovery, and for clinical/therapeutic applications. The results discussed within this article summarise the potential benefits of using nanoscale vibration protocols for controlling cell behaviour. There is a significant need for reliable tissue models within the clinical and pharma industries, and the control of cell behaviour and stem cell differentiation would be highly beneficial. The full potential of this method of controlling cell behaviour has not yet been realised. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Wilkinson, Samuel L.; John, Shibu; Walsh, Roddy; Novotny, Tomas; Valaskova, Iveta; Gupta, Manu; Game, Laurence; Barton, Paul J R.; Cook, Stuart A.; Ware, James S.
2013-01-01
Background Molecular genetic testing is recommended for diagnosis of inherited cardiac disease, to guide prognosis and treatment, but access is often limited by cost and availability. Recently introduced high-throughput bench-top DNA sequencing platforms have the potential to overcome these limitations. Methodology/Principal Findings We evaluated two next-generation sequencing (NGS) platforms for molecular diagnostics. The protein-coding regions of six genes associated with inherited arrhythmia syndromes were amplified from 15 human samples using parallelised multiplex PCR (Access Array, Fluidigm), and sequenced on the MiSeq (Illumina) and Ion Torrent PGM (Life Technologies). Overall, 97.9% of the target was sequenced adequately for variant calling on the MiSeq, and 96.8% on the Ion Torrent PGM. Regions missed tended to be of high GC-content, and most were problematic for both platforms. Variant calling was assessed using 107 variants detected using Sanger sequencing: within adequately sequenced regions, variant calling on both platforms was highly accurate (Sensitivity: MiSeq 100%, PGM 99.1%. Positive predictive value: MiSeq 95.9%, PGM 95.5%). At the time of the study the Ion Torrent PGM had a lower capital cost and individual runs were cheaper and faster. The MiSeq had a higher capacity (requiring fewer runs), with reduced hands-on time and simpler laboratory workflows. Both provide significant cost and time savings over conventional methods, even allowing for adjunct Sanger sequencing to validate findings and sequence exons missed by NGS. Conclusions/Significance MiSeq and Ion Torrent PGM both provide accurate variant detection as part of a PCR-based molecular diagnostic workflow, and provide alternative platforms for molecular diagnosis of inherited cardiac conditions. Though there were performance differences at this throughput, platforms differed primarily in terms of cost, scalability, protocol stability and ease of use. Compared with current molecular genetic diagnostic tests for inherited cardiac arrhythmias, these NGS approaches are faster, less expensive, and yet more comprehensive. PMID:23861798
Gobalasingham, Nemal S; Carlé, Jon E; Krebs, Frederik C; Thompson, Barry C; Bundgaard, Eva; Helgesen, Martin
2017-11-01
Continuous flow methods are utilized in conjunction with direct arylation polymerization (DArP) for the scaled synthesis of the roll-to-roll compatible polymer, poly[(2,5-bis(2-hexyldecyloxy)phenylene)-alt-(4,7-di(thiophen-2-yl)-benzo[c][1,2,5]thiadiazole)] (PPDTBT). PPDTBT is based on simple, inexpensive, and scalable monomers using thienyl-flanked benzothiadiazole as the acceptor, which is the first β-unprotected substrate to be used in continuous flow via DArP, enabling critical evaluation of the suitability of this emerging synthetic method for minimizing defects and for the scaled synthesis of high-performance materials. To demonstrate the usefulness of the method, DArP-prepared PPDTBT via continuous flow synthesis is employed for the preparation of indium tin oxide (ITO)-free and flexible roll-coated solar cells to achieve a power conversion efficiency of 3.5% for 1 cm 2 devices, which is comparable to the performance of PPDTBT polymerized through Stille cross coupling. These efforts demonstrate the distinct advantages of the continuous flow protocol with DArP avoiding use of toxic tin chemicals, reducing the associated costs of polymer upscaling, and minimizing batch-to-batch variations for high-quality material. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Shinde, A.; Li, G.; Zhou, L.; ...
2016-09-09
Solar fuel generators entail a high degree of materials integration, and efficient photoelectrocatalysis of the constituent reactions hinges upon the establishment of highly functional interfaces. Our recent application of high-throughput experimentation to interface discovery for solar fuels photoanodes has revealed several surprising and promising mixed-metal oxide coatings for BiVO4. Furthermore, when using sputter deposition of composition and thickness gradients on a uniform BiVO4 film, we systematically explore photoanodic performance as a function of the composition and loading of Fe–Ce oxide coatings. This combinatorial materials integration study not only enhances the performance of this new class of materials but also identifies CeO2 as a critical ingredient that merits detailed study. A heteroepitaxial CeO2(001)/BiVO4(010) interface is identified in which Bi and V remain fully coordinated to O such that no surface states are formed. Ab initio calculations of the integrated materials and inspection of the electronic structure reveal mechanisms by which CeO2 facilitates charge transport while mitigating deleterious recombination. Our results support the observations that addition of Ce to BiVO4 coatings greatly enhances photoelectrocatalytic activity, providing an important strategy for developing a scalable solar fuels technology.
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Abstract. Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
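The parallelization pattern described in this work (splitting a batch of A-scans into chunks, one per GPU) can be sketched with CuPy as a stand-in for the authors' custom CUDA framework; the per-A-scan pipeline is reduced here to an FFT plus magnitude, the array sizes are shrunk for illustration, and the chunks are dispatched sequentially rather than concurrently.

```python
import numpy as np
import cupy as cp   # assumes CuPy and at least `n_gpus` CUDA devices

def process_batch(ascans, n_gpus):
    """Split A-scans into chunks, assign one chunk per GPU, process, and merge."""
    chunks = np.array_split(ascans, n_gpus)
    results = []
    for gpu_id, chunk in enumerate(chunks):
        with cp.cuda.Device(gpu_id):
            spectral = cp.asarray(chunk)                 # host -> device copy
            depth_profile = cp.abs(cp.fft.fft(spectral, axis=1))
            results.append(cp.asnumpy(depth_profile))    # device -> host copy
    return np.concatenate(results, axis=0)

# 10,000 A-scans with 2048 spectral samples each (reduced, illustrative sizes)
ascans = np.random.rand(10_000, 2048).astype(np.float32)
bscan_stack = process_batch(ascans, n_gpus=2)
print(bscan_stack.shape)
```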
A field ornithologist’s guide to genomics: Practical considerations for ecology and conservation
Oyler-McCance, Sara J.; Oh, Kevin; Langin, Kathryn; Aldridge, Cameron L.
2016-01-01
Vast improvements in sequencing technology have made it practical to simultaneously sequence millions of nucleotides distributed across the genome, opening the door for genomic studies in virtually any species. Ornithological research stands to benefit in three substantial ways. First, genomic methods enhance our ability to parse and simultaneously analyze both neutral and non-neutral genomic regions, thus providing insight into adaptive evolution and divergence. Second, the sheer quantity of sequence data generated by current sequencing platforms allows increased precision and resolution in analyses. Third, high-throughput sequencing can benefit applications that focus on a small number of loci that are otherwise prohibitively expensive, time-consuming, and technically difficult using traditional sequencing methods. These advances have improved our ability to understand evolutionary processes like speciation and local adaptation, but they also offer many practical applications in the fields of population ecology, migration tracking, conservation planning, diet analyses, and disease ecology. This review provides a guide for field ornithologists interested in incorporating genomic approaches into their research program, with an emphasis on techniques related to ecology and conservation. We present a general overview of contemporary genomic approaches and methods, as well as important considerations when selecting a genomic technique. We also discuss research questions that are likely to benefit from utilizing high-throughput sequencing instruments, highlighting select examples from recent avian studies.
Optofluidic time-stretch quantitative phase microscopy.
Guo, Baoshan; Lei, Cheng; Wu, Yi; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Lee, Sangwook; Isozaki, Akihiro; Li, Ming; Jiang, Yiyue; Yasumoto, Atsushi; Di Carlo, Dino; Tanaka, Yo; Yatomi, Yutaka; Ozeki, Yasuyuki; Goda, Keisuke
2018-03-01
Innovations in optical microscopy have opened new windows onto scientific research, industrial quality control, and medical practice over the last few decades. One such innovation is optofluidic time-stretch quantitative phase microscopy - an emerging method for high-throughput quantitative phase imaging that builds on the interference between temporally stretched signal and reference pulses, exploiting the dispersive properties of light in both spatial and temporal domains in an interferometric configuration on a microfluidic platform. It achieves the continuous acquisition of both intensity and phase images with a high throughput of more than 10,000 particles or cells per second by overcoming speed limitations that exist in conventional quantitative phase imaging methods. Applications enabled by such capabilities are versatile and include characterization of cancer cells and microalgal cultures. In this paper, we review the principles and applications of optofluidic time-stretch quantitative phase microscopy and discuss its future prospects. Copyright © 2017 Elsevier Inc. All rights reserved.
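Since the method rests on reading phase out of temporally stretched interference fringes, a generic fringe-analysis sketch may help fix the idea. This is not the instrument's actual processing chain: the carrier frequency, the Gaussian "cell" phase bump, and the noise level below are all synthetic assumptions, and the Hilbert-transform demodulation is just one standard way to recover a slowly varying phase from a fringe record.

```python
# Recover a slowly varying phase from a synthetic temporally stretched
# interferogram via the analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

fs, f_carrier, n = 50e9, 2e9, 4096                  # sample rate, fringe carrier, samples
t = np.arange(n) / fs
true_phase = 1.5 * np.exp(-((t - t.mean()) / (n / 8 / fs)) ** 2)   # fake sample phase
fringes = 1.0 + 0.8 * np.cos(2 * np.pi * f_carrier * t + true_phase)
fringes += 0.01 * np.random.randn(n)                # detector noise

analytic = hilbert(fringes - fringes.mean())        # complex analytic signal
raw_phase = np.unwrap(np.angle(analytic))
recovered = raw_phase - 2 * np.pi * f_carrier * t   # remove the linear carrier
recovered -= recovered[:100].mean()                 # reference to the background
print(float(np.max(recovered)), float(np.max(true_phase)))   # should be close
```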
A Practical, Hardware Friendly MMSE Detector for MIMO-OFDM-Based Systems
NASA Astrophysics Data System (ADS)
Kim, Hun Seok; Zhu, Weijun; Bhatia, Jatin; Mohammed, Karim; Shah, Anish; Daneshrad, Babak
2008-12-01
Design and implementation of a highly optimized MIMO (multiple-input multiple-output) detector requires co-optimization of the algorithm with the underlying hardware architecture. Special attention must be paid to application requirements such as throughput, latency, and resource constraints. In this work, we focus on a highly optimized, matrix-inversion-free MMSE (minimum mean square error) MIMO detector implementation. The work has resulted in a real-time field-programmable gate array (FPGA)-based implementation on a Xilinx Virtex-2 6000 using only 9003 logic slices, 66 multipliers, and 24 Block RAMs (less than 33% of the overall resources of this part). The design delivers over 420 Mbps sustained throughput with a small 2.77-microsecond latency. The designed linear MMSE MIMO detector is capable of complying with the proposed IEEE 802.11n standard.
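For reference, the linear MMSE detector computes x_hat = (H^H H + sigma^2 I)^(-1) H^H y. The sketch below shows that baseline estimate in floating point; it is not the paper's hardware architecture, whose point is to avoid the explicit inversion in a fixed-point, FPGA-friendly way. The 4x4 antenna configuration, QPSK constellation, and SNR are illustrative assumptions.

```python
# Baseline linear MMSE MIMO detection: x_hat = (H^H H + sigma^2 I)^(-1) H^H y.
# np.linalg.solve avoids forming the explicit inverse, but this is still a
# floating-point reference, not the inversion-free hardware formulation.
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4                                   # illustrative 4x4 configuration
snr_db = 20.0
sigma2 = 10 ** (-snr_db / 10)

qpsk = (np.array([1, -1])[rng.integers(0, 2, nt)]
        + 1j * np.array([1, -1])[rng.integers(0, 2, nt)]) / np.sqrt(2)
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ qpsk + noise

A = H.conj().T @ H + sigma2 * np.eye(nt)      # regularized Gram matrix
x_hat = np.linalg.solve(A, H.conj().T @ y)    # MMSE estimate
detected = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print(np.allclose(detected, qpsk))            # True at this SNR in most trials
```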
A Microfluidic Approach for Studying Piezo Channels.
Maneshi, M M; Gottlieb, P A; Hua, S Z
2017-01-01
Microfluidics is an interdisciplinary field intersecting many areas of engineering; a major goal is to combine physics, chemistry, biology, and biotechnology to design practical devices that use low volumes of fluid to achieve high-throughput screening. Microfluidic approaches allow the study of cell growth and differentiation under a variety of conditions, including control of the fluid flow that generates shear stress. Recently, Piezo1 channels were shown to respond to fluid shear stress and are crucial for vascular development. This channel is ideal for studying fluid shear stress applied to cells using microfluidic devices. We have developed an approach that allows us to analyze the role of Piezo channels in any given cell and serves as a high-throughput screen for drug discovery. We show that this approach can provide detailed information about inhibitors of Piezo channels. Copyright © 2017 Elsevier Inc. All rights reserved.
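Shear-stress experiments of this kind hinge on knowing the wall shear stress delivered by a given flow rate. For a wide, shallow rectangular channel the parallel-plate approximation tau = 6*mu*Q/(w*h^2) is commonly used; the channel dimensions and flow rate below are illustrative placeholders, not values from this paper.

```python
# Wall shear stress in a wide, shallow rectangular microchannel
# (parallel-plate approximation): tau = 6 * mu * Q / (w * h^2).
def wall_shear_stress(flow_rate_m3_s, viscosity_pa_s, width_m, height_m):
    return 6.0 * viscosity_pa_s * flow_rate_m3_s / (width_m * height_m ** 2)

mu = 1.0e-3                    # water-like viscosity, Pa*s
Q = 10e-6 / 60 / 1000          # 10 uL/min converted to m^3/s
w, h = 500e-6, 50e-6           # illustrative 500 um x 50 um channel
tau = wall_shear_stress(Q, mu, w, h)
print(f"wall shear stress ~ {tau:.2f} Pa ({tau * 10:.1f} dyn/cm^2)")
```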
Shabbir, Shagufta H.; Regan, Clinton J.; Anslyn, Eric V.
2009-01-01
A general approach to high-throughput screening of enantiomeric excess (ee) and concentration was developed by using indicator displacement assays (IDAs), and the protocol was then applied to the vicinal diol hydrobenzoin. The method involves the sequential utilization of what we define herein as screening, training, and analysis plates. Several enantioselective boronic acid-based receptors were screened by using 96-well plates, both for their ability to discriminate the enantiomers of hydrobenzoin and to find their optimal pairing with indicators resulting in the largest optical responses. The best receptor/indicator combination was then used to train an artificial neural network to determine concentration and ee. To prove the practicality of the developed protocol, analysis plates were created containing true unknown samples of hydrobenzoin generated by established Sharpless asymmetric dihydroxylation reactions, and the best ligand was correctly identified. PMID:19332790
Shabbir, Shagufta H; Regan, Clinton J; Anslyn, Eric V
2009-06-30
A general approach to high-throughput screening of enantiomeric excess (ee) and concentration was developed by using indicator displacement assays (IDAs), and the protocol was then applied to the vicinal diol hydrobenzoin. The method involves the sequential utilization of what we define herein as screening, training, and analysis plates. Several enantioselective boronic acid-based receptors were screened by using 96-well plates, both for their ability to discriminate the enantiomers of hydrobenzoin and to find their optimal pairing with indicators resulting in the largest optical responses. The best receptor/indicator combination was then used to train an artificial neural network to determine concentration and ee. To prove the practicality of the developed protocol, analysis plates were created containing true unknown samples of hydrobenzoin generated by established Sharpless asymmetric dihydroxylation reactions, and the best ligand was correctly identified.
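The analysis step above feeds optical responses from the plates into an artificial neural network that reports concentration and ee. The toy sketch below illustrates only that regression step on a synthetic response surface: the simulated "absorbance" channels, noise level, and network size are assumptions, not the published calibration or data.

```python
# Toy version of the analysis step: train a small neural network to map
# simulated indicator-displacement optical responses back to (ee, concentration).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 600
ee = rng.uniform(-1, 1, n)            # enantiomeric excess in [-1, 1]
conc = rng.uniform(0.5, 5.0, n)       # total diol concentration, arbitrary mM

# Fake response surface: each "wavelength" responds nonlinearly to (ee, conc).
X = np.column_stack([
    conc * (1 + 0.30 * ee),
    conc * (1 - 0.25 * ee),
    np.log1p(conc) * (1 + 0.10 * ee ** 2),
]) + 0.02 * rng.standard_normal((n, 3))
Y = np.column_stack([ee, conc])

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(X[:500], Y[:500])                       # "training plates"
print("R^2 on held-out plates:", round(model.score(X[500:], Y[500:]), 3))
```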
Transfection microarray and the applications.
Miyake, Masato; Yoshikawa, Tomohiro; Fujita, Satoshi; Miyake, Jun
2009-05-01
Microarray transfection has been extensively studied for high-throughput functional analysis of mammalian cells. However, control of efficiency and reproducibility are critical issues for practical use. By using solid-phase transfection accelerators and a nano-scaffold, we provide a highly efficient and reproducible microarray-transfection device, the "transfection microarray". The device can be applied to the limited numbers of available primary cells and stem cells, not only for large-scale functional analysis but also for reporter-based time-lapse cellular event analysis.
High duty cycle inverse Compton scattering X-ray source
Ovodenko, A.; Agustsson, R.; Babzien, M.; ...
2016-12-22
Inverse Compton Scattering (ICS) is an emerging compact X-ray source technology, where the small source size and high spectral brightness are of interest for a multitude of applications. However, to satisfy practical flux requirements, a high-repetition-rate ICS system needs to be developed. To this end, this article reports the experimental demonstration of a high-peak-brightness ICS source operating in a burst mode at 40 MHz. A pulse-train interaction has been achieved by recirculating a picosecond CO2 laser pulse inside an active optical cavity synchronized to the electron beam. The pulse-train ICS performance has been characterized at 5 and 15 pulses per train and compared to single-pulse operation under the same operating conditions. Lastly, with the observed near-linear X-ray photon yield gain due to recirculation, as well as noticeably higher operational reliability, the burst-mode ICS offers great potential for practical scalability towards high duty cycles.
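Two numbers frame sources like this: the on-axis scattered photon energy, which in the Thomson limit for head-on collisions is roughly E_x = 4*gamma^2*E_laser, and the near-linear growth of photon yield with the number of recirculated pulses per train. The electron energy and per-pulse yield in the sketch below are illustrative placeholders, not values reported by the experiment.

```python
# Back-of-envelope ICS numbers (Thomson limit, head-on, on-axis):
# E_x ~ 4 * gamma^2 * E_laser, and yield scaling ~linear in pulses per train.
E_LASER_EV = 1.24 / 10.6          # CO2 laser photon energy (10.6 um) in eV
ELECTRON_ENERGY_MEV = 65.0        # illustrative beam energy, not from the paper
GAMMA = ELECTRON_ENERGY_MEV / 0.511

e_x_kev = 4 * GAMMA ** 2 * E_LASER_EV / 1e3
print(f"on-axis X-ray energy ~ {e_x_kev:.1f} keV")

photons_per_pulse = 1e7           # illustrative single-pulse yield
for pulses in (1, 5, 15):
    # assumes similar electron-laser overlap on every recirculation pass
    print(pulses, "pulses/train ->", f"{pulses * photons_per_pulse:.1e} photons/train")
```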
NASA Astrophysics Data System (ADS)
Cai, Hongyan; Han, Kai; Jiang, Heng; Wang, Jingwen; Liu, Hui
2017-10-01
Silicon/carbon (Si/C) composites show great potential to replace graphite as the lithium-ion battery (LIB) anode owing to their high theoretical capacity. Exploring low-cost, scalable approaches for synthesizing Si/C composites with excellent electrochemical performance is critical for the practical application of Si/C anodes. In this study, we rationally applied a scalable in situ approach to produce a Si-carbon nanotube (Si-CNT) composite via acid etching of a mixture of commercial, inexpensive micro-sized Al-Si alloy powder and CNTs. In the Si-CNT composite, ∼10 nm Si particles were uniformly deposited on the CNT surface. After combining with graphene sheets, a flexible self-standing Si-CNT/graphene paper was fabricated with a three-dimensional (3D) sandwich-like structure. The in situ presence of CNTs during the acid-etching process offers two remarkable advantages: it provides deposition sites for Si atoms, restraining agglomeration of Si nanoparticles after Al removal from the Al-Si alloy powder, and it increases the cross-layer conductivity of the paper anode, providing excellent conductive contact for each Si nanoparticle. When used as a binder-free anode for LIBs without any further treatment, the in situ addition of CNTs plays an especially important role in improving the initial electrochemical activity of the Si nanoparticles synthesized from low-cost Al-Si alloy powder, resulting in about twice the capacity of the Si/G paper anode. The self-standing Si-CNT/graphene paper anode exhibited a high specific capacity of 1100 mAh g-1 even after 100 cycles at a 200 mA g-1 current density with a Coulombic efficiency of >99%. It also showed remarkable rate-capability improvement compared to Si/G paper without CNTs. The present work demonstrates a low-cost, scalable in situ approach from commercial micro-sized Al-Si alloy powder to Si-based composites with a specific nanostructure. The Si-CNT/graphene paper is a promising anode candidate with high capacity and cycling stability for LIBs, especially for flexible battery applications.
Alignment-free inference of hierarchical and reticulate phylogenomic relationships.
Bernard, Guillaume; Chan, Cheong Xin; Chan, Yao-Ban; Chua, Xin-Yi; Cong, Yingnan; Hogan, James M; Maetschke, Stefan R; Ragan, Mark A
2017-06-30
We are amidst an ongoing flood of sequence data arising from the application of high-throughput technologies, and a concomitant fundamental revision in our understanding of how genomes evolve individually and within the biosphere. Workflows for phylogenomic inference must accommodate data that are not only much larger than before, but often more error prone and perhaps misassembled, or not assembled in the first place. Moreover, genomes of microbes, viruses and plasmids evolve not only by tree-like descent with modification but also by incorporating stretches of exogenous DNA. Thus, next-generation phylogenomics must address computational scalability while rethinking the nature of orthogroups, the alignment of multiple sequences and the inference and comparison of trees. New phylogenomic workflows have begun to take shape based on so-called alignment-free (AF) approaches. Here, we review the conceptual foundations of AF phylogenetics for the hierarchical (vertical) and reticulate (lateral) components of genome evolution, focusing on methods based on k-mers. We reflect on what seems to be successful, and on where further development is needed. © The Author 2017. Published by Oxford University Press.
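The k-mer-based AF idea reviewed above reduces to counting short words in each sequence and comparing the resulting profiles without ever aligning the sequences. The sketch below shows that basic pattern with a cosine distance on toy sequences; k and the sequences are arbitrary, and published AF methods (D2-type statistics and their relatives) refine this considerably.

```python
# Minimal alignment-free comparison: count k-mers per sequence, then compare
# the profiles with a cosine distance. Illustration only.
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=5):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(p, q):
    shared = set(p) & set(q)
    dot = sum(p[m] * q[m] for m in shared)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / norm if norm else 1.0

a = "ATGGCGTACGTTAGCGGATCCATGGCATGCATTTGACGTA"
b = "ATGGCGTACGTAAGCGGATCCATGCCATGCATTTGACGTA"
c = "TTTTACGCGCGCGATATATATACGCGCTATATAGCGCGCG"
print(cosine_distance(kmer_profile(a), kmer_profile(b)))  # similar pair -> small
print(cosine_distance(kmer_profile(a), kmer_profile(c)))  # dissimilar -> larger
```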
Single-cell barcoding and sequencing using droplet microfluidics.
Zilionis, Rapolas; Nainys, Juozas; Veres, Adrian; Savova, Virginia; Zemmour, David; Klein, Allon M; Mazutis, Linas
2017-01-01
Single-cell RNA sequencing has recently emerged as a powerful tool for mapping cellular heterogeneity in diseased and healthy tissues, yet high-throughput methods are needed for capturing the unbiased diversity of cells. Droplet microfluidics is among the most promising candidates for capturing and processing thousands of individual cells for whole-transcriptome or genomic analysis in a massively parallel manner with minimal reagent use. We recently established a method called inDrops, which has the capability to index >15,000 cells in an hour. A suspension of cells is first encapsulated into nanoliter droplets with hydrogel beads (HBs) bearing barcoding DNA primers. Cells are then lysed and mRNA is barcoded (indexed) by a reverse transcription (RT) reaction. Here we provide details for (i) establishing an inDrops platform (1 d); (ii) performing hydrogel bead synthesis (4 d); (iii) encapsulating and barcoding cells (1 d); and (iv) RNA-seq library preparation (2 d). inDrops is a robust and scalable platform, and it is unique in its ability to capture and profile >75% of cells in even very small samples, on a scale of thousands or tens of thousands of cells.
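Downstream of the barcoding chemistry described above, the computational step is essentially grouping reads by their cell barcode and UMI. The sketch below shows that grouping for a deliberately simplified read layout (a fixed 16-nt barcode followed by a 6-nt UMI at the start of read 1); this layout is a hypothetical stand-in and is not the actual inDrops read structure, which splits the barcode around an adapter sequence.

```python
# Hypothetical demultiplexing sketch: group (UMI, cDNA) pairs by an assumed
# fixed-length cell barcode at the start of read 1. Layout is illustrative only.
from collections import defaultdict

BC_LEN, UMI_LEN = 16, 6        # assumed lengths, not the real inDrops design

def demultiplex(reads):
    """reads: iterable of (read1_sequence, read2_cdna) tuples."""
    cells = defaultdict(list)
    for r1, cdna in reads:
        barcode, umi = r1[:BC_LEN], r1[BC_LEN:BC_LEN + UMI_LEN]
        cells[barcode].append((umi, cdna))
    return cells

reads = [
    ("AAACCCGGGTTTAAAC" + "ACGTAC" + "XXXX", "TTCTGAGCCATGAC"),
    ("AAACCCGGGTTTAAAC" + "GGATCC" + "XXXX", "GGCATTTACGATCA"),
    ("TTTGGGAAACCCTTTG" + "CCAATT" + "XXXX", "CATGACCGATTACA"),
]
cells = demultiplex(reads)
print({bc: len(v) for bc, v in cells.items()})   # reads per cell barcode
```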
eCOMPAGT – efficient Combination and Management of Phenotypes and Genotypes for Genetic Epidemiology
Schönherr, Sebastian; Weißensteiner, Hansi; Coassin, Stefan; Specht, Günther; Kronenberg, Florian; Brandstätter, Anita
2009-01-01
Background: High-throughput genotyping and phenotyping projects of large epidemiological study populations require sophisticated laboratory information management systems. Most epidemiological studies include subject-related personal information, which needs to be handled with care by following data privacy protection guidelines. In addition, genotyping core facilities handling cooperative projects require a straightforward solution to monitor the status and financial resources of the different projects. Description: We developed a database system for the efficient combination and management of phenotypes and genotypes (eCOMPAGT) deriving from genetic epidemiological studies. eCOMPAGT securely stores and manages genotype and phenotype data and enables different user modes with different rights. Special attention was paid to the import of data deriving from TaqMan and SNPlex genotyping assays. However, the database solution is adjustable to other genotyping systems by programming additional interfaces. Further important features are the scalability of the database and an export interface to statistical software. Conclusion: eCOMPAGT can store, administer and connect phenotype data with all kinds of genotype data and is available as a downloadable version at . PMID:19432954
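The core of such a system is a relational link between subjects, their phenotypes, and their genotype calls, plus an export-style join for statistical software. The sketch below is a generic illustration of that linkage using Python's built-in sqlite3; the table layout and column names are my own and are not the actual eCOMPAGT schema.

```python
# Illustrative genotype/phenotype store; generic schema, not eCOMPAGT's.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subject   (id INTEGER PRIMARY KEY, pseudonym TEXT UNIQUE);
CREATE TABLE phenotype (subject_id INTEGER REFERENCES subject(id),
                        trait TEXT, value REAL);
CREATE TABLE genotype  (subject_id INTEGER REFERENCES subject(id),
                        snp TEXT, call TEXT);
""")
con.execute("INSERT INTO subject VALUES (1, 'S-0001')")       # pseudonymized ID
con.execute("INSERT INTO phenotype VALUES (1, 'LDL_mg_dl', 132.0)")
con.execute("INSERT INTO genotype VALUES (1, 'rs1234', 'AG')")

# Export-style join: one row per subject/SNP with the phenotype attached.
for row in con.execute("""
    SELECT s.pseudonym, g.snp, g.call, p.trait, p.value
    FROM subject s JOIN genotype g ON g.subject_id = s.id
                   JOIN phenotype p ON p.subject_id = s.id"""):
    print(row)
```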
Silk protein nanowires patterned using electron beam lithography.
Pal, Ramendra K; Yadavalli, Vamsi K
2018-08-17
Nanofabrication approaches to pattern proteins at the nanoscale are useful in applications ranging from organic bioelectronics to cellular engineering. Specifically, functional materials based on natural polymers offer sustainable and environment-friendly substitutes for synthetic polymers. Silk proteins (fibroin and sericin) have emerged as an important class of biomaterials for next-generation applications owing to their excellent optical and mechanical properties, inherent biocompatibility, and biodegradability. However, the ability to precisely control their spatial positioning at the nanoscale via high-throughput tools remains a challenge. In this study, electron beam lithography (EBL) is used to provide nanoscale patterning using methacrylate-conjugated silk proteins that act as photoreactive 'photoresist' materials. Very low energy electron beam radiation can be used to pattern silk proteins at the nanoscale and over large areas, so that such nanostructure fabrication can be performed without specialized EBL tools. Significantly, using conducting polymers in conjunction with these silk proteins, the formation of protein nanowires down to 100 nm is shown. These wires can be easily degraded using enzymatic degradation. Thus, proteins can be precisely and scalably patterned and doped with conducting polymers and enzymes to form degradable, organic bioelectronic devices.
High-throughput 3D whole-brain quantitative histopathology in rodents
Vandenberghe, Michel E.; Hérard, Anne-Sophie; Souedet, Nicolas; Sadouni, Elmahdi; Santin, Mathieu D.; Briet, Dominique; Carré, Denis; Schulz, Jocelyne; Hantraye, Philippe; Chabrier, Pierre-Etienne; Rooney, Thomas; Debeir, Thomas; Blanchard, Véronique; Pradier, Laurent; Dhenain, Marc; Delzescaux, Thierry
2016-01-01
Histology is the gold standard to unveil microscopic brain structures and pathological alterations in humans and animal models of disease. However, due to tedious manual interventions, quantification of histopathological markers is classically performed on a few tissue sections, thus restricting measurements to limited portions of the brain. Recently developed 3D microscopic imaging techniques have allowed in-depth study of neuroanatomy. However, quantitative methods are still lacking for whole-brain analysis of cellular and pathological markers. Here, we propose a ready-to-use, automated, and scalable method to thoroughly quantify histopathological markers in 3D in rodent whole brains. It relies on block-face photography, serial histology and 3D-HAPi (Three Dimensional Histology Analysis Pipeline), an open source image analysis software. We illustrate our method in studies involving mouse models of Alzheimer’s disease and show that it can be broadly applied to characterize animal models of brain diseases, to evaluate therapeutic interventions, to anatomically correlate cellular and pathological markers throughout the entire brain and to validate in vivo imaging techniques. PMID:26876372
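After registration, the quantification step described above amounts to stacking the serial sections into a volume and measuring marker load per anatomical region. The sketch below shows that final counting step only, on synthetic data with a fixed intensity threshold and fake region labels; it is not 3D-HAPi and omits the block-face registration entirely.

```python
# Minimal whole-volume quantification sketch: stack sections into a 3-D array,
# threshold the histopathology channel, report marker load per region label.
import numpy as np

rng = np.random.default_rng(3)
n_sections, h, w = 40, 128, 128
stain = rng.random((n_sections, h, w))              # stand-in for stacked sections
atlas = rng.integers(1, 4, (n_sections, h, w))      # fake region labels 1..3

positive = stain > 0.95                             # fixed intensity threshold
for region in (1, 2, 3):
    mask = atlas == region
    load = positive[mask].mean() * 100
    print(f"region {region}: {load:.2f}% marker-positive voxels")
```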
Quantitative proteomic analysis in breast cancer.
Tabchy, A; Hennessy, B T; Gonzalez-Angulo, A M; Bernstam, F M; Lu, Y; Mills, G B
2011-02-01
Much progress has recently been made in the genomic and transcriptional characterization of tumors. However, historically the characterization of cells at the protein level has suffered limitations in reproducibility, scalability and robustness. Recent technological advances have made it possible to accurately and reproducibly portray the global levels and active states of cellular proteins. Protein microarrays examine the native post-translational conformations of proteins including activated phosphorylated states, in a comprehensive high-throughput mode, and can map activated pathways and networks of proteins inside the cells. The reverse-phase protein microarray (RPPA) offers a unique opportunity to study signal transduction networks in small biological samples such as human biopsy material and can provide critical information for therapeutic decision-making and the monitoring of patients for targeted molecular medicine. By providing the key missing link to the story generated from genomic and gene expression characterization efforts, functional proteomics offer the promise of a comprehensive understanding of cancer. Several initial successes in breast cancer are showing that such information is clinically relevant. Copyright 2011 Prous Science, S.A.U. or its licensors. All rights reserved.
Pijuan-Galitó, Sara; Tamm, Christoffer; Schuster, Jens; Sobol, Maria; Forsberg, Lars; Merry, Catherine L. R.; Annerén, Cecilia
2016-01-01
Reliable, scalable and time-efficient culture methods are required to fully realize the clinical and industrial applications of human pluripotent stem (hPS) cells. Here we present a completely defined, xeno-free medium that supports long-term propagation of hPS cells on uncoated tissue culture plastic. The medium consists of the Essential 8 (E8) formulation supplemented with inter-α-inhibitor (IαI), a human serum-derived protein, recently demonstrated to activate key pluripotency pathways in mouse PS cells. IαI efficiently induces attachment and long-term growth of both embryonic and induced hPS cell lines when added as a soluble protein to the medium at seeding. IαI supplementation efficiently supports adaptation of feeder-dependent hPS cells to xeno-free conditions, clonal growth as well as single-cell survival in the absence of Rho-associated kinase inhibitor (ROCKi). This time-efficient and simplified culture method paves the way for large-scale, high-throughput hPS cell culture, and will be valuable for both basic research and commercial applications. PMID:27405751
Atomic-layer soft plasma etching of MoS2
Xiao, Shaoqing; Xiao, Peng; Zhang, Xuecheng; Yan, Dawei; Gu, Xiaofeng; Qin, Fang; Ni, Zhenhua; Han, Zhao Jun; Ostrikov, Kostya (Ken)
2016-01-01
Transition from multi-layer to monolayer and sub-monolayer thickness leads to the many exotic properties and distinctive applications of two-dimensional (2D) MoS2. This transition requires atomic-layer-precision thinning of bulk MoS2 without damaging the remaining layers, which presently remains elusive. Here we report a soft, selective and high-throughput atomic-layer-precision etching of MoS2 in SF6 + N2 plasmas with low-energy (<0.4 eV) electrons and minimized ion-bombardment-related damage. Equal numbers of MoS2 layers are removed uniformly across domains with vastly different initial thickness, without affecting the underlying SiO2 substrate and the remaining MoS2 layers. The etching rates can be tuned to achieve complete MoS2 removal and any desired number of MoS2 layers including monolayer. Layer-dependent vibrational and photoluminescence spectra of the etched MoS2 are also demonstrated. This soft plasma etching technique is versatile, scalable, compatible with the semiconductor manufacturing processes, and may be applicable for a broader range of 2D materials and intended device applications. PMID:26813335
Automated Operant Conditioning in the Mouse Home Cage.
Francis, Nikolas A; Kanold, Patrick O
2017-01-01
Recent advances in neuroimaging and genetics have made mice an advantageous animal model for studying the neurophysiology of sensation, cognition, and locomotion. A key benefit of mice is that they provide a large population of test subjects for behavioral screening. Reflex-based assays of hearing in mice, such as the widely used acoustic startle response, are less accurate than operant conditioning in measuring auditory processing. To date, however, there are few cost-effective options for scalable operant conditioning systems. Here, we describe a new system for automated operant conditioning, the Psibox. It is assembled from low cost parts, designed to fit within typical commercial wire-top cages, and allows large numbers of mice to train independently in their home cages on positive reinforcement tasks. We found that groups of mice trained together learned to accurately detect sounds within 2 weeks of training. In addition, individual mice isolated from groups also showed good task performance. The Psibox facilitates high-throughput testing of sensory, motor, and cognitive skills in mice, and provides a readily available animal population for studies ranging from experience-dependent neural plasticity to rodent models of mental disorders.
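At its core, a home-cage operant system like the one described runs a simple trial loop: present a sound, watch for a response within a window, and reward hits. The sketch below is only a schematic of that loop; play_tone, poll_nosepoke, and dispense_water are hypothetical placeholders for whatever I/O the actual rig exposes, and responses here are simulated rather than read from hardware.

```python
# Minimal operant trial loop with stubbed, hypothetical hardware functions.
import random
import time

RESPONSE_WINDOW_S = 3.0

def play_tone(freq_hz):
    print(f"tone {freq_hz} Hz")

def poll_nosepoke():
    return random.random() < 0.05     # simulated sensor, not real hardware

def dispense_water():
    print("reward delivered")

def run_trial(freq_hz=8000):
    play_tone(freq_hz)
    deadline = time.monotonic() + RESPONSE_WINDOW_S
    while time.monotonic() < deadline:
        if poll_nosepoke():
            dispense_water()
            return True               # hit within the response window
        time.sleep(0.01)
    return False                      # miss

hits = sum(run_trial() for _ in range(20))
print(f"hit rate: {hits / 20:.2f}")
```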
Zhavoronkov, Alex; Buzdin, Anton A.; Garazha, Andrey V.; Borisov, Nikolay M.; Moskalev, Alexey A.
2014-01-01
The major challenges of aging research include the absence of a comprehensive set of aging biomarkers, the time it takes to evaluate the effects of various interventions on longevity in humans, and the difficulty of extrapolating results from model organisms to humans. To address these challenges we propose an in silico method for screening and ranking possible geroprotectors, followed by high-throughput in vivo and in vitro validation. The proposed method evaluates changes in the collection of activated or suppressed signaling pathways involved in aging and longevity, termed the signaling pathway cloud, constructed using gene expression data and epigenetic profiles of young and old patients' tissues. Possible interventions are selected and rated according to their ability to regulate age-related changes and minimize differences in the signaling pathway cloud. While many algorithmic solutions to simulating the induction of old into young metabolic profiles in silico are possible, this flexible and scalable approach may potentially be used to predict the efficacy of drugs that may extend human longevity before conducting pre-clinical work and expensive clinical trials. PMID:24624136
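The ranking idea described above can be pictured as scoring each candidate intervention by how far it moves an "old" pathway-activation profile back toward the "young" one. The sketch below uses synthetic pathway scores, a Euclidean distance, and random drug effect vectors purely to illustrate that scoring scheme; none of the numbers or names correspond to real signatures or to the authors' algorithm.

```python
# Rank candidate interventions by how much they shrink the distance between
# an "old" and a "young" pathway-activation profile. Synthetic data only.
import numpy as np

rng = np.random.default_rng(7)
n_pathways = 50
young = rng.normal(0.0, 1.0, n_pathways)           # reference activation profile
old = young + rng.normal(0.8, 0.5, n_pathways)     # systematically shifted profile

drugs = {f"drug_{i}": rng.normal(-0.6, 0.6, n_pathways) for i in range(10)}

def distance(profile):
    return float(np.linalg.norm(profile - young))   # distance to the "young" cloud

baseline = distance(old)
ranking = sorted(drugs.items(), key=lambda kv: distance(old + kv[1]))
for name, effect in ranking[:3]:                    # top-ranked candidates
    print(name, "residual distance:", round(distance(old + effect), 2),
          "vs baseline", round(baseline, 2))
```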