NASA Astrophysics Data System (ADS)
Chakraborty, Swarnendu Kumar; Goswami, Rajat Subhra; Bhunia, Chandan Tilak; Bhunia, Abhinandan
2016-06-01
The aggressive packet combining (APC) scheme is well established in the literature, and several modifications have been studied earlier for improving throughput. In this paper, three new modifications of APC are proposed; the performance of the proposed modified APC is studied by simulation and reported here. A hybrid scheme is proposed for achieving higher throughput, and the disjoint factor of conventional APC is compared with that of the proposed schemes.
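The core APC idea — requesting retransmissions of an erroneous packet and combining the copies by bitwise majority voting — can be sketched as follows. This is a minimal illustration of the combining step only; the paper's proposed modifications and disjoint-factor analysis are not modeled here, and the test vectors are invented:

```python
def majority_combine(copies):
    """Bitwise majority vote over an odd number of received copies."""
    n = len(copies)
    assert n % 2 == 1, "majority voting needs an odd number of copies"
    length = len(copies[0])
    combined = []
    for i in range(length):
        ones = sum(c[i] for c in copies)
        combined.append(1 if ones > n // 2 else 0)
    return combined

sent = [1, 0, 1, 1, 0, 0, 1, 0]
# three erroneous copies, each with one bit flipped in a different position
r1 = [0, 0, 1, 1, 0, 0, 1, 0]
r2 = [1, 0, 0, 1, 0, 0, 1, 0]
r3 = [1, 0, 1, 1, 1, 0, 1, 0]
print(majority_combine([r1, r2, r3]) == sent)  # → True
```

As long as no bit position is corrupted in a majority of the copies, the vote recovers the transmitted packet without a further retransmission.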
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
The two popular packet-combining-based error correction schemes are the packet combining (PC) scheme and the aggressive packet combining (APC) scheme. Each has its own merits and demerits: the PC scheme has better throughput than the APC scheme, but suffers from a higher packet error rate. The wireless channel state changes all the time, and because of this random, time-varying nature, individual application of the SR ARQ, PC, or APC scheme cannot give the desired level of throughput. Better throughput can be achieved if the transmission scheme is chosen to match the channel condition. Based on this approach, an adaptive packet combining scheme has been proposed: it adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme to achieve better throughput. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes.
Asif, Muhammad; Guo, Xiangzhou; Zhang, Jing; Miao, Jungang
2018-04-17
Digital cross-correlation is central to many applications, including but not limited to digital image processing, satellite navigation, and remote sensing. With recent advancements in digital technology, the computational demands of such applications have increased enormously. In this paper we present a high-throughput digital cross-correlator capable of processing a 1-bit digitized stream at rates of up to 2 GHz simultaneously on 64 channels, i.e., approximately 4 trillion correlation-and-accumulation operations per second. To achieve higher throughput, we have focused on frequency-based partitioning of the design, minimizing and localizing the high-frequency operations. The correlator is designed for a passive millimeter-wave imager intended for the detection of contraband items concealed on the human body; the goals are to increase the system bandwidth, achieve video-rate imaging, improve sensitivity, and reduce the size. The design methodology is detailed in subsequent sections, elaborating the techniques that enable high throughput. The design is verified in simulation for a Xilinx Kintex UltraScale device, and the implementation results are given in terms of device utilization and power consumption estimates. Our results show considerable improvements in throughput compared with our baseline design, while the correlator successfully meets the functional requirements.
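The 1-bit correlation-and-accumulation such a correlator performs reduces, per sample, to an XNOR (agreement test) feeding a counter. A software sketch of that arithmetic (illustrative only, not the FPGA design):

```python
def correlate_1bit(a, b):
    """1-bit correlation: count bit agreements (XNOR), then map to the
    equivalent sum of (+/-1) x (+/-1) products."""
    assert len(a) == len(b)
    agree = sum(1 for x, y in zip(a, b) if x == y)
    return 2 * agree - len(a)

a = [1, 0, 1, 1, 0, 1]
b = [1, 1, 1, 0, 0, 1]
print(correlate_1bit(a, b))  # → 2
```

In hardware the XNOR-and-count runs once per sample per channel pair; at 2 GHz over 64 channels that multiplies out to roughly the quoted 4 trillion operations per second.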
NASA Astrophysics Data System (ADS)
Fang, Sheng-Po; Jao, PitFee; Senior, David E.; Kim, Kyoung-Tae; Yoon, Yong-Kyu
2017-12-01
High-throughput nanomanufacturing of photopatternable nanofibers and subsequent photopatterning is reported. For the production of high-density nanofibers, the tube nozzle electrospinning (TNE) process has been used, in which micronozzles arrayed on the sidewall of a plastic tube serve as spinnerets. Increasing the nozzle density lets the electric fields of adjacent nozzles confine the electrospinning cone and yields a higher density of nanofibers. With TNE, higher-density nozzles are more easily achievable than with metallic nozzles; e.g., an inter-nozzle distance as small as 0.5 cm and an average semi-vertical repulsion angle of 12.28° were achieved for 8 nozzles. Nanofiber diameter distribution, mass throughput rate, and growth rate of nanofiber stacks under different operating conditions and numbers of nozzles (2, 4, and 8), as well as scalability with single- and double-tube configurations, are discussed. Nanofibers made of SU-8, a photopatternable epoxy, have been collected to a thickness of over 80 μm in 240 s of electrospinning, and a production rate of 0.75 g/h is achieved using the two-tube, eight-nozzle system, followed by photolithographic micropatterning. TNE is scalable to a large number of nozzles, and offers high-throughput production, plug-and-play capability with standard electrospinning equipment, and little waste of polymer.
Throughput increase by adjustment of the BARC drying time with coat track process
NASA Astrophysics Data System (ADS)
Brakensiek, Nickolas L.; Long, Ryan
2005-05-01
Throughput of a coater module within the coater track is related to the solvent evaporation rate from the material being coated. The evaporation rate is controlled by the spin dynamics of the wafer and the airflow dynamics over the wafer; balancing these effects is the key to achieving very uniform coatings across a flat, unpatterned wafer. As today's coat tracks are pushed to higher throughputs to match the scanner, the coat module throughput must be increased as well. For chemical manufacturers, the evaporation rate of the material depends on the solvent used. One measure of relative evaporation rate is the flash point of a solvent: the lower the flash point, the quicker the solvent evaporates. It is possible to formulate products with these volatile solvents, although at a price. Shipping and manufacturing a more flammable product increases the chance of fire, thereby increasing insurance premiums, and the end user of these chemicals must take extra precautions in the fab and in storage. An alternative coat process is possible that allows higher throughput in a distinct coat module without sacrificing safety. This process requires a tradeoff: a more complicated coat process and a higher-viscosity chemical. It exploits the dependence of evaporation rate on the spin dynamics of the wafer by using a series of spin speeds that first set the thickness of the material, followed by a high spin speed to remove the residual solvent. This new process can yield a throughput of over 150 wafers per hour (wph) given two coat modules. The thickness uniformity of less than 2 nm (3 sigma) is still excellent, while drying times are shorter than 10 seconds to achieve the 150 wph throughput targets.
Implicit Block ACK Scheme for IEEE 802.11 WLANs
Sthapit, Pranesh; Pyun, Jae-Young
2016-01-01
The throughput of the IEEE 802.11 standard is significantly bounded by the associated Medium Access Control (MAC) overhead. Because of this overhead, an upper limit on throughput exists even when data rates are extremely high; an overhead reduction is therefore necessary to achieve higher throughput. The IEEE 802.11e amendment introduced the block ACK mechanism to reduce the number of control messages in the MAC. Although the block ACK scheme greatly reduces overhead, further improvements are possible. In this letter, we propose an implicit block ACK method that further reduces the overhead associated with IEEE 802.11e's block ACK scheme. Mathematical analysis results are presented for both the original protocol and the proposed scheme. A performance improvement of greater than 10% was achieved with the proposed implementation.
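How a block ACK amortizes per-frame control overhead can be illustrated with a toy airtime-efficiency model. The timing constants below are invented for illustration and are not taken from the 802.11e analysis:

```python
def efficiency(n_frames, t_data, t_overhead_per_frame, t_block_ack):
    """Fraction of airtime carrying payload when one (block) ACK
    covers n_frames aggregated data frames. Times in arbitrary units."""
    useful = n_frames * t_data
    total = n_frames * (t_data + t_overhead_per_frame) + t_block_ack
    return useful / total

# illustrative microsecond-scale numbers
print(round(efficiency(1, 200, 50, 80), 3))   # → 0.606  (per-frame ACK)
print(round(efficiency(16, 200, 50, 80), 3))  # → 0.784  (one ACK per 16 frames)
```

The ACK cost is paid once per block instead of once per frame, so efficiency rises monotonically with the block size; an implicit block ACK pushes the `t_block_ack` term down further.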
A 0.13-µm implementation of 5 Gb/s and 3-mW folded parallel architecture for AES algorithm
NASA Astrophysics Data System (ADS)
Rahimunnisa, K.; Karthigaikumar, P.; Kirubavathy, J.; Jayakumar, J.; Kumar, S. Suresh
2014-02-01
A new architecture for encrypting and decrypting confidential data using the Advanced Encryption Standard algorithm is presented in this article. The structure combines a folded structure with a parallel architecture to increase throughput, and achieves high throughput with low power. The proposed architecture is implemented in 0.13-µm complementary metal-oxide-semiconductor (CMOS) technology and compared with different existing structures; the results show that it gives higher throughput and lower power than existing works.
Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang
2017-04-01
Among high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible compared with next-generation sequencing or ChIP-based methods. However, PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by multiplex primer interactions, so their throughput cannot meet the demands of high-throughput detection such as SNP or gene expression analysis. Therefore, in our study we developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, showing that ME-qPCR is more sensitive than the original qPCR. The absolute limit of detection for ME-qPCR could reach levels as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods; the amplification results, more stable than those of qPCR, showed that ME-qPCR is suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression studies of food or agricultural samples. Graphical abstract: In the first-step amplification, four primers (A, B, C, and D) are added to the reaction volume, generating four kinds of amplicons, each of which serves as a target for the second-step PCR. In the second-step amplification, three parallel reactions are run for the final evaluation, after which the final amplification curves and melting curves are obtained.
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; this field is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data, with unique performance, complexity, and quality-of-service challenges, and they consist of a large number of battery-powered, resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs, and meeting stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve end-to-end routes with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers both the "system perspective" and the "user perspective" is proposed to determine near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while simultaneously achieving higher system throughput in stringently resource-constrained WVSNs.
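The Lagrangian Relaxation idea behind such constrained routing — folding the delay constraint into the link weight with a multiplier and adjusting the multiplier by subgradient steps — can be sketched generically. This is not the paper's distributed NODQC algorithm; the graph, step size, and iteration count are illustrative:

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Shortest path under a given edge-weight function.
    graph: {node: [(neighbour, cost, delay), ...]}"""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c, dl in graph[u]:
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def path_metrics(graph, path):
    cost = delay = 0.0
    for u, v in zip(path, path[1:]):
        for n, c, dl in graph[u]:
            if n == v:
                cost += c
                delay += dl
                break
    return cost, delay

def lr_route(graph, src, dst, delay_bound, iters=30, step=0.1):
    """Lagrangian-relaxed routing: weight = cost + lam * delay,
    with lam updated by a subgradient step on the delay violation."""
    lam, best = 0.0, None
    for _ in range(iters):
        path = dijkstra(graph, src, dst, lambda c, dl: c + lam * dl)
        cost, delay = path_metrics(graph, path)
        if delay <= delay_bound and (best is None or cost < best[1]):
            best = (path, cost)
        lam = max(0.0, lam + step * (delay - delay_bound))
    return best

# toy network: a cheap-but-slow route (a-b-d) and a costly-but-fast one (a-c-d)
graph = {
    "a": [("b", 1.0, 10.0), ("c", 4.0, 1.0)],
    "b": [("d", 1.0, 10.0)],
    "c": [("d", 4.0, 1.0)],
    "d": [],
}
print(lr_route(graph, "a", "d", delay_bound=5.0))  # → (['a', 'c', 'd'], 8.0)
```

With the delay bound at 5, the multiplier rises until the cheapest path under the relaxed weight is the delay-feasible one, which is then kept as the incumbent.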
Real-Time Aggressive Image Data Compression
1990-03-31
...implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates... Project Summary. Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Institution: ... Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression...
Linking Emotional Intelligence to Achieve Technology Enhanced Learning in Higher Education
ERIC Educational Resources Information Center
Kruger, Janette; Blignaut, A. Seugnet
2013-01-01
Higher education institutions (HEIs) increasingly use technology-enhanced learning (TEL) environments (e.g. blended learning and e-learning) to improve student throughput and retention rates. As the demand for TEL courses increases, expectations rise for faculty to meet the challenge of using TEL effectively. The promises that TEL holds have not…
On the Achievable Throughput Over TVWS Sensor Networks
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
High count-rate study of two TES x-ray microcalorimeters with different transition temperatures
NASA Astrophysics Data System (ADS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.; Sadleir, John E.; Smith, Stephen J.; Wassell, Edward J.
2017-10-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures (Tc). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate, where the thermal conductance of the microcalorimeter is dependent upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different Tc had very different thermal decay times, approximately one order of magnitude apart. In our earlier report, we achieved energy resolutions of 1.6 and 2.3 eV at 6 keV from the lower- and higher-Tc devices, respectively, using a standard analysis method based on optimal filtering in the low flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the so-called event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96% throughput for 6 keV x-rays at 1025 Hz per pixel with the higher-Tc (faster) device, and 5.8 eV FWHM with 97% throughput with the lower-Tc (slower) device at 722 Hz.
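The event-grade idea — classifying each pulse by its separation from its neighbours so that well-isolated pulses get the full-length optimal filter while piled-up ones get shorter filters or are discarded — can be sketched as follows. The thresholds and arrival times are illustrative, not the instrument's values:

```python
def grade_events(arrival_times, hi_sep, mid_sep):
    """Grade pulses by the minimum separation from their neighbours:
    0 = well isolated (longest filter), 1 = intermediate, 2 = piled-up."""
    grades = []
    n = len(arrival_times)
    for i, t in enumerate(arrival_times):
        prev_gap = t - arrival_times[i - 1] if i > 0 else float("inf")
        next_gap = arrival_times[i + 1] - t if i < n - 1 else float("inf")
        sep = min(prev_gap, next_gap)
        if sep >= hi_sep:
            grades.append(0)
        elif sep >= mid_sep:
            grades.append(1)
        else:
            grades.append(2)
    return grades

times = [0.0, 10.0, 10.4, 25.0]  # illustrative arrival times, ms
print(grade_events(times, hi_sep=5.0, mid_sep=1.0))  # → [0, 2, 2, 0]
```

The resolution/throughput tradeoff then amounts to choosing which grades to keep: accepting lower grades raises throughput at the cost of FWHM.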
Netest: A Tool to Measure the Maximum Burst Size, Available Bandwidth and Achievable Throughput
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Guojun; Tierney, Brian
2003-01-31
Distinguishing available bandwidth and achievable throughput is essential for improving network applications' performance. Achievable throughput is the throughput considering a number of factors such as network protocol, host speed, network path, and TCP buffer space, whereas available bandwidth considers only the network path. Without understanding this difference, trying to improve network applications' performance is like "blind men feeling the elephant" [4]. In this paper, we define and distinguish bandwidth and throughput, and discuss which part of each is achievable and which is available. We also introduce and discuss a new concept, the maximum burst size, that is crucial to network performance and bandwidth sharing. A tool, netest, is introduced to help users determine the available bandwidth; it provides information for achieving better throughput while sharing the available bandwidth fairly, thus reducing misuse of the network.
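The distinction the authors draw can be made concrete with the classic window-limit relation: achievable TCP throughput is capped both by the available bandwidth of the path and by the sender's buffer divided by the round-trip time. A sketch of that relation (not netest's actual measurement method):

```python
def achievable_throughput(avail_bw_mbps, tcp_buffer_bytes, rtt_s):
    """Achievable TCP throughput (Mbit/s) is the lesser of the path's
    available bandwidth and the window/RTT limit from the buffer size."""
    window_limited = tcp_buffer_bytes * 8 / rtt_s / 1e6
    return min(avail_bw_mbps, window_limited)

# A 64 KiB buffer over a 50 ms path caps throughput far below a
# 1 Gbit/s bottleneck (illustrative numbers):
print(achievable_throughput(1000, 64 * 1024, 0.05))  # ≈ 10.49 Mbit/s
```

This is why raising the available bandwidth alone often does nothing for an application: the buffer/RTT term, not the path, is the binding constraint.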
NASA Astrophysics Data System (ADS)
Taoka, Hidekazu; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru
This paper presents comparisons between common and dedicated reference signals (RSs) for channel estimation in MIMO multiplexing using codebook-based precoding for orthogonal frequency division multiplexing (OFDM) radio access in the Evolved UTRA downlink with frequency division duplexing (FDD). We clarify the best RS structure for precoding-based MIMO multiplexing based on comparisons of the structures in terms of the achievable throughput taking into account the overhead of the common and dedicated RSs and the precoding matrix indication (PMI) signal. Based on extensive simulations on the throughput in 2-by-2 and 4-by-4 MIMO multiplexing with precoding, we clarify that channel estimation based on common RSs multiplied with the precoding matrix indicated by the PMI signal achieves higher throughput compared to that using dedicated RSs irrespective of the number of spatial multiplexing streams when the number of available precoding matrices, i.e., the codebook size, is less than approximately 16 and 32 for 2-by-2 and 4-by-4 MIMO multiplexing, respectively.
GPU Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.
2014-01-01
Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
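The predict-and-encode structure common to FL/CCSDS-123-style lossless compressors can be shown in miniature. The trivial previous-band predictor and zig-zag residual mapping below are illustrative stand-ins for the adaptive filtering and entropy coding actually used:

```python
def predict_residuals(band, prev_band):
    """Per-sample residuals against a previous-band predictor.
    (The actual FL/CCSDS-123 predictor is adaptive; this is illustrative.)"""
    return [x - p for x, p in zip(band, prev_band)]

def map_to_nonnegative(r):
    """Zig-zag map signed residuals to non-negative ints, ready for a
    Golomb-style entropy coder: small residuals get small codes."""
    return 2 * r if r >= 0 else -2 * r - 1

band      = [101, 103, 98, 100]
prev_band = [100, 104, 99, 99]
res = predict_residuals(band, prev_band)
print(res)                                   # → [1, -1, -1, 1]
print([map_to_nonnegative(r) for r in res])  # → [2, 1, 1, 2]
```

Good spectral prediction concentrates residuals near zero, which is what makes the subsequent entropy coding both effective and cheap; the GPU's role in the reported system is to run these independent per-sample steps massively in parallel.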
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding (HEVC) standard is presented. First, an implementation-friendly, simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, integrating both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture reaches a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
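The SAO edge-offset classification that such hardware implements assigns each sample one of five categories by comparing it with its two neighbours along a direction, then adds a signalled offset per category. A sketch with invented offset values (the standard signals the offsets in the bitstream):

```python
def sao_edge_category(left, cur, right):
    """HEVC SAO edge-offset category along one direction (0 = no offset)."""
    if cur < left and cur < right:
        return 1  # local minimum
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2  # concave edge
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3  # convex edge
    if cur > left and cur > right:
        return 4  # local maximum
    return 0

offsets = {0: 0, 1: 2, 2: 1, 3: -1, 4: -2}  # illustrative offset values
row = [10, 7, 10, 10, 13, 10]
filtered = [row[0]] + [
    row[i] + offsets[sao_edge_category(row[i - 1], row[i], row[i + 1])]
    for i in range(1, len(row) - 1)
] + [row[-1]]
print(filtered)  # → [10, 9, 9, 11, 11, 10]
```

Each sample's category depends only on a 3-sample neighbourhood, which is exactly what makes the filter amenable to the deep parallelism the paper's architecture exploits.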
Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems
NASA Astrophysics Data System (ADS)
Bergel, Itsik; Perets, Yona; Shamai, Shlomo
2016-05-01
In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then study the scaling of the system throughput with the number of antennas for linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains: for example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas increases only logarithmically with the number of receive antennas.
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) remains after MMSE-FDE and degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity is effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) obtains antenna diversity gain from space-time coding and achieves a better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI, and that introducing ICI cancellation gives almost the same BER performance as STTD. This study provides the important result that CDTD then has a great advantage in providing a higher throughput than STTD. This is confirmed by computer simulation: the results show that CDTD can achieve higher throughput than STTD when ICI cancellation is introduced.
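CDTD itself is simple to sketch: each antenna transmits a cyclically shifted copy of the same block, which the receiver sees as a single equivalent channel with extra resolvable paths; equivalently, each shift is a per-subcarrier phase ramp in the frequency domain. A NumPy illustration (block length and delays are arbitrary):

```python
import numpy as np

def cdtd_transmit(time_block, delays):
    """Cyclic delay transmit diversity: every antenna sends a cyclically
    shifted copy of the same time-domain block (shift in samples)."""
    return [np.roll(time_block, d) for d in delays]

rng = np.random.default_rng(0)
block = rng.standard_normal(8)
tx = cdtd_transmit(block, delays=[0, 2])

# A cyclic shift by d is a linear phase ramp exp(-2j*pi*k*d/N) in the
# frequency domain, so the shifts fold into one frequency-selective channel.
N, d = 8, 2
ramp = np.exp(-2j * np.pi * np.arange(N) * d / N)
assert np.allclose(np.fft.fft(tx[1]), np.fft.fft(block) * ramp)
print("cyclic shift == phase ramp in the frequency domain")
```

Because the diversity appears as added frequency selectivity, CDTD needs no space-time decoding at the receiver, which is what keeps its rate overhead lower than STTD's.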
High Count-Rate Study of Two TES X-Ray Microcalorimeters With Different Transition Temperatures
NASA Technical Reports Server (NTRS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.;
2017-01-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures T(sub c). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate, where the thermal conductance of the microcalorimeter is dependent upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different T(sub c) had very different thermal decay times, approximately one order of magnitude apart. In our earlier report, we achieved energy resolutions of 1.6 and 2.3 eV at 6 keV from the lower and higher T(sub c) devices, respectively, using a standard analysis method based on optimal filtering in the low flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the so-called event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96% throughput for 6 keV x-rays at 1025 Hz per pixel with the higher T(sub c) (faster) device, and 5.8 eV FWHM with 97% throughput with the lower T(sub c) (slower) device at 722 Hz.
Wang, Xixian; Ren, Lihui; Su, Yetian; Ji, Yuetong; Liu, Yaoping; Li, Chunyu; Li, Xunrong; Zhang, Yi; Wang, Wei; Hu, Qiang; Han, Danxiang; Xu, Jian; Ma, Bo
2017-11-21
Raman-activated cell sorting (RACS) has attracted increasing interest, yet throughput remains one major factor limiting its broader application. Here we present an integrated Raman-activated droplet sorting (RADS) microfluidic system for functional screening of live cells in a label-free and high-throughput manner, employing the AXT-synthesizing industrial microalga Haematococcus pluvialis (H. pluvialis) as a model. Raman microspectroscopy analysis of individual cells is carried out prior to their microdroplet encapsulation, which is then directly coupled to DEP-based droplet sorting. To validate the system, H. pluvialis cells containing different levels of AXT were mixed and underwent RADS. The AXT-hyperproducing cells were sorted with an accuracy of 98.3%, an eightfold enrichment ratio, and a throughput of ∼260 cells/min. Of the RADS-sorted cells, 92.7% remained alive and able to proliferate, equivalent to the unsorted cells. Thus, RADS achieves a much higher throughput than existing RACS systems, preserves the vitality of cells, and facilitates seamless coupling with downstream manipulations such as single-cell sequencing and cultivation.
High throughput imaging cytometer with acoustic focussing.
Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter
2015-10-31
We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint.
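The motion-blur budget that the galvo tracking addresses can be estimated with a one-line model: residual blur is the uncompensated velocity times the exposure, divided by the pixel pitch. All numbers below are invented for illustration and are not the paper's operating parameters:

```python
def motion_blur_px(cell_speed_um_s, exposure_s, tracking_speed_um_s, um_per_px):
    """Residual motion blur in pixels when a galvo mirror tracks the flow:
    only the velocity mismatch smears the image during the exposure."""
    residual = abs(cell_speed_um_s - tracking_speed_um_s)
    return residual * exposure_s / um_per_px

# illustrative: 50 mm/s flow, 10 ms exposure, 0.5 um per pixel at the sample
print(motion_blur_px(50_000, 0.010, 0, 0.5))       # → 1000.0  (untracked)
print(motion_blur_px(50_000, 0.010, 49_900, 0.5))  # → 2.0     (tracked)
```

This shows the design choice: rather than shortening the exposure (which costs signal), matching the mirror velocity to the flow cancels almost all of the blur.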
High-throughput countercurrent microextraction in passive mode.
Xie, Tingliang; Xu, Cong
2018-05-15
Although microextraction is much more efficient than conventional macroextraction, its practical application has been limited by low throughputs and difficulties in constructing robust countercurrent microextraction (CCME) systems. In this work, a robust CCME process was established based on a novel passive microextractor with four units without any moving parts. The passive microextractor has internal recirculation and can efficiently mix two immiscible liquids. The hydraulic characteristics as well as the extraction and back-extraction performance of the passive CCME were investigated experimentally. The recovery efficiencies of the passive CCME were 1.43-1.68 times larger than the best values achieved using cocurrent extraction. Furthermore, the total throughput of the passive CCME developed in this work was about one to three orders of magnitude higher than that of other passive CCME systems reported in the literature. Therefore, a robust CCME process with high throughputs has been successfully constructed, which may promote the application of passive CCME in a wide variety of fields.
The application of the high throughput sequencing technology in the transposable elements.
Liu, Zhen; Xu, Jian-hong
2015-09-01
High throughput sequencing technology has dramatically improved the efficiency of DNA sequencing, and decreased the costs to a great extent. Meanwhile, this technology usually has advantages of better specificity, higher sensitivity and accuracy. Therefore, it has been applied to the research on genetic variations, transcriptomics and epigenomics. Recently, this technology has been widely employed in the studies of transposable elements and has achieved fruitful results. In this review, we summarize the application of high throughput sequencing technology in the fields of transposable elements, including the estimation of transposon content, preference of target sites and distribution, insertion polymorphism and population frequency, identification of rare copies, transposon horizontal transfers as well as transposon tagging. We also briefly introduce the major common sequencing strategies and algorithms, their advantages and disadvantages, and the corresponding solutions. Finally, we envision the developing trends of high throughput sequencing technology, especially the third generation sequencing technology, and its application in transposon studies in the future, hopefully providing a comprehensive understanding and reference for related scientific researchers.
NASA Technical Reports Server (NTRS)
Jandebeur, T. S.
1980-01-01
The effect of sample concentration on throughput and resolution in a modified continuous particle electrophoresis (CPE) system with flow in an upward direction is investigated. Maximum resolution is achieved at concentrations ranging from 2 × 10^8 cells/ml to 8 × 10^8 cells/ml. The widest peak separation occurs at 2 × 10^8 cells/ml; however, the sharpest peaks and least overlap between cell populations occur at 8 × 10^8 cells/ml. Apparently as a result of improved electrophoresis cell performance due to coating the chamber with bovine serum albumin, changing the electrode membranes and rinse, and lowering buffer temperatures, the sedimentation effects that accompany higher concentrations are diminished. Throughput, as measured by recovery of fixed cells, is diminished at the concentrations judged most likely to yield satisfactory resolution. The tradeoff appears to be improved recovery/throughput at the expense of resolution.
Spectrum Access In Cognitive Radio Using a Two-Stage Reinforcement Learning Approach
NASA Astrophysics Data System (ADS)
Raj, Vishnu; Dias, Irene; Tholeti, Thulasi; Kalyani, Sheetal
2018-02-01
With the advent of the 5th generation of wireless standards and an increasing demand for higher throughput, methods to improve the spectral efficiency of wireless systems have become very important. In the context of cognitive radio, a substantial increase in throughput is possible if the secondary user can make smart decisions regarding which channel to sense and when or how often to sense. Here, we propose an algorithm to not only select a channel for data transmission but also to predict how long the channel will remain unoccupied so that the time spent on channel sensing can be minimized. Our algorithm learns in two stages: a reinforcement learning approach for channel selection and a Bayesian approach to determine the optimal duration for which sensing can be skipped. Comparisons with other learning methods are provided through extensive simulations. We show that the number of sensing operations is minimized with a negligible increase in primary interference; this implies that less energy is spent by the secondary user on sensing, and higher throughput is achieved by saving on sensing.
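The two-stage idea above can be illustrated with a compact sketch: a bandit learner picks the channel, and a Beta posterior over each channel's per-slot busy probability suggests how many sensing slots can safely be skipped. This is an editorial illustration, not the paper's algorithm; the class, its parameters, and the epsilon-greedy/Beta choices are all assumptions.

```python
import math
import random

class TwoStageAccess:
    """Sketch of two-stage spectrum access (illustrative, not the paper's
    method): epsilon-greedy channel selection plus a Beta posterior used
    to decide how many sensing slots to skip."""

    def __init__(self, n_channels, eps=0.1):
        self.eps = eps
        self.counts = [0] * n_channels
        self.values = [0.0] * n_channels               # mean observed idle reward
        self.busy_idle = [[1, 1] for _ in range(n_channels)]  # Beta counts (busy, idle)

    def select_channel(self):
        if random.random() < self.eps:                 # explore occasionally
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda c: self.values[c])

    def update(self, ch, idle):
        """Record one sensing outcome: idle=True means the channel was free."""
        self.counts[ch] += 1
        self.values[ch] += (idle - self.values[ch]) / self.counts[ch]
        self.busy_idle[ch][1 if idle else 0] += 1

    def slots_to_skip(self, ch, risk=0.1):
        """Largest k with P(channel stays idle for k more slots) >= 1 - risk,
        using the posterior-mean busy probability."""
        busy, idle = self.busy_idle[ch]
        p_busy = busy / (busy + idle)
        if p_busy <= 0:
            return 10**6
        return max(0, int(math.log(1 - risk) / math.log(1 - p_busy)))
```

A mostly idle channel accumulates a low posterior busy probability, so the agent both prefers it and skips more sensing slots on it, which is the throughput/interference tradeoff the abstract describes.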
Single-cell genomics for the masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tringe, Susannah G.
2017-07-12
In this issue of Nature Biotechnology, Lan et al. describe a new tool in the toolkit for studying uncultivated microbial communities, enabling orders of magnitude higher single-cell genome throughput than previous methods. This is achieved by a complex droplet microfluidics workflow encompassing steps from physical cell isolation through genome sequencing, producing tens of thousands of low-coverage genomes from individual cells.
Process Control in Production-Worthy Plasma Doping Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winder, Edmund J.; Fang Ziwei; Arevalo, Edwin
2006-11-13
As the semiconductor industry continues to scale devices to smaller dimensions and improved performance, many ion implantation processes require lower energy and higher doses. Achieving these high doses (in some cases ~1 × 10^16 ions/cm^2) at low energies (<3 keV) while maintaining throughput is increasingly challenging for traditional beamline implant tools because of space-charge effects that limit achievable beam density at low energies. Plasma doping is recognized as a technology which can overcome this problem. In this paper, we highlight the technology available to achieve process control for all implant parameters associated with modern semiconductor manufacturing.
Jacobs, K R; Guillemin, G J; Lovejoy, D B
2018-02-01
Kynurenine 3-monooxygenase (KMO) is a well-validated therapeutic target for the treatment of neurodegenerative diseases, including Alzheimer's disease (AD) and Huntington's disease (HD). This work reports a facile fluorescence-based KMO assay optimized for high-throughput screening (HTS) that achieves a throughput approximately 20-fold higher than the fastest KMO assay currently reported. The screen was run with excellent performance (average Z' value of 0.80) over 110,000 compounds across 341 plates and exceeded all statistical parameters used to describe a robust HTS assay. A subset of molecules was selected for validation by ultra-high-performance liquid chromatography, resulting in the confirmation of a novel hit with an IC50 comparable to that of the well-described KMO inhibitor Ro-61-8048. A medicinal chemistry program is currently underway to further develop our novel KMO inhibitor scaffolds.
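The Z' value quoted above is the standard HTS assay-quality statistic (commonly attributed to Zhang et al., 1999): Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|, with Z' >= 0.5 conventionally indicating an excellent assay. A minimal implementation for reference:

```python
import statistics

def z_prime(pos, neg):
    """Z'-factor from positive- and negative-control well readings.
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    sd_pos, sd_neg = statistics.stdev(pos), statistics.stdev(neg)
    separation = abs(statistics.mean(pos) - statistics.mean(neg))
    return 1 - 3 * (sd_pos + sd_neg) / separation
```

Well-separated, tight control distributions push Z' toward 1; the 0.80 average reported above sits comfortably in the "excellent" range.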
muBLASTP: database-indexed protein sequence search on multicore CPUs.
Zhang, Jing; Misra, Sanchit; Wang, Hao; Feng, Wu-Chun
2016-11-04
The Basic Local Alignment Search Tool (BLAST) is a fundamental program in the life sciences that searches databases for sequences most similar to a query sequence. Currently, the BLAST algorithm uses a query-indexed approach. Although many tools show that sequence search with a database index can achieve much higher throughput (e.g., BLAT, SSAHA, and CAFE), they cannot deliver the same level of sensitivity as query-indexed BLAST (i.e., NCBI BLAST), or they support only nucleotide sequence search (e.g., MegaBLAST). Because of the different challenges and characteristics of query indexing and database indexing, existing techniques for query-indexed search cannot be applied directly to database-indexed search. muBLASTP, a novel database-indexed BLAST for protein sequence search, returns hits identical to those of NCBI BLAST. On Intel Haswell multicore CPUs, for a single query, single-threaded muBLASTP achieves up to a 4.41-fold speedup in the alignment stages, and up to a 1.75-fold end-to-end speedup, over single-threaded NCBI BLAST. For a batch of queries, multithreaded muBLASTP achieves up to a 5.7-fold speedup in the alignment stages, and up to a 4.56-fold end-to-end speedup, over multithreaded NCBI BLAST. With a newly designed index structure for protein databases and associated optimizations to the BLASTP algorithm, we re-factored BLASTP for modern multicore processors to achieve much higher throughput with an acceptable memory footprint for the database index.
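The database-indexing idea behind muBLASTP can be sketched as a toy k-mer word index: index every word of the database once, then look query words up to produce seed hits for later extension. This is a simplified illustration, not muBLASTP's index structure; real BLASTP also seeds on neighboring words scored by a substitution matrix, which this sketch omits.

```python
from collections import defaultdict

def build_index(db_seqs, k=3):
    """Map every length-k word in the database to its (seq_id, offset) list.
    Built once per database, amortized across all queries."""
    index = defaultdict(list)
    for sid, seq in enumerate(db_seqs):
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].append((sid, i))
    return index

def find_seeds(index, query, k=3):
    """Return exact-word seed hits (query_offset, seq_id, db_offset);
    a full search would then extend these into gapped alignments."""
    hits = []
    for i in range(len(query) - k + 1):
        for sid, j in index.get(query[i:i + k], []):
            hits.append((i, sid, j))
    return hits
```

Because the index is keyed on database words rather than query words, the per-query cost drops to hash lookups, which is the source of the throughput gain the abstract describes.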
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
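The goal MICA pursues — giving each multicast tree node the channel that conflicts least with its interfering neighbors — can be sketched with a greedy pass over the tree nodes. The function below is an illustrative simplification, not the MICA algorithm: the interference relation is taken as a given input, whereas the paper derives it from a pairwise interference factor.

```python
def assign_channels(nodes, interferes, channels):
    """Greedy minimum-interference sketch: visit nodes in order and give
    each the channel currently used by the fewest of its already-assigned
    interfering neighbors. 'interferes' maps a node to the set of nodes
    within its interference range (assumed input)."""
    assignment = {}
    for n in nodes:
        def conflict_count(ch):
            return sum(1 for m in interferes.get(n, ())
                       if assignment.get(m) == ch)
        assignment[n] = min(channels, key=conflict_count)
    return assignment
```

On a small chain where adjacent nodes interfere, the greedy pass alternates channels, eliminating same-channel conflicts entirely.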
Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2018-02-01
This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 μA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
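The lossless/near-lossless tradeoff described above can be sketched with a simple predictive coder: predict each sample as the previous reconstructed sample and quantize the residual with step 2·max_err + 1, so max_err = 0 is exactly lossless. This is an editorial sketch under that assumption, not the paper's (more sophisticated, entropy-coded) algorithms.

```python
def encode(samples, max_err=0):
    """Predictive (near-)lossless sketch: emit quantized prediction
    residuals; the per-sample error is guaranteed <= max_err."""
    step = 2 * max_err + 1
    prev, out = 0, []
    for s in samples:
        q = round((s - prev) / step)   # quantized residual
        out.append(q)
        prev += q * step               # track the decoder's reconstruction
    return out

def decode(residuals, max_err=0):
    """Invert encode(): accumulate de-quantized residuals."""
    step = 2 * max_err + 1
    prev, rec = 0, []
    for q in residuals:
        prev += q * step
        rec.append(prev)
    return rec
```

With an odd quantizer step the rounding error is at most step/2, so the integer reconstruction error never exceeds max_err; a real system would then entropy-code the small residuals to realize the energy savings.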
NASA Astrophysics Data System (ADS)
Yu, Hao Yun; Liu, Chun-Hung; Shen, Yu Tian; Lee, Hsuan-Ping; Tsai, Kuen Yu
2014-03-01
Line edge roughness (LER), which influences the electrical performance of circuit components, is a key challenge for electron-beam lithography (EBL) due to the continuous scaling of technology feature sizes. Controlling LER within a tolerance that satisfies International Technology Roadmap for Semiconductors requirements while achieving high throughput has become a challenging issue. Although lower dosage and more-sensitive resist can be used to improve throughput, they result in serious LER-related problems because of increasing relative fluctuation in the incident positions of electrons. Directed self-assembly (DSA) is a promising technique to relax LER-related pattern fidelity (PF) requirements because of its self-healing ability, which may benefit throughput. To quantify the potential throughput improvement in EBL from introducing DSA for post-healing, rigorous numerical methods are proposed to maximize throughput by adjusting the writing parameters of EBL systems subject to relaxed LER-related PF requirements. A fast continuous model for parameter sweeping and a hybrid model for more accurate patterning prediction are employed for the patterning simulation. The tradeoff between throughput and DSA self-healing ability is investigated. Preliminary results indicate that significant throughput improvements are achievable under certain process conditions.
Zmijan, Robert; Jonnalagadda, Umesh S.; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn
2015-01-01
We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint. PMID:29456838
NASA Astrophysics Data System (ADS)
Wang, Liping; Ji, Yusheng; Liu, Fuqiang
The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in an OFDMA two-hop relay system is more complex than in a conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to varying traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms to multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
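The end-to-end extension of PF scheduling mentioned above can be sketched as follows: since a two-hop flow's data must traverse both the BS->RS hop (rate r1) and the RS->user hop (rate r2) under time division, its effective end-to-end rate is r1·r2/(r1 + r2), and PF picks the flow maximizing that rate divided by its average throughput. The flow representation and the rate combination are illustrative assumptions, not the paper's exact formulation.

```python
def pf_pick(flows, avg_rates):
    """End-to-end proportional-fair sketch for two-hop relay flows.
    flows: list of (flow_id, r1, r2) instantaneous hop rates;
    avg_rates: flow_id -> long-term average throughput."""
    def metric(flow):
        fid, r1, r2 = flow
        # Effective end-to-end rate when both hops share the frame time.
        rate = r1 * r2 / (r1 + r2) if r1 + r2 > 0 else 0.0
        return rate / avg_rates[fid]
    return max(flows, key=metric)[0]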
Zhang, Xin; Zhang, Xiaomei; Xu, Guoqiang; Zhang, Xiaojuan; Shi, Jinsong; Xu, Zhenghong
2018-05-03
L-Serine is widely used in the pharmaceutical, food, and cosmetics industries. Although direct fermentative production of L-serine from sugar in Corynebacterium glutamicum has been achieved, the L-serine yield remains relatively low. In this study, atmospheric and room temperature plasma (ARTP) mutagenesis was used to improve the L-serine yield based on the engineered C. glutamicum ΔSSAAI strain. Subsequently, we developed a novel high-throughput screening method using a biosensor constructed from NCgl0581, a transcriptional factor specifically responsive to L-serine, so that the L-serine concentration within a single C. glutamicum cell can be monitored via fluorescence-activated cell sorting (FACS). Novel L-serine-producing mutants were isolated from a large library of mutagenized cells. The mutant strain A36-pDser was screened from 1.2 × 10^5 cells, and the magnesium ion concentration in the medium was optimized specifically for this mutant. C. glutamicum A36-pDser accumulated 34.78 g/L L-serine with a yield of 0.35 g/g sucrose, which were 35.9 and 66.7% higher than those of the parent C. glutamicum ΔSSAAI-pDser strain, respectively. The L-serine yield achieved in this mutant was the highest of all reported L-serine-producing strains of C. glutamicum. Moreover, whole-genome sequencing identified 11 non-synonymous mutations in genes associated with metabolic and transport pathways, which might be responsible for the higher L-serine production and better cell growth of C. glutamicum A36-pDser. This study explored an effective mutagenesis strategy and reported a novel high-throughput screening method for the development of L-serine-producing strains.
Enhanced electrochemical nanoring electrode for analysis of cytosol in single cells.
Zhuang, Lihong; Zuo, Huanzhen; Wu, Zengqiang; Wang, Yu; Fang, Danjun; Jiang, Dechen
2014-12-02
A microelectrode array can be applied to single-cell analysis with relatively high throughput; however, the cells are typically cultured on microelectrodes beneath cell-sized microwell traps, which makes it difficult to functionalize the electrode surface for higher detection sensitivity. Here, nanoring electrodes embedded under the microwell traps were fabricated to isolate the electrode surface from the cell support, so that the electrode surface can be modified for enhanced electrochemical sensitivity in single-cell analysis. Moreover, the nanometer-sized electrode permitted faster diffusion of analyte to the surface for an additional improvement in sensitivity, which was evidenced by electrochemical characterization and simulation. To demonstrate the concept of the functionalized nanoring electrode for single-cell analysis, the electrode surface was deposited with Prussian blue to detect intracellular hydrogen peroxide at a single cell. Currents of hundreds of picoamperes were observed at our functionalized nanoring electrode, exhibiting the enhanced electrochemical sensitivity. The achievement of a functionalized nanoring electrode will benefit the development of high-throughput single-cell electrochemical analysis.
Optimisation of wavelength modulated Raman spectroscopy: towards high throughput cell screening.
Praveen, Bavishna B; Mazilu, Michael; Marchington, Robert F; Herrington, C Simon; Riches, Andrew; Dholakia, Kishan
2013-01-01
In the field of biomedicine, Raman spectroscopy is a powerful technique to discriminate between normal and cancerous cells. However, the strong background signal from the sample and the instrumentation limits the efficiency of this discrimination. Wavelength modulated Raman spectroscopy (WMRS) can suppress the background in the Raman spectra. In this study we demonstrate a systematic approach for optimizing the various parameters of WMRS to reduce the acquisition time for potential applications such as higher throughput cell screening. The signal-to-noise ratio (SNR) of the Raman bands depends on the modulation amplitude, time constant, and total acquisition time. We observed that the sampling rate does not influence the SNR of the Raman bands if three or more wavelengths are sampled. With these optimized WMRS parameters, we increased the throughput of the binary classification of normal human urothelial cells and bladder cancer cells by reducing the total acquisition time to 6 s, significantly lower than the acquisition times previously required for discrimination between similar cell types.
Zhu, Xudong; Arman, Bessembayev; Chu, Ju; Wang, Yonghong; Zhuang, Yingping
2017-05-01
To develop an efficient, cost-effective screening process to improve production of glucoamylase in Aspergillus niger, the cultivation of A. niger was achieved with well-dispersed morphology in 48-deep-well microtiter plates, which increased sample throughput compared to traditional flask cultivation. There was a close negative correlation between glucoamylase activity and the pH of the fermentation broth. A novel high-throughput analysis method using Methyl Orange was developed. When compared to the conventional analysis method using 4-nitrophenyl α-D-glucopyranoside as substrate, a correlation coefficient of 0.96 was obtained by statistical analysis. Using this novel screening method, we acquired a strain with an activity of 2.2 × 10^3 U ml^-1, a 70% higher yield of glucoamylase than its parent strain.
Higher Throughput Calorimetry: Opportunities, Approaches and Challenges
Recht, Michael I.; Coyle, Joseph E.; Bruce, Richard H.
2010-01-01
Higher throughput thermodynamic measurements can provide value in structure-based drug discovery during fragment screening, hit validation, and lead optimization. Enthalpy can be used to detect and characterize ligand binding, and changes that affect the interaction of protein and ligand can sometimes be detected more readily from changes in the enthalpy of binding than from the corresponding free-energy changes or from protein-ligand structures. Newer, higher throughput calorimeters are being incorporated into the drug discovery process. Improvements in titration calorimeters come from extensions of a mature technology and face limitations in scaling. Conversely, array calorimetry, an emerging technology, shows promise for substantial improvements in throughput and material utilization, but improved sensitivity is needed. PMID:20888754
2017-12-11
provides ultra-low energy search operations. To improve throughput, the in-array pipeline scheme has been developed, allowing the MeTCAM to operate at a ... controlled magnetic tunnel junction (VC-MTJ), which not only reduces cell area (thus achieving higher density) but also eliminates standby energy. This ... Variations of the cell design are presented and evaluated. The results indicated a potential 90x improvement in energy efficiency and a 50x
Preliminary Assessment of Microwave Readout Multiplexing Factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croce, Mark Philip; Koehler, Katrina Elizabeth; Rabin, Michael W.
2017-01-23
Ultra-high resolution microcalorimeter gamma spectroscopy is a new non-destructive assay technology for measurement of plutonium isotopic composition, with the potential to reduce total measurement uncertainty to a level competitive with destructive analysis methods [1-4]. Achieving this level of performance in practical applications requires not only the energy resolution now routinely achieved with transition-edge sensor microcalorimeter arrays (an order of magnitude better than for germanium detectors) but also high throughput. Microcalorimeter gamma spectrometers have not yet achieved detection efficiency and count rate capability comparable to germanium detectors, largely because of limits from existing readout technology. Microcalorimeter detectors must be operated at low temperature to achieve their exceptional energy resolution. Although the typical 100 mK operating temperatures can be achieved with reliable, cryogen-free systems, the cryogenic complexity and heat load from individual readout channels for large sensor arrays is prohibitive. Multiplexing is required for practical systems. The most mature multiplexing technology at present is time-division multiplexing (TDM) [3, 5-6]. In TDM, the sensor outputs are switched by applying bias current to one SQUID amplifier at a time. Transition-edge sensor (TES) microcalorimeter arrays as large as 256 pixels have been developed for X-ray and gamma-ray spectroscopy using TDM technology. Due to bandwidth limits and noise scaling, TDM is limited to a maximum multiplexing factor of approximately 32-40 sensors on one readout line [8]. Increasing the size of microcalorimeter arrays above the kilopixel scale, required to match the throughput of germanium detectors, requires the development of a new readout technology with a much higher multiplexing factor.
Murlidhar, Vasudha; Zeinali, Mina; Grabauskiene, Svetlana; Ghannad-Rezaie, Mostafa; Wicha, Max S; Simeone, Diane M; Ramnath, Nithya; Reddy, Rishindra M; Nagrath, Sunitha
2014-12-10
Circulating tumor cells (CTCs) are believed to play an important role in metastasis, a process responsible for the majority of cancer-related deaths. But their rarity in the bloodstream makes microfluidic isolation complex and time-consuming. Additionally, low processing speeds can be a hindrance to obtaining higher yields of CTCs, limiting their potential use as biomarkers for early diagnosis. Here, a high throughput microfluidic technology, the OncoBean Chip, is reported. It employs radial flow that introduces a varying shear profile across the device, enabling efficient cell capture by affinity at high flow rates. Recovery from whole blood is validated with the cancer cell lines H1650 and MCF7, achieving a mean efficiency >80% at a throughput of 10 mL h^-1, in contrast to the flow rate of 1 mL h^-1 standardly reported with other microfluidic devices. Cells are recovered with a viability rate of 93% at these high speeds, increasing the ability to use captured CTCs for downstream analysis. Broad clinical application is demonstrated using comparable flow rates on blood specimens obtained from breast, pancreatic, and lung cancer patients. Comparable CTC numbers are recovered in all the samples at the two flow rates, demonstrating the ability of the technology to perform at high throughputs.
The MaNGA integral field unit fiber feed system for the Sloan 2.5 m telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drory, N.; MacDonald, N.; Byler, N.
2015-02-01
We describe the design, manufacture, and performance of bare-fiber integral field units (IFUs) for the SDSS-IV survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) on the Sloan 2.5 m telescope at Apache Point Observatory. MaNGA is a luminosity-selected integral-field spectroscopic survey of 10^4 local galaxies covering 360-1030 nm at R ~ 2200. The IFUs have hexagonal dense packing of fibers with packing regularity of 3 μm (rms), and throughput of 96 ± 0.5% from 350 nm to 1 μm in the lab. Their sizes range from 19 to 127 fibers (3-7 hexagonal layers) using Polymicro FBP 120:132:150 μm core:clad:buffer fibers to reach a fill fraction of 56%. High throughput (and low focal-ratio degradation (FRD)) is achieved by maintaining the fiber cladding and buffer intact, ensuring excellent surface polish, and applying a multi-layer anti-reflection (AR) coating to the input and output surfaces. In operations on-sky, the IFUs show only an additional 2.3% FRD-related variability in throughput despite repeated mechanical stressing during plate plugging (however other losses are present). The IFUs achieve on-sky throughput 5% above the single-fiber feeds used in SDSS-III/BOSS, attributable to equivalent performance compared to single fibers and additional gains from the AR coating. The manufacturing process is geared toward mass-production of high-multiplex systems. The low-stress process involves a precision ferrule with a hexagonal inner shape designed to lead inserted fibers to settle in a dense hexagonal pattern. The ferrule ID is tapered at progressively shallower angles toward its tip and the final 2 mm are straight and only a few microns larger than necessary to hold the desired number of fibers. Our IFU manufacturing process scales easily to accommodate other fiber sizes and can produce IFUs with substantially larger fiber counts.
To assure quality, automated testing in a simple and inexpensive system enables complete characterization of throughput and fiber metrology. Future applications include larger IFUs, higher fill factors with stripped buffer, de-cladding, and lenslet coupling.
The MaNGA Integral Field Unit Fiber Feed System for the Sloan 2.5 m Telescope
NASA Astrophysics Data System (ADS)
Drory, N.; MacDonald, N.; Bershady, M. A.; Bundy, K.; Gunn, J.; Law, D. R.; Smith, M.; Stoll, R.; Tremonti, C. A.; Wake, D. A.; Yan, R.; Weijmans, A. M.; Byler, N.; Cherinka, B.; Cope, F.; Eigenbrot, A.; Harding, P.; Holder, D.; Huehnerhoff, J.; Jaehnig, K.; Jansen, T. C.; Klaene, M.; Paat, A. M.; Percival, J.; Sayres, C.
2015-02-01
We describe the design, manufacture, and performance of bare-fiber integral field units (IFUs) for the SDSS-IV survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) on the Sloan 2.5 m telescope at Apache Point Observatory. MaNGA is a luminosity-selected integral-field spectroscopic survey of 10^4 local galaxies covering 360-1030 nm at R ~ 2200. The IFUs have hexagonal dense packing of fibers with packing regularity of 3 μm (rms), and throughput of 96 ± 0.5% from 350 nm to 1 μm in the lab. Their sizes range from 19 to 127 fibers (3-7 hexagonal layers) using Polymicro FBP 120:132:150 μm core:clad:buffer fibers to reach a fill fraction of 56%. High throughput (and low focal-ratio degradation (FRD)) is achieved by maintaining the fiber cladding and buffer intact, ensuring excellent surface polish, and applying a multi-layer anti-reflection (AR) coating to the input and output surfaces. In operations on-sky, the IFUs show only an additional 2.3% FRD-related variability in throughput despite repeated mechanical stressing during plate plugging (however other losses are present). The IFUs achieve on-sky throughput 5% above the single-fiber feeds used in SDSS-III/BOSS, attributable to equivalent performance compared to single fibers and additional gains from the AR coating. The manufacturing process is geared toward mass-production of high-multiplex systems. The low-stress process involves a precision ferrule with a hexagonal inner shape designed to lead inserted fibers to settle in a dense hexagonal pattern. The ferrule ID is tapered at progressively shallower angles toward its tip and the final 2 mm are straight and only a few microns larger than necessary to hold the desired number of fibers. Our IFU manufacturing process scales easily to accommodate other fiber sizes and can produce IFUs with substantially larger fiber counts.
To assure quality, automated testing in a simple and inexpensive system enables complete characterization of throughput and fiber metrology. Future applications include larger IFUs, higher fill factors with stripped buffer, de-cladding, and lenslet coupling.
Method and apparatus for maximizing throughput of indirectly heated rotary kilns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coates, Ralph L.; Smoot, L. Douglas; Hatfield, Kent E.
2012-10-30
An apparatus and method for achieving improved throughput capacity of indirectly heated rotary kilns used to produce pyrolysis products, such as shale oils or coal oils, that are susceptible to decomposition by high kiln wall temperatures is disclosed. High throughput is achieved by firing the kiln such that optimum wall temperatures are maintained beginning at the point where the materials enter the heating section of the kiln and extending to the point where the materials leave the heated section. Multiple high-velocity burners are arranged such that combustion products directly impact the area of the kiln wall covered internally by the solid material being heated. Firing rates for the burners are controlled to maintain optimum wall temperatures.
Development and Performance of the ACTS High Speed VSAT
NASA Technical Reports Server (NTRS)
Quintana, J.; Tran, Q.; Dendy, R.
1999-01-01
The Advanced Communications Technology Satellite (ACTS), developed by the U.S. National Aeronautics and Space Administration (NASA), has demonstrated the breakthrough technologies of Ka-band, spot beam antennas, and on-board processing. These technologies have enabled the development of very small aperture terminals (VSATs) and ultra-small aperture terminals (USATs) with capabilities greater than were previously possible with conventional satellite technologies. However, the ACTS baseband processor (BBP) is designed around a time division multiple access (TDMA) scheme, which requires each earth station using the BBP to transmit data at a burst rate much higher than the user throughput data rate. This tends to offset the advantage of the new technologies by requiring a larger earth station antenna and/or a higher-powered uplink amplifier than would be necessary for continuous transmission at the user data rate. Conversely, the user data rate is much less than the rate that the antenna size and amplifier can support. For example, the ACTS T1 VSAT operates at a burst rate of 27.5 Mbps, but the maximum user data rate is 1.792 Mbps, a throughput efficiency of slightly more than 6.5%. For an operational network, this level of overhead will greatly increase the cost of the user earth stations, and that increased cost is repeated thousands of times, which may ultimately reduce the market for such a system. The ACTS High Speed VSAT (HS VSAT) is an effort to experimentally demonstrate the maximum user throughput data rate achievable using the technologies developed and implemented on ACTS. Specifically, this was done by operating the system uplinks as frequency division multiple access (FDMA), essentially assigning all available TDMA time slots to a single user on each of two uplink frequencies.
Preliminary results show that, using a 1.2-m antenna in this mode, the HS VSAT can achieve between 22 and 24 Mbps out of the 27.5 Mbps burst rate, for a throughput efficiency of 80-88%. This paper describes the modifications made to the T1 VSAT to enable it to operate at high speed, including hardware considerations, interface modifications, and software modifications. In addition, it describes the results of NASA HS VSAT experiments, continuing work on an improved user interface, and plans for future experiments.
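As a quick consistency check on the rates quoted above (a sketch using only the numbers given in the abstract):

```python
burst_rate = 27.5              # Mbps, T1 VSAT burst rate
user_rate = 1.792              # Mbps, maximum user data rate in TDMA mode
hs_low, hs_high = 22.0, 24.0   # Mbps, HS VSAT preliminary results

# TDMA overhead: each station bursts at 27.5 Mbps to deliver 1.792 Mbps.
tdma_eff = user_rate / burst_rate
print(f"TDMA throughput efficiency: {tdma_eff:.1%}")   # about 6.5%

# FDMA mode: nearly the whole burst rate becomes user throughput.
print(f"HS VSAT efficiency: {hs_low / burst_rate:.0%}-{hs_high / burst_rate:.0%}")
```

The 24 Mbps figure works out to roughly 87%, consistent with the 80-88% range quoted in the abstract.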
Electro-optic resonant phase modulator
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung (Inventor); Hemmati, Hamid (Inventor); Robinson, Deborah L. (Inventor)
1992-01-01
An electro-optic resonant cavity is used to achieve phase modulation with lower driving voltages. Laser damage thresholds are inherently higher than with previously used integrated optics because bulk optics are employed. Phase modulation is achieved at higher speeds with lower driving voltages than previously obtained with non-resonant electro-optic phase modulators. The present scheme uses a data-locking dither approach as opposed to conventional sinusoidal locking schemes. In accordance with a disclosed embodiment, a resonant cavity modulator has been designed to operate at a data rate in excess of 100 Mbit/s. By carefully choosing the cavity finesse and its dimensions, it is possible to control the pulse switching time to within 4 ns and to limit the required switching voltage to within 10 V. This cavity locking scheme can be applied using only the random data sequence, without the need for dithering of the cavity. Compared to waveguide modulators, the resonant cavity has a comparable modulating voltage requirement. Because of its bulk geometry, the resonant cavity modulator has the potential to accommodate higher throughput power. Mode matching into the bulk device is easier and typically can be achieved with higher efficiency. An additional control loop is incorporated into the modulator to maintain the cavity on resonance.
Bayat, Pouriya; Rezai, Pouya
2018-05-21
One of the common operations in sample preparation is to separate specific particles (e.g. target cells, embryos or microparticles) from non-target substances (e.g. bacteria) in a fluid and to wash them into clean buffers for further processing like detection (called solution exchange in this paper). For instance, solution exchange is widely needed in preparing fluidic samples for biosensing at the point-of-care and point-of-use, but is still conducted via cumbersome and time-consuming off-chip analyte washing and purification techniques. Existing small-scale and handheld active and passive devices for washing particles are often limited to very low throughputs or require external sources of energy. Here, we integrated Dean flow recirculation of two fluids in curved microchannels with selective inertial focusing of target particles to develop a microfluidic centrifuge device that can isolate specific particles (as surrogates for target analytes) from bacteria and wash them into a clean buffer at high throughput and efficiency. We could process micron-size particles at a flow rate of 1 mL min⁻¹ and achieve throughputs higher than 10⁴ particles per second. Our results reveal that the device is capable of singleplex solution exchange of 11 μm and 19 μm particles with efficiencies of 86 ± 2% and 93 ± 0.7%, respectively. A purity of 96 ± 2% was achieved in the duplex experiments where 11 μm particles were isolated from 4 μm particles. Application of our device in biological assays was shown by performing duplex experiments where 11 μm or 19 μm particles were isolated from an Escherichia coli bacterial suspension with purities of 91-98%. We envision that our technique will have applications in point-of-care devices for simultaneous purification and solution exchange of cells and embryos from smaller substances in high-volume suspensions at high throughput and efficiency.
NASA Astrophysics Data System (ADS)
Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi
Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple-input multiple-output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum-rate capacity as DPC with an exhaustive search over the entire user set. Some suboptimal user-group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among the users, respectively. However, they are not throughput optimal: fairness and throughput decrease when user queue lengths differ because of differing channel quality. Therefore, we propose two scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy; at every time slot, the scheduling algorithms select users based on channel quality, queue length, and orthogonality among users. The proposed algorithms then produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
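The abstract does not specify its modification of water-filling, so the sketch below shows only the standard water-filling baseline it builds on, with hypothetical per-user channel gains: the water level μ is chosen by bisection so that the allocated powers max(0, μ − 1/gᵢ) sum to the power budget.

```python
def water_filling(gains, total_power, iters=200):
    # Standard water-filling: allocate p_i = max(0, mu - 1/g_i), with the
    # water level mu chosen by bisection so the powers sum to total_power.
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        lo, hi = (mu, hi) if used < total_power else (lo, mu)
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.25]   # hypothetical channel gains for three selected users
powers = water_filling(gains, total_power=3.0)
print([round(p, 3) for p in powers])   # stronger channels receive more power
```

Note how the weakest channel (gain 0.25) can end up with zero power, which is exactly why a queue-aware modification is needed when fairness matters.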
Wellhoefer, Martin; Sprinzl, Wolfgang; Hahn, Rainer; Jungbauer, Alois
2014-04-11
Continuous processing of recombinant proteins was accomplished by combining continuous matrix-assisted refolding and purification by tandem simulated moving bed (SMB) size-exclusion chromatography (SEC). Recombinant N(pro) fusion proteins from inclusion bodies were dissolved with NaOH and refolded in the SMB system in a closed-loop set-up, with refolding buffer as the desorbent buffer and with the refolding buffer of the raffinate recycled by tangential flow filtration. For further purification of the refolded proteins, a second SMB operation, also based on SEC, was added. The whole system could be operated isocratically with refolding buffer as the desorbent buffer, and buffer recycling could also be applied in the purification step; thus, a significant reduction in buffer consumption was achieved. The system was evaluated with two proteins, N(pro) fusion pep6His and N(pro) fusion MCP-1. Refolding solution, which contained residual N(pro) fusion peptide, the cleaved autoprotease N(pro), and the cleaved target peptide, was used as the feed solution. Full separation of the cleaved target peptide from residual proteins was achieved, with purity and recovery in the raffinate and extract, respectively, of approximately 100%. In addition, more than 99% of the refolding buffer of the raffinate was recycled. A comparison of throughput, productivity, and buffer consumption of the integrated continuous process with two batch processes demonstrated that up to 60-fold higher throughput, up to 180-fold higher productivity, and at least 28-fold lower buffer consumption can be obtained with the integrated continuous process, which compensates for its higher complexity. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Muhammad, Umar B.; Ezugwu, Absalom E.; Ofem, Paulinus O.; Rajamäki, Jyri; Aderemi, Adewumi O.
2017-06-01
Recently, researchers in the field of wireless sensor networks have resorted to energy-harvesting techniques that allow energy to be harvested from the ambient environment to power sensor nodes. Using such energy-harvesting techniques together with proper routing protocols, an energy-neutral state can be achieved so that sensor nodes can run perpetually. In this paper, we propose an energy-neutral LEACH routing protocol, an extension of the traditional LEACH protocol. The goal of the proposed protocol is to use a gateway node in each cluster to reduce the data-transmission ranges of cluster-head nodes. Simulation results show that the proposed routing protocol achieves a higher throughput and ensures the energy-neutral status of the entire network.
High-throughput, image-based screening of pooled genetic variant libraries
Emanuel, George; Moffitt, Jeffrey R.; Zhuang, Xiaowei
2018-01-01
Image-based, high-throughput screening of genetic perturbations will advance both biology and biotechnology. We report a high-throughput screening method that allows diverse genotypes and corresponding phenotypes to be imaged in numerous individual cells. We achieve genotyping by introducing barcoded genetic variants into cells and using massively multiplexed FISH to measure the barcodes. We demonstrated this method by screening mutants of the fluorescent protein YFAST, yielding brighter and more photostable YFAST variants. PMID:29083401
Lu, Qin; Yi, Jing; Yang, Dianhai
2016-01-01
High-solid anaerobic digestion of sewage sludge achieves more efficient volatile-solid reduction and higher volatile fatty acid (VFA) and methane production than conventional low-solid anaerobic digestion. In this study, the potential mechanisms behind the better performance of high-solid anaerobic digestion of sewage sludge were investigated by using 454 high-throughput pyrosequencing and real-time PCR to analyze the microbial characteristics in sewage sludge fermentation reactors. The results obtained by 454 high-throughput pyrosequencing revealed that the phyla Chloroflexi, Bacteroidetes, and Firmicutes were the dominant functional microorganisms in both high-solid and low-solid anaerobic systems. Meanwhile, the real-time PCR assays showed that high-solid anaerobic digestion significantly increased the number of total bacteria, which enhanced the hydrolysis and acidification of sewage sludge. Further study indicated that the number of total archaea (dominated by Methanosarcina) in a high-solid anaerobic fermentation reactor was also higher than that in a low-solid reactor, resulting in higher VFA consumption and methane production. Hence, the increased numbers of key bacteria and methanogenic archaea involved in sewage sludge hydrolysis, acidification, and methanogenesis explain the better performance of high-solid anaerobic sewage sludge fermentation.
A Fair Contention Access Scheme for Low-Priority Traffic in Wireless Body Area Networks
Sajeel, Muhammad; Bashir, Faisal; Asfand-e-yar, Muhammad; Tauqir, Muhammad
2017-01-01
Recently, wireless body area networks (WBANs) have attracted significant consideration in ubiquitous healthcare. A number of medium access control (MAC) protocols, primarily derived from the superframe structure of IEEE 802.15.4, have been proposed in the literature. These MAC protocols aim to provide quality of service (QoS) by prioritizing different traffic types in WBANs. A contention access period (CAP) with high contention in priority-based MAC protocols can result in a higher number of collisions and retransmissions. During the CAP, traffic classes with higher priority dominate low-priority traffic; this leads to starvation of low-priority traffic, adversely affecting WBAN throughput, delay, and energy consumption. Hence, this paper proposes a traffic-adaptive priority-based superframe structure that reduces contention in the CAP and provides a fair chance for low-priority traffic. Simulation results in ns-3 demonstrate that the proposed MAC protocol, called traffic-adaptive priority-based MAC (TAP-MAC), achieves low energy consumption, high throughput, and low latency compared to the IEEE 802.15.4 standard and the most recent priority-based MAC protocol (PA-MAC). PMID:28832495
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compare...
High-throughput spectrometer designs in a compact form-factor: principles and applications
NASA Astrophysics Data System (ADS)
Norton, S. M.
2013-05-01
Many compact, portable Raman spectrometers have entered the market in the past few years, with applications in narcotics and hazardous material identification, as well as verification applications in pharmaceuticals and security screening. Often, the required compact form-factor has forced designers to sacrifice throughput and sensitivity for portability and low cost. We will show that a volume phase holographic (VPH)-based spectrometer design can achieve superior throughput, and thus sensitivity, over conventional Czerny-Turner reflective designs. We will look in depth at the factors influencing throughput and sensitivity and illustrate specific VPH-based spectrometer examples that highlight these design principles.
Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis
NASA Technical Reports Server (NTRS)
Han, Li
1995-01-01
The design of a reliable satellite communication link involving data transfer from a small, low-orbit satellite to a ground station through a geostationary satellite was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) were determined. The goal of this thesis is to choose a practical coding scheme that maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed-rate and variable-rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable-rate concatenated coding scheme could achieve 74 percent of the theoretical throughput based on the cutoff rate R₀, equivalent to 1.11 Gbits/day. For comparison, 87 percent is achievable for the AWGN-only case.
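The figures quoted above imply the theoretical R₀-based link capacity, a step the abstract leaves implicit (a sketch; only the numbers in the abstract are used):

```python
achieved_fraction = 0.74    # fraction of theoretical throughput, with RFI
daily_throughput = 1.11     # Gbit/day achieved by the variable-rate scheme

# Back out the theoretical daily throughput implied by the two figures.
theoretical = daily_throughput / achieved_fraction
print(f"implied theoretical link throughput: {theoretical:.2f} Gbit/day")
```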
Choi, Gihoon; Hassett, Daniel J; Choi, Seokheun
2015-06-21
There is a large global effort to improve microbial fuel cell (MFC) techniques and advance their translational potential toward practical, real-world applications. Significant boosts in MFC performance can be achieved with the development of new techniques in synthetic biology that can regulate microbial metabolic pathways or control their gene expression. For these new directions, a high-throughput and rapid screening tool for microbial biopower production is needed. In this work, a 48-well, paper-based sensing platform was developed for the high-throughput and rapid characterization of the electricity-producing capability of microbes. 48 spatially distinct wells of a sensor array were prepared by patterning 48 hydrophilic reservoirs on paper with hydrophobic wax boundaries. This paper-based platform exploited the ability of paper to quickly wick fluid and promoted bacterial attachment to the anode pads, resulting in instant current generation upon loading of the bacterial inoculum. We validated the utility of our MFC array by studying how strategic genetic modifications impacted the electrochemical activity of various Pseudomonas aeruginosa mutant strains. Within just 20 minutes, we successfully determined the electricity generation capacity of eight isogenic mutants of P. aeruginosa. These efforts demonstrate that our MFC array displays highly comparable performance characteristics and identifies genes in P. aeruginosa that can trigger a higher power density.
NASA Astrophysics Data System (ADS)
Rowlette, Jeremy A.; Fotheringham, Edeline; Nichols, David; Weida, Miles J.; Kane, Justin; Priest, Allen; Arnone, David B.; Bird, Benjamin; Chapman, William B.; Caffey, David B.; Larson, Paul; Day, Timothy
2017-02-01
The field of infrared spectral imaging and microscopy is advancing rapidly, due in large measure to the recent commercialization of the first high-throughput, high-spatial-definition quantum cascade laser (QCL) microscope. Having speed, resolution and noise performance advantages while also eliminating the need for cryogenic cooling, its introduction has established a clear path to translating the well-established diagnostic capability of infrared spectroscopy into clinical and pre-clinical histology, cytology and hematology workflows. Demand for even higher throughput while maintaining high spectral fidelity and low-noise performance continues to drive innovation in QCL-based spectral imaging instrumentation. In this talk, we will present, for the first time, recent technological advances in tunable QCL photonics which have led to an additional 10X enhancement in spectral image data collection speed while preserving the high spectral fidelity and SNR exhibited by the first generation of QCL microscopes. This new approach continues to leverage the benefits of uncooled microbolometer focal plane array cameras, which we find to be essential for ensuring both reproducibility of data across instruments and the high reliability needed in clinical applications. We will discuss the physics underlying these technological advancements as well as the new biomedical applications they are enabling, including automated whole-slide infrared chemical imaging on clinically relevant timescales.
Diffraction Efficiency Testing of Sinusoidal and Blazed Off-Plane Reflection Gratings
NASA Astrophysics Data System (ADS)
Tutt, James H.; McEntaffer, Randall L.; Marlowe, Hannah; Miles, Drew M.; Peterson, Thomas J.; Deroo, Casey T.; Scholze, Frank; Laubis, Christian
2016-09-01
Reflection gratings in the off-plane mount have the potential to enhance the performance of future high-resolution soft X-ray spectrometers. Diffraction efficiency can be optimized through the use of blazed grating facets, achieving high throughput on one side of zero-order. This paper presents the results of a comparison between a grating with a sinusoidally grooved profile and two gratings that have been blazed. The results show that the blaze does increase throughput to one side of zero-order; however, the total throughput of the sinusoidal grating is greater than that of the blazed gratings, suggesting that the method used to manufacture the blazed gratings does not produce precise facets. The blazed gratings were also tested in their Littrow and anti-Littrow configurations to quantify the sensitivity of diffraction efficiency to rotations about the grating normal. Only a small difference between the Littrow and anti-Littrow configurations is seen in the energy at which efficiency is maximized, with a small shift in peak efficiency towards higher energies in the anti-Littrow case. This shift is due to a decrease in the effective blaze angle in the anti-Littrow mounting, and is supported by PCGrate-SX V6.1 modeling carried out for each blazed grating, which predicts similar response trends in the Littrow and anti-Littrow orientations.
Achieving High Throughput for Data Transfer over ATM Networks
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.; Townsend, Jeffrey N.
1996-01-01
File-transfer rates for ftp are often reported to be relatively slow, compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, but still keep packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
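The UDP-based transfer idea above can be shown in miniature. This is an illustrative loopback sketch, not the authors' protocol: a sender streams sequence-numbered datagrams with no retransmission, and the receiver counts gaps to estimate loss, mirroring the goal of tolerating a low error level while keeping packet loss under five percent.

```python
import socket
import struct
import threading

N_PKTS, PAYLOAD = 50, b"x" * 1024   # small illustrative run

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
recv_sock.bind(("127.0.0.1", 0))    # OS picks a free port
recv_sock.settimeout(0.5)
addr = recv_sock.getsockname()

seen = set()
def receiver():
    # Record each sequence number; gaps after the run indicate lost datagrams.
    while True:
        try:
            data, _ = recv_sock.recvfrom(2048)
        except socket.timeout:
            return
        seen.add(struct.unpack("!I", data[:4])[0])

t = threading.Thread(target=receiver)
t.start()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(N_PKTS):
    send_sock.sendto(struct.pack("!I", seq) + PAYLOAD, addr)
t.join()

loss = 1.0 - len(seen) / N_PKTS
print(f"datagram loss: {loss:.1%}")
```

On loopback the loss is normally zero; over a real ATM path the same sequence-number bookkeeping yields the loss estimate that the "under five percent" target is checked against.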
A high-throughput in vitro ring assay for vasoactivity using magnetic 3D bioprinting
Tseng, Hubert; Gage, Jacob A.; Haisler, William L.; Neeley, Shane K.; Shen, Tsaiwei; Hebel, Chris; Barthlow, Herbert G.; Wagoner, Matthew; Souza, Glauco R.
2016-01-01
Vasoactive liabilities are typically assayed using wire myography, which is limited by its high cost and low throughput. To meet the demand for higher throughput in vitro alternatives, this study introduces a magnetic 3D bioprinting-based vasoactivity assay. The principle behind this assay is the magnetic printing of vascular smooth muscle cells into 3D rings that functionally represent blood vessel segments, whose contraction can be altered by vasodilators and vasoconstrictors. A cost-effective imaging modality employing a mobile device is used to capture contraction with high throughput. The goal of this study was to validate ring contraction as a measure of vasoactivity, using a small panel of known vasoactive drugs. In vitro responses of the rings matched outcomes predicted by in vivo pharmacology, and were supported by immunohistochemistry. Altogether, this ring assay robustly models vasoactivity, which could meet the need for higher throughput in vitro alternatives. PMID:27477945
Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin
2014-10-01
Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs) including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose an intra-ONU scheduling and an inter-ONU scheduling scheme, which takes NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.
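The paper's NC-based scheduling itself is not reproduced here, but the network-coding primitive it builds on fits in a few lines (a sketch with hypothetical ONU payloads): XOR-combining two equal-length packets lets one coded transmission replace two, and a node holding either original recovers the other by XOR-ing again.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equal-length packets (pad in practice if unequal).
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"ONU1-payload"          # hypothetical upstream packet from ONU 1
pkt_b = b"ONU2-payload"          # hypothetical upstream packet from ONU 2
coded = xor_bytes(pkt_a, pkt_b)  # one coded transmission instead of two

# A node that already holds one packet recovers the other:
assert xor_bytes(coded, pkt_a) == pkt_b
assert xor_bytes(coded, pkt_b) == pkt_a
print("both payloads recovered from one coded packet")
```

Halving the number of transmissions in this way is the source of the throughput and energy gains the scheduling mechanisms above try to exploit.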
A comparison of high-throughput techniques for assaying circadian rhythms in plants.
Tindall, Andrew J; Waller, Jade; Greenwood, Mark; Gould, Peter D; Hartwell, James; Hall, Anthony
2015-01-01
Over the last two decades, the development of high-throughput techniques has enabled us to probe the plant circadian clock, a key coordinator of vital biological processes, in ways previously impossible. With the circadian clock increasingly implicated in key fitness and signalling pathways, this has opened up new avenues for understanding plant development and signalling. Our tool-kit has been constantly improving through continual development and novel techniques that increase throughput, reduce costs and allow higher resolution on the cellular and subcellular levels. With circadian assays becoming more accessible and relevant than ever to researchers, in this paper we offer a review of the techniques currently available before considering the horizons in circadian investigation at ever higher throughputs and resolutions.
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency, and data aggregation has a significant impact on it. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks: it is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable, high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with Viterbi decoding for distributed source coding guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two forms of adaptivity in data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio is achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated: no energy is wasted in transmission, and even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput, low-energy data collection for wireless sensor networks.
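The abstract does not give its exact code or rate schedule, so the sketch below shows only the generic RCPC mechanism (assumed generators 7/5 octal and an illustrative puncturing pattern): a rate-1/2 mother convolutional code is punctured to a higher rate, and lower rates are recovered simply by puncturing less, which is what makes the rate family "compatible".

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    # Rate-1/2 convolutional encoder, constraint length 3 (generators 7, 5 octal).
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g1).count("1") % 2)   # parity of tapped bits
        out.append(bin(state & g2).count("1") % 2)
    return out

def puncture(coded, pattern):
    # Keep coded bit i only where pattern[i % len(pattern)] == 1 (RCPC-style).
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

data = [1, 0, 1, 1, 0, 0, 1, 0]
mother = conv_encode(data)                  # rate 1/2: 16 coded bits
punctured = puncture(mother, [1, 1, 1, 0])  # drop every 4th bit -> rate 2/3
print(len(mother), len(punctured))          # 16 12
```

In progressive transmission, the punctured (high-rate) bits go first; if decoding fails, the previously dropped bits are sent to step down toward the rate-1/2 mother code.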
Everett, Kibri H; Potter, Margaret A; Wheaton, William D; Gleason, Sherrianne M; Brown, Shawn T; Lee, Bruce Y
2013-01-01
Public health agencies use mass immunization locations to quickly administer vaccines to protect a population against an epidemic. The selection of such locations is frequently determined by available staffing levels, and in some places not all potential sites can be opened, often because of a lack of resources. Public health agencies need assistance in determining which n sites are the prime ones to open given available staff, to minimize travel time and travel distance for those in the population who need to get to a site to receive treatment. The objective was to employ geospatial analytical methods to identify the prime n locations from a predetermined set of potential locations (e.g., schools) and to determine which locations may not be able to achieve the throughput necessary to reach the herd immunity threshold for varying R₀ values. Spatial location-allocation algorithms were used to select the ideal n mass vaccination locations. Allegheny County, Pennsylvania, served as the study area. The most favorable sites were selected, and the number of individuals required to be vaccinated to achieve the herd immunity threshold for a given R₀, ranging from 1.5 to 7, was determined. Locations that did not meet the Centers for Disease Control and Prevention throughput recommendation for smallpox were identified. At R₀ = 1.5, all mass immunization locations met the required throughput to achieve the herd immunity threshold within 5 days. As R₀ increased from 2 to 7, an increasing number of sites were inadequate to meet throughput requirements. Identifying the top n sites and categorizing those with throughput challenges allows health departments to adjust staffing, shift length, or the number of sites. This method has the potential to be expanded to select immunization locations under a number of additional scenarios.
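The arithmetic linking R₀ to the required vaccination throughput can be made explicit with the classic herd-immunity threshold 1 − 1/R₀ (a sketch; the population size is hypothetical, not from the study):

```python
def herd_immunity_threshold(r0: float) -> float:
    # Fraction of the population that must be immune so that each case
    # infects, on average, fewer than one susceptible person.
    return 1.0 - 1.0 / r0

population = 1_000_000   # hypothetical county population, for illustration
for r0 in (1.5, 2.0, 3.0, 5.0, 7.0):
    frac = herd_immunity_threshold(r0)
    print(f"R0={r0}: vaccinate {frac:.0%} ({int(frac * population):,} people)")
```

The number of people to vaccinate within the 5-day window grows steeply with R₀, which is why sites that were adequate at R₀ = 1.5 fell short of the throughput requirement as R₀ approached 7.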
Healy, B J; van der Merwe, D; Christaki, K E; Meghzifene, A
2017-02-01
Medical linear accelerators (linacs) and cobalt-60 machines are both mature technologies for external beam radiotherapy. A comparison is made between these two technologies in terms of infrastructure and maintenance, dosimetry, shielding requirements, staffing, costs, security, patient throughput and clinical use. Infrastructure and maintenance are more demanding for linacs due to the complex electronic componentry. In dosimetry, a higher beam energy, modulated dose rate and smaller focal spot size mean that it is easier to create an optimised treatment with a linac for conformal dose coverage of the tumour while sparing healthy organs at risk. In shielding, the requirements for a concrete bunker are similar for cobalt-60 machines and linacs, but extra shielding and protection from neutrons are required for linacs. Staffing levels can be higher for linacs, and more staff training is required. Life cycle costs are higher for linacs, especially multi-energy linacs. Security is more complex for cobalt-60 machines because of the high-activity radioactive source. Patient throughput can be affected by source decay for cobalt-60 machines, but poor maintenance and breakdowns can severely affect patient throughput for linacs. In clinical use, more complex treatment techniques are easier to achieve with linacs, and the availability of electron beams on high-energy linacs can be useful for certain treatments. In summary, there is no simple answer to the choice between cobalt-60 machines and linacs for radiotherapy in low- and middle-income countries. In fact, a radiotherapy department with a combination of technologies, including orthovoltage X-ray units, may be an option. Local needs, conditions and resources will have to be factored into any decision on technology, taking into account the characteristics of both forms of teletherapy, with the primary goal being the sustainability of the radiotherapy service over the useful lifetime of the equipment.
Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
The RABIT: A Rapid Automated Biodosimetry Tool for Radiological Triage
Garty, Guy; Chen, Youhua; Salerno, Alessio; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Amundson, Sally A.; Brenner, David J.
2010-01-01
In response to the recognized need for high throughput biodosimetry methods for use after large scale radiological events, a logical approach is complete automation of standard biodosimetric assays that are currently performed manually. We describe progress to date on the RABIT (Rapid Automated BIodosimetry Tool), designed to score micronuclei or γ-H2AX fluorescence in lymphocytes derived from a single drop of blood from a fingerstick. The RABIT system is designed to be completely automated, from the input of the capillary blood sample into the machine to the output of a dose estimate. Improvements in throughput are achieved through use of a single drop of blood, optimization of the biological protocols for in-situ analysis in multi-well plates, implementation of robotic plate and liquid handling, and new developments in high-speed imaging. Automating well-established bioassays represents a promising approach to high-throughput radiation biodosimetry, both because high throughputs can be achieved and because the time to deployment is potentially much shorter than for a new biological assay. Here we describe the development of each of the individual modules of the RABIT system, and show preliminary data from key modules. System integration is ongoing, to be followed by calibration and validation. PMID:20065685
Infrared Photometry for Automated Telescopes: Passband Selection
NASA Astrophysics Data System (ADS)
Milone, Gene; Young, Andrew T.
2011-03-01
The high precision that photometry in the near and intermediate infrared region can provide has not been achieved, partly because of technical challenges (including cryogenics, which most IR detectors require), and partly because the filters in common use are not optimized to avoid water-vapor absorptions, which are the principal impediment to precise ground-based IR photometry. We review the IRWG filters that achieve this goal, and the trials that were undertaken to demonstrate their superiority. We focus especially on the near-IR set and, for high-elevation sites, the passbands in the N window. We also discuss the price to be paid for the improved precision, in the form of lower throughput, and why it should be paid: to achieve not only higher precision (i.e., improved signal-to-noise ratio), but also lower extinction, thus producing higher accuracy in extra-atmospheric magnitudes. The edges of the IRWG passbands are not defined by the edges of the atmospheric windows; therefore, they admit no flux from these (constantly varying) edges. The throughput cost and the lack of a large body of data already obtained in these passbands are the principal reasons why the IRWG filters are not in wide use at observatories around the world that currently do IR work. Yet a measure of the signal-to-noise ratio varies inversely with both extinction and a measure of the Forbes effect. So the small loss of raw throughput is recouped in signal-to-noise gain. We illustrate these points with both near- and intermediate-IR passbands. There is also the matter of cost for small production runs of these filters; reduced costs can be realized through bulk orders with uniform filter specifications.
As a consequence, the near-IR IRWG passbands offer the prospect of photometry at both high- and low-elevation sites that are capable of supporting precise photometry, thereby freeing infrared photometry from the need to access exclusively high and dry sites, although photometry done at those sites can also benefit from improved accuracy and transformability. We suggest that if the IRWG passbands are made available, they will be used! New automated systems making use of these passbands would establish the system more widely, create a larger body of data to which future observations will be fully transformable, and be cheaper to purchase. This work has been supported in part by grants to EFM by the Canadian Natural Sciences and Engineering Research Council.
Electron beam throughput from raster to imaging
NASA Astrophysics Data System (ADS)
Zywno, Marek
2016-12-01
Two architectures of electron beam tools are presented: the single-beam MEBES Exara, designed and built by Etec Systems for mask writing, and the Reflective E-Beam Lithography tool (REBL), designed and built by KLA-Tencor under DARPA Agreement No. HR0011-07-9-0007. Both tools implemented technologies not used before to achieve their goals. The MEBES X, renamed Exara for marketing purposes, used an air-bearing stage running in vacuum to achieve smooth continuous scanning. The REBL used two-dimensional imaging to distribute charge to a 4k-pixel swath to achieve writing times on the order of one wafer per hour, scalable to throughputs approaching those of optical projection tools. Three stage architectures were designed for continuous scanning of wafers: linear maglev, rotary maglev, and dual linear maglev.
Fast and Flexible Successive-Cancellation List Decoders for Polar Codes
NASA Astrophysics Data System (ADS)
Hashemi, Seyyed Ali; Condo, Carlo; Gross, Warren J.
2017-11-01
Polar codes have gained a significant amount of attention during the past few years and have been selected as a coding scheme for the next-generation mobile broadband standard. Among decoding schemes, successive-cancellation list (SCL) decoding provides a reasonable trade-off between error-correction performance and hardware implementation complexity when used to decode polar codes, at the cost of limited throughput. The simplified SCL (SSCL) and its extension SSCL-SPC increase the speed of decoding by removing redundant calculations when encountering particular information and frozen bit patterns (rate-one and single parity check codes), while keeping the error-correction performance unaltered. In this paper, we improve SSCL and SSCL-SPC by proving that the list size imposes a specific number of bit estimations required to decode rate-one and single parity check codes. Thus, the number of estimations can be limited while guaranteeing exactly the same error-correction performance as if all bits of the code were estimated. We call the new decoding algorithms Fast-SSCL and Fast-SSCL-SPC. Moreover, we show that the number of bit estimations in a practical application can be tuned to achieve a desirable speed, while keeping the error-correction performance almost unchanged. Hardware architectures implementing both algorithms are then described and implemented: it is shown that our design can achieve 1.86 Gb/s throughput, higher than the best state-of-the-art decoders.
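The idea behind the rate-one shortcut above — that with list size L only the L-1 least reliable bit positions are worth flipping, since further splits cannot survive list pruning — can be sketched in a few lines of Python. This is a hedged illustration only: the function name, the exhaustive candidate enumeration, and the |LLR|-sum penalty are simplifications of the actual SCL path metric, not the authors' hardware algorithm.

```python
import itertools

def rate1_candidates(llrs, list_size):
    """Hedged sketch of rate-1 node handling in a list decoder.

    Hard-decide every bit from its LLR sign, then consider flips only
    among the (list_size - 1) least reliable positions; each candidate
    is scored by the sum of |LLR| over its flipped bits.
    """
    n = len(llrs)
    hard = [0 if l >= 0 else 1 for l in llrs]
    # indices sorted by reliability (|LLR|), least reliable first
    order = sorted(range(n), key=lambda i: abs(llrs[i]))
    k = min(list_size - 1, n)  # the bound proved in the paper
    cands = []
    # exhaustive enumeration of flip subsets (2**k), fine for a sketch
    for flips in itertools.chain.from_iterable(
            itertools.combinations(order[:k], r) for r in range(k + 1)):
        c = hard[:]
        for i in flips:
            c[i] ^= 1
        penalty = sum(abs(llrs[i]) for i in flips)
        cands.append((penalty, c))
    cands.sort(key=lambda t: t[0])
    return cands[:list_size]  # keep the L best candidate codewords
```

With list size 2, only the single least reliable position is ever flipped, so the surviving candidates are the hard decision and its one cheapest variant.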
Optima HD Imax: Molecular Implant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tieger, D. R.; Splinter, P. R.; Hsieh, T. J.
2008-11-03
Molecular implantation offers semiconductor device manufacturers multiple advantages over traditional high current ion implanters. The dose multiplication due to implanting more than one atom per molecule and the transport of beams at higher energies relative to the effective particle energies result in significant throughput enhancements without risk of energy contamination. The Optima HD Imax is introduced with molecular implant capability and the ability to reach up to 4.2 keV effective {sup 11}B from octadecaborane (B{sub 18}H{sub 22}). The ion source and beamline are optimized for molecular species ionization and transport. The beamline is coupled to the Optima HD mechanically scanned endstation. The use of spot beam technology with ionized molecules maximizes the throughput potential and produces uniform implants with fast setup time and with superior angle control. The implanter architecture is designed to run multiple molecular species; for example, in addition to B{sub 18}H{sub 22} the system is capable of implanting carbon molecules for strain engineering and shallow junction engineering. Source lifetime data and typical operating conditions are described both for high dose, memory applications such as dual poly gate as well as lower energy implants for source drain extension and contact implants. Throughputs have been achieved in excess of 50 wafers per hour at doses up to 1x10{sup 16} ions/cm{sup 2} and for energies as low as 1 keV.
TCP Throughput Profiles Using Measurements over Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincare map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
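The ramp-up/sustainment abstraction can be illustrated with a toy model (hypothetical, and far cruder than the paper's generic model; the function name and all constants are illustrative): slow-start roughly doubles the window each RTT until it covers the bandwidth-delay product, after which the flow sustains the connection capacity. For large transfers the ramp-up cost grows only slowly with RTT, which is the qualitative origin of the concave profiles described above.

```python
import math

def effective_throughput(size_bytes, capacity_bps, rtt_s, mss_bits=1500 * 8):
    """Toy ramp-up/sustainment model of a single TCP flow's throughput.

    Slow-start phase: window doubles each RTT from one segment up to the
    bandwidth-delay product (BDP). Sustainment phase: remaining bytes move
    at the full connection capacity.
    """
    bdp_bits = capacity_bps * rtt_s
    # number of RTTs needed to grow from one segment to the BDP
    ramp_rtts = max(0.0, math.log2(max(bdp_bits / mss_bits, 1.0)))
    ramp_time = ramp_rtts * rtt_s
    # bits delivered during ramp-up (geometric growth), capped by the transfer
    ramp_bits = min(mss_bits * (2 ** ramp_rtts - 1), size_bytes * 8)
    sustain_bits = size_bytes * 8 - ramp_bits
    total_time = ramp_time + sustain_bits / capacity_bps
    return (size_bytes * 8) / total_time  # achieved bits per second
```

For a 1 GB transfer on a 10 Gbps path, the model returns nearly full capacity at microsecond RTTs and degrades gracefully (concavely) as the RTT grows toward hundreds of milliseconds.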
Nemes, Peter; Hoover, William J; Keire, David A
2013-08-06
Sensors with high chemical specificity and enhanced sample throughput are vital to screening food products and medical devices for chemical or biochemical contaminants that may pose a threat to public health. For example, the rapid detection of oversulfated chondroitin sulfate (OSCS) in heparin could prevent a recurrence of the heparin adulteration that caused hundreds of severe adverse events, including deaths, worldwide in 2007-2008. Here, rapid pyrolysis is integrated with direct analysis in real time (DART) mass spectrometry to rapidly screen major glycosaminoglycans, including heparin, chondroitin sulfate A, dermatan sulfate, and OSCS. The results demonstrate that, compared to traditional liquid chromatography-based analyses, pyrolysis mass spectrometry achieved at least 250-fold higher sample throughput and was compatible with samples volume-limited to about 300 nL. Pyrolysis yielded an abundance of fragment ions (e.g., 150 different m/z species), many of which were specific to the parent compound. Using multivariate and statistical data analysis models, these data enabled facile differentiation of the glycosaminoglycans with high throughput. After method development was completed, authentically contaminated samples obtained during the heparin crisis by the FDA were analyzed in a blinded manner for OSCS contamination. The lower limits of differentiation and detection were 0.1% (w/w) OSCS in heparin and 100 ng/μL (20 ng) OSCS in water, respectively. For quantitative purposes the linear dynamic range spanned approximately 3 orders of magnitude. Moreover, this chemical readout was successfully employed to find clues in the manufacturing history of the heparin samples that can be used for surveillance purposes. The presented technology and data analysis protocols are anticipated to be readily adaptable to other chemical and biochemical agents and volume-limited samples.
nextPARS: parallel probing of RNA structures in Illumina
Saus, Ester; Willis, Jesse R.; Pryszcz, Leszek P.; Hafez, Ahmed; Llorens, Carlos; Himmelbauer, Heinz
2018-01-01
RNA molecules play important roles in virtually every cellular process. These functions are often mediated through the adoption of specific structures that enable RNAs to interact with other molecules. Thus, determining the secondary structures of RNAs is central to understanding their function and evolution. In recent years several sequencing-based approaches have been developed that allow probing structural features of thousands of RNA molecules present in a sample. Here, we describe nextPARS, a novel Illumina-based implementation of in vitro parallel probing of RNA structures. Our approach achieves comparable accuracy to previous implementations, while enabling higher throughput and sample multiplexing. PMID:29358234
Wide-field two-photon microscopy with temporal focusing and HiLo background rejection
NASA Astrophysics Data System (ADS)
Yew, Elijah Y. S.; Choi, Heejin; Kim, Daekeun; So, Peter T. C.
2011-03-01
Scanningless depth-resolved microscopy can be achieved through spatio-temporal focusing, as has been demonstrated previously. The advantage of this method is that a large area may be imaged without scanning, resulting in higher throughput of the imaging system. Because it is a widefield technique, the optical sectioning effect is considerably poorer than with conventional spatial-focusing two-photon microscopy. Here we propose wide-field two-photon microscopy based on spatio-temporal focusing and employing background rejection based on the HiLo microscope principle. We demonstrate the effects of applying HiLo microscopy to widefield temporally focused two-photon microscopy.
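The HiLo principle — taking low-frequency content from the structured (optically sectioned) image and high-frequency content from the uniform widefield image, which is inherently in focus at high spatial frequencies — can be sketched in one dimension. This is a hypothetical toy: a moving-average filter stands in for the real low-pass kernel, and `window` and `eta` are illustrative parameters.

```python
def hilo_1d(uniform, structured, window=3, eta=1.0):
    """Toy 1-D HiLo fusion: low frequencies from the structured image,
    high frequencies from the uniform widefield image."""
    def smooth(sig):
        # simple moving average as a stand-in low-pass filter
        half = window // 2
        out = []
        for i in range(len(sig)):
            lo, hi = max(0, i - half), min(len(sig), i + half + 1)
            out.append(sum(sig[lo:hi]) / (hi - lo))
        return out
    lo = smooth(structured)                                  # sectioned low-pass
    hi = [u - s for u, s in zip(uniform, smooth(uniform))]   # widefield high-pass
    return [l + eta * h for l, h in zip(lo, hi)]             # fused image
```

On a flat (featureless) pair of images the high-pass term vanishes and the fused output reduces to the sectioned low-frequency content, as expected.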
Shrink-induced sorting using integrated nanoscale magnetic traps.
Nawarathna, Dharmakeerthi; Norouzi, Nazila; McLane, Jolie; Sharma, Himanshu; Sharac, Nicholas; Grant, Ted; Chen, Aaron; Strayer, Scott; Ragan, Regina; Khine, Michelle
2013-02-11
We present a plastic microfluidic device with integrated nanoscale magnetic traps (NSMTs) that separates magnetic from non-magnetic beads with high purity and throughput, and unprecedented enrichments. Numerical simulations indicate significantly higher localized magnetic field gradients than previously reported. We demonstrated >20 000-fold enrichment for 0.001% magnetic bead mixtures. Since we achieve high purity at all flow-rates tested, this is a robust, rapid, portable, and simple solution to sort target species from small volumes amenable for point-of-care applications. We used the NSMT in a 96 well format to extract DNA from small sample volumes for quantitative polymerase chain reaction (qPCR).
Sun, Huaju; Chang, Qing; Liu, Long; Chai, Kungang; Lin, Guangyan; Huo, Qingling; Zhao, Zhenxia; Zhao, Zhongxing
2017-11-22
Several novel peptides with high ACE-I inhibitory activity were successfully screened from sericin hydrolysate (SH) by coupling in silico and in vitro approaches for the first time. The screening was achieved through high-throughput in silico simulation followed by in vitro verification. Predictions from a QSAR model indicated that these SH peptides possess ACE-I inhibitory activity, and six chosen peptides exhibited moderately high ACE-I inhibitory activities (log IC50 values: 1.63-2.34). Moreover, two tripeptides among the six chosen peptides were selected for analysis of the ACE-I inhibition mechanism; Lineweaver-Burk plots indicated that they behave as competitive ACE-I inhibitors. The C-terminal residues of short-chain peptides that contain more H-bond acceptor groups could more easily form hydrogen bonds with ACE-I and have higher ACE-I inhibitory activity. Overall, sericin protein, as a strong source of ACE-I inhibitors, could be deemed a promising agent for antihypertension applications.
3D Cultivation Techniques for Primary Human Hepatocytes
Bachmann, Anastasia; Moll, Matthias; Gottwald, Eric; Nies, Cordula; Zantl, Roman; Wagner, Helga; Burkhardt, Britta; Sánchez, Juan J. Martínez; Ladurner, Ruth; Thasler, Wolfgang; Damm, Georg; Nussler, Andreas K.
2015-01-01
One of the main challenges in drug development is the prediction of in vivo toxicity based on in vitro data. The standard cultivation system for primary human hepatocytes is based on monolayer cultures, even though it is known that these conditions result in a loss of hepatocyte morphology and of liver-specific functions, such as drug-metabolizing enzymes and transporters. As it has been demonstrated that hepatocytes embedded between two sheets of collagen maintain their function, various hydrogels and scaffolds for the 3D cultivation of hepatocytes have been developed. To further improve or maintain hepatic functions, 3D cultivation has been combined with perfusion. In this manuscript, we discuss the benefits and drawbacks of different 3D microfluidic devices. For most systems that are currently available, the main issues are the requirement for large cell numbers, low throughput, and expensive equipment, which render these devices unattractive for research and the drug-developing industry. A higher acceptance of these devices could be achieved by their simplification and their compatibility with high-throughput operation, as both aspects are of major importance for a user-friendly device. PMID:27600213
Workshop Background and Summary of Webinars (IVIVE workshop)
Toxicokinetics (TK) provides a bridge between hazard and exposure by predicting tissue concentrations due to exposure. Higher throughput toxicokinetics (HTTK) appears to provide essential data to establish context for in vitro bioactivity data obtained through high throughput ...
Experiments and Analyses of Data Transfers Over Wide-Area Dedicated Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata
Dedicated wide-area network connections are increasingly employed in high-performance computing and big data scenarios. One might expect the performance and dynamics of data transfers over such connections to be easy to analyze due to the lack of competing traffic. However, non-linear transport dynamics and end-system complexities (e.g., multi-core hosts and distributed filesystems) can in fact make analysis surprisingly challenging. We present extensive measurements of memory-to-memory and disk-to-disk file transfers over 10 Gbps physical and emulated connections with 0–366 ms round trip times (RTTs). For memory-to-memory transfers, profiles of both TCP and UDT throughput as a function of RTT show concave and convex regions; large buffer sizes and more parallel flows lead to wider concave regions, which are highly desirable. TCP and UDT both also display complex throughput dynamics, as indicated by their Poincare maps and Lyapunov exponents. For disk-to-disk transfers, we determine that high throughput can be achieved via a combination of parallel I/O threads, parallel network threads, and direct I/O mode. Our measurements also show that Lustre filesystems can be mounted over long-haul connections using LNet routers, although challenges remain in jointly optimizing file I/O and transport method parameters to achieve peak throughput.
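As a rough illustration of the stability metrics mentioned above, a crude largest-Lyapunov-exponent proxy can be computed from a scalar throughput trace by averaging the log ratio of successive divergences along the one-dimensional return (Poincare) map. This is a hypothetical sketch, not the authors' estimator; negative values indicate contracting (stable) dynamics, values near zero a steady trace, and positive values diverging fluctuations.

```python
import math

def lyapunov_proxy(series, eps=1e-9):
    """Crude largest-Lyapunov-exponent proxy for a scalar throughput trace.

    Treats successive samples as iterates of a 1-D map x_t -> x_{t+1} and
    averages log(d_{t+1} / d_t) over consecutive difference magnitudes;
    eps guards against log(0) on flat segments.
    """
    rates = []
    for i in range(len(series) - 2):
        d0 = abs(series[i + 1] - series[i]) + eps
        d1 = abs(series[i + 2] - series[i + 1]) + eps
        rates.append(math.log(d1 / d0))
    return sum(rates) / len(rates)
```

A perfectly steady trace yields a proxy of zero, while a geometrically settling trace yields a negative value, matching the intuition that smaller exponents accompany stabler throughput dynamics.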
High-Reflectivity Coatings for a Vacuum Ultraviolet Spectropolarimeter
NASA Astrophysics Data System (ADS)
Narukage, Noriyuki; Kubo, Masahito; Ishikawa, Ryohko; Ishikawa, Shin-nosuke; Katsukawa, Yukio; Kobiki, Toshihiko; Giono, Gabriel; Kano, Ryouhei; Bando, Takamasa; Tsuneta, Saku; Auchère, Frédéric; Kobayashi, Ken; Winebarger, Amy; McCandless, Jim; Chen, Jianrong; Choi, Joanne
2017-03-01
Precise polarization measurements in the vacuum ultraviolet (VUV) region are expected to be a new tool for inferring the magnetic fields in the upper atmosphere of the Sun. High-reflectivity coatings are key elements to achieving high-throughput optics for precise polarization measurements. We fabricated three types of high-reflectivity coatings for a solar spectropolarimeter in the hydrogen Lyman-α (Lyα; 121.567 nm) region and evaluated their performance. The first high-reflectivity mirror coating offers a reflectivity of more than 80 % in Lyα optics. The second is a reflective narrow-band filter coating that has a peak reflectivity of 57 % in Lyα, whereas its reflectivity in the visible light range is lower than 1/10 of the peak reflectivity (˜ 5 % on average). This coating can be used to easily realize a visible light rejection system, which is indispensable for a solar telescope, while maintaining high throughput in the Lyα line. The third is a high-efficiency reflective polarizing coating that almost exclusively reflects an s-polarized beam at its Brewster angle of 68° with a reflectivity of 55 %. This coating achieves both high polarizing power and high throughput. These coatings contributed to the high-throughput solar VUV spectropolarimeter called the Chromospheric Lyman-Alpha SpectroPolarimeter (CLASP), which was launched on 3 September, 2015.
Achieving Fair Throughput among TCP Flows in Multi-Hop Wireless Mesh Networks
NASA Astrophysics Data System (ADS)
Hou, Ting-Chao; Hsu, Chih-Wei
Previous research shows that the IEEE 802.11 DCF channel contention mechanism is not capable of providing throughput fairness among nodes in different locations of a wireless mesh network. The node nearest the gateway will always strive for the chance to transmit data, causing fewer transmission opportunities for the nodes farther from the gateway and resulting in starvation. Prior studies modify the DCF mechanism to address the fairness problem. This paper focuses on fairness when TCP flows are carried over wireless mesh networks. Without modifying lower-layer protocols, the current work identifies TCP parameters that impact throughput fairness and proposes adjusting those parameters to reduce frame collisions and improve throughput fairness. With the aid of mathematical formulation and ns2 simulations, this study finds that frame transmission from each node can be effectively controlled by properly setting the delayed ACK timer and using a suitable advertised window. The proposed method reduces frame collisions and greatly improves TCP throughput fairness.
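One user-space way to bound a TCP flow's advertised window, in the spirit of the tuning above, is to shrink the socket receive buffer before connecting, since the advertised window cannot exceed the receive buffer. This is a hedged sketch: the buffer size is illustrative, the kernel may round or double the requested value, and it does not reproduce the paper's delayed-ACK-timer control, which typically requires kernel-level knobs.

```python
import socket

def make_capped_socket(rcvbuf_bytes=16384):
    """Create a TCP socket whose receive buffer (and hence the window it
    can advertise) is capped. Must be set before connect() so the cap
    applies during window negotiation; 16384 is an illustrative value."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    return s
```

Note that Linux, for example, doubles the requested SO_RCVBUF internally for bookkeeping, so getsockopt may report a larger value than requested.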
Higher Throughput Toxicokinetics to Allow Extrapolation (EPA-Japan Bilateral EDSP meeting)
As part of "Ongoing EDSP Directions & Activities" I will present CSS research on high throughput toxicokinetics, including in vitro data and models to allow rapid determination of the real world doses that may cause endocrine disruption.
NASA Technical Reports Server (NTRS)
Egen, N. B.; Twitty, G. E.; Bier, M.
1979-01-01
Isoelectric focusing is a high-resolution technique for separating and purifying large peptides, proteins, and other biomolecules. The apparatus described in the present paper constitutes a new approach to fluid stabilization and increased throughput. Stabilization is achieved by flowing the process fluid uniformly through an array of closely spaced filter elements oriented parallel both to the electrodes and the direction of the flow. This seems to overcome the major difficulties of parabolic flow and electroosmosis at the walls, while limiting the convection to chamber compartments defined by adjacent spacers. Increased throughput is achieved by recirculating the process fluid through external heat exchange reservoirs, where the Joule heat is dissipated.
High efficiency solution processed sintered CdTe nanocrystal solar cells: the role of interfaces.
Panthani, Matthew G; Kurley, J Matthew; Crisp, Ryan W; Dietz, Travis C; Ezzyat, Taha; Luther, Joseph M; Talapin, Dmitri V
2014-02-12
Solution processing of photovoltaic semiconducting layers offers the potential for drastic cost reduction through improved materials utilization and high device throughput. One compelling solution-based processing strategy utilizes semiconductor layers produced by sintering nanocrystals into large-grain semiconductors at relatively low temperatures. Using n-ZnO/p-CdTe as a model system, we fabricate sintered CdTe nanocrystal solar cells processed at 350 °C with power conversion efficiencies (PCE) as high as 12.3%. JSC values of over 25 mA cm(-2) are achieved, which are comparable to or higher than those achieved using traditional, close-space-sublimated CdTe. We find that the VOC can be substantially increased by applying forward bias for short periods of time. Capacitance measurements as well as intensity- and temperature-dependent analysis indicate that the increased VOC is likely due to relaxation of an energetic barrier at the ITO/CdTe interface.
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Planar patch clamp: advances in electrophysiology.
Brüggemann, Andrea; Farre, Cecilia; Haarmann, Claudia; Haythornthwaite, Ali; Kreir, Mohamed; Stoelzle, Sonja; George, Michael; Fertig, Niels
2008-01-01
Ion channels have gained increased interest as therapeutic targets over recent years, since a growing number of human and animal diseases have been attributed to defects in ion channel function. Potassium channels are the largest and most diverse family of ion channels. Pharmaceutical agents such as Glibenclamide, an inhibitor of K(ATP) channel activity which promotes insulin release, have been successfully sold on the market for many years. So far, only a small group of the known ion channels have been addressed as potential drug targets. The functional testing of drugs on these ion channels has always been the bottleneck in the development of these types of pharmaceutical compounds. New generations of automated patch clamp screening platforms allow a higher throughput for drug testing and widen this bottleneck. Due to their planar chip design, not only is a higher throughput achieved, but new applications have also become possible. One of the advantages of planar patch clamp is the possibility of perfusing the intracellular side of the membrane during a patch clamp experiment in the whole-cell configuration. Furthermore, the extracellular membrane remains accessible for compound application during the experiment. Internal perfusion can be used not only for patch clamp experiments with cell membranes, but also for those with artificial lipid bilayers. In this chapter we describe how internal perfusion can be applied to potassium channels expressed in Jurkat cells, and to Gramicidin channels reconstituted in a lipid bilayer.
Multiple-mouse MRI with multiple arrays of receive coils.
Ramirez, Marc S; Esparza-Coss, Emilio; Bankson, James A
2010-03-01
Compared to traditional single-animal imaging methods, multiple-mouse MRI has been shown to dramatically improve imaging throughput and reduce the potentially prohibitive cost of instrument access. To date, at most a single radiofrequency coil has been dedicated to each animal being simultaneously scanned, thus limiting the sensitivity, flexibility, and ultimate throughput. The purpose of this study was to investigate the feasibility of multiple-mouse MRI with a phased-array coil dedicated to each animal. A dual-mouse imaging system, consisting of a pair of two-element phased-array coils, was developed and used to achieve acceleration factors greater than the number of animals scanned at once. By simultaneously scanning two mice with a retrospectively gated cardiac cine MRI sequence, a 3-fold acceleration was achieved with signal-to-noise ratio in the heart equivalent to that achieved with an unaccelerated scan using a commercial mouse birdcage coil. (c) 2010 Wiley-Liss, Inc.
Electric-optic resonant phase modulator
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung (Inventor); Robinson, Deborah L. (Inventor); Hemmati, Hamid (Inventor)
1994-01-01
An electro-optic resonant cavity is used to achieve phase modulation with lower driving voltages. Laser damage thresholds are inherently higher than with previously used integrated optics due to the utilization of bulk optics. Phase modulation is achieved at higher speeds and with lower driving voltages than previously obtained with non-resonant electro-optic phase modulators. The instant scheme uses a data-locking dither approach as opposed to conventional sinusoidal locking schemes. In accordance with a disclosed embodiment, a resonant cavity modulator has been designed to operate at a data rate in excess of 100 Mbps. By carefully choosing the cavity finesse and its dimension, it is possible to control the pulse switching time to within 4 ns and to limit the required switching voltage to within 10 V. Experimentally, the resonant cavity can be maintained on resonance with respect to the input laser signal by monitoring the fluctuation of output intensity as the cavity is switched. This cavity-locking scheme can be applied using only the random data sequence, without the need for additional dithering of the cavity. Compared to waveguide modulators, the resonant cavity has a comparable modulating voltage requirement. Because of its bulk geometry, the resonant cavity modulator has the potential to accommodate higher throughput power. Furthermore, mode matching into a bulk device is easier and typically can be achieved with higher efficiency. On the other hand, unlike waveguide modulators, which are essentially traveling wave devices, the resonant cavity modulator requires that the cavity be maintained in resonance with respect to the incoming laser signal. An additional control loop is incorporated into the modulator to maintain the cavity on resonance.
SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.
Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver
2012-07-15
In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), where millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA), used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, the user manual and a tutorial. SINA is made available under a personal use license.
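The k-mer search stage used to pick close reference sequences before partial order alignment can be illustrated with a minimal shared-k-mer similarity. This is a hypothetical sketch with an illustrative k; SINA's actual search and scoring are considerably more sophisticated.

```python
def kmer_profile(seq, k=8):
    """Set of all overlapping k-mers in a sequence (k is illustrative)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(query, reference, k=8):
    """Fraction of the query's distinct k-mers shared with the reference,
    a toy stand-in for the reference-selection score."""
    q, r = kmer_profile(query, k), kmer_profile(reference, k)
    return len(q & r) / max(len(q), 1)
```

Ranking a database by this score and aligning the query against the top hits is the general shape of a k-mer-guided incremental aligner.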
Kavlock, Robert; Dix, David
2010-02-01
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce and the contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models.
The models and underlying data are being made publicly available through the Aggregated Computational Toxicology Resource (ACToR), the Distributed Structure-Searchable Toxicity (DSSTox) Database Network, and other U.S. EPA websites. While initially focused on improving the hazard identification process, the CTRP is placing increasing emphasis on using high-throughput bioactivity profiling data in systems modeling to support quantitative risk assessments, and in developing complementary higher throughput exposure models. This integrated approach will enable analysis of life-stage susceptibility, and understanding of the exposures, pathways, and key events by which chemicals exert their toxicity in developing systems (e.g., endocrine-related pathways). The CTRP will be a critical component in next-generation risk assessments utilizing quantitative high-throughput data and providing a much higher capacity for assessing chemical toxicity than is currently available.
Zador, Anthony M.; Dubnau, Joshua; Oyibo, Hassana K.; Zhan, Huiqing; Cao, Gang; Peikon, Ian D.
2012-01-01
Connectivity determines the function of neural circuits. Historically, circuit mapping has usually been viewed as a problem of microscopy, but no current method can achieve high-throughput mapping of entire circuits with single neuron precision. Here we describe a novel approach to determining connectivity. We propose BOINC (“barcoding of individual neuronal connections”), a method for converting the problem of connectivity into a form that can be read out by high-throughput DNA sequencing. The appeal of using sequencing is that its scale—sequencing billions of nucleotides per day is now routine—is a natural match to the complexity of neural circuits. An inexpensive high-throughput technique for establishing circuit connectivity at single neuron resolution could transform neuroscience research. PMID:23109909
Identification of functional modules using network topology and high-throughput data.
Ulitsky, Igor; Shamir, Ron
2007-01-26
With the advent of systems biology, biological knowledge is often represented today by networks. These include regulatory and metabolic networks, protein-protein interaction networks, and many others. At the same time, high-throughput genomics and proteomics techniques generate very large data sets, which require sophisticated computational analysis. Usually, separate and different analysis methodologies are applied to each of the two data types. An integrated investigation of network and high-throughput information together can improve the quality of the analysis by accounting simultaneously for topological network properties alongside intrinsic features of the high-throughput data. We describe a novel algorithmic framework for this challenge. We first transform the high-throughput data into similarity values (e.g., by computing pairwise similarity of gene expression patterns from microarray data). Then, given a network of genes or proteins and similarity values between some of them, we seek connected sub-networks (or modules) that manifest high similarity. We develop algorithms for this problem and evaluate their performance on the osmotic shock response network in S. cerevisiae and on the human cell cycle network. We demonstrate that focused, biologically meaningful and relevant functional modules are obtained. In comparison with extant algorithms, our approach has higher sensitivity and higher specificity. We have demonstrated that our method can accurately identify functional modules. Hence, it carries the promise to be highly useful in the analysis of high-throughput data.
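As a hedged sketch of the general idea, not the authors' algorithm, a connected module can be grown greedily from a seed node by repeatedly adding the neighbor with the highest average similarity to the current module (all names and the threshold below are illustrative assumptions):

```python
# Greedy seed-and-grow module search: a simplified stand-in for the paper's
# framework, for illustration only.

def grow_module(graph, similarity, seed, min_sim=0.5):
    """graph: node -> set of neighbors; similarity: (u, v) -> score in [0, 1]."""
    def sim(u, v):
        return similarity.get((u, v), similarity.get((v, u), 0.0))

    module = {seed}
    while True:
        frontier = {n for m in module for n in graph[m]} - module
        best, best_score = None, min_sim
        for cand in frontier:
            score = sum(sim(cand, m) for m in module) / len(module)
            if score >= best_score:
                best, best_score = cand, score
        if best is None:          # no neighbor clears the similarity threshold
            return module
        module.add(best)
```

Connectivity is enforced by only ever considering neighbors of the current module, and the similarity data steers which connected region is reported.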
Azpiazu, Rubén; Amaral, Alexandra; Castillo, Judit; Estanyol, Josep Maria; Guimerà, Marta; Ballescà, Josep Lluís; Balasch, Juan; Oliva, Rafael
2014-06-01
Are there quantitative alterations in the proteome of normozoospermic sperm samples that are able to complete IVF but whose female partner does not achieve pregnancy? Normozoospermic sperm samples with different IVF outcomes (pregnancy versus no pregnancy) differed in the levels of at least 66 proteins. The analysis of the proteome of sperm samples with distinct fertilization capacity using low-throughput proteomic techniques resulted in the detection of a few differential proteins. Current high-throughput mass spectrometry approaches allow the identification and quantification of a substantially higher number of proteins. This was a case-control study including 31 men with normozoospermic sperm and their partners who underwent IVF with successful fertilization recruited between 2007 and 2008. Normozoospermic sperm samples from 15 men whose female partners did not achieve pregnancy after IVF (no pregnancy) and 16 men from couples that did achieve pregnancy after IVF (pregnancy) were included in this study. To perform the differential proteomic experiments, 10 no pregnancy samples and 10 pregnancy samples were separately pooled and subsequently used for tandem mass tags (TMT) protein labelling, sodium dodecyl sulphate-polyacrylamide gel electrophoresis, liquid chromatography tandem mass spectrometry (LC-MS/MS) identification and peak intensity relative protein quantification. Bioinformatic analyses were performed using UniProt Knowledgebase, DAVID and Reactome. Individual samples (n = 5 no pregnancy samples; n = 6 pregnancy samples) and aliquots from the above TMT pools were used for western blotting. By using TMT labelling and LC-MS/MS, we have detected 31 proteins present at lower abundance (ratio no pregnancy/pregnancy < 0.67) and 35 at higher abundance (ratio no pregnancy/pregnancy > 1.5) in the no pregnancy group. 
Bioinformatic analyses showed that the proteins with differing abundance are involved in chromatin assembly and lipoprotein metabolism (P values < 0.05). In addition, the differential abundance of one of the proteins (SRSF protein kinase 1) was further validated by western blotting using independent samples (P value < 0.01). For individual samples the amount of recovered sperm not used for IVF was low and in most of the cases insufficient for MS analysis, therefore pools of samples had to be used to this end. Alterations in the proteins involved in chromatin assembly and metabolism may result in epigenetic errors during spermatogenesis, leading to inaccurate sperm epigenetic signatures, which could ultimately prevent embryonic development. These sperm proteins may thus possibly have clinical relevance. This work was supported by the Spanish Ministry of Economy and Competitiveness (Ministerio de Economia y Competividad; FEDER BFU 2009-07118 and PI13/00699) and Fundación Salud 2000 SERONO13-015. There are no competing interests to declare.
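The differential-abundance rule stated in the abstract (ratio no pregnancy/pregnancy < 0.67 for lower abundance, > 1.5 for higher abundance) reduces to a simple threshold filter; the toy function below only restates that rule, and the protein names are placeholders:

```python
# Toy restatement of the abstract's TMT ratio cutoffs (0.67 and 1.5); not the
# authors' analysis pipeline.

def classify(ratios, low=0.67, high=1.5):
    """ratios: protein -> (no-pregnancy / pregnancy) abundance ratio."""
    lower = [p for p, r in ratios.items() if r < low]
    higher = [p for p, r in ratios.items() if r > high]
    return lower, higher
```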
Cognitive ergonomics of operational tools
NASA Astrophysics Data System (ADS)
Lüdeke, A.
2012-10-01
Control systems have become increasingly powerful over the past decades. The availability of high data throughput and sophisticated graphical interactions has opened a variety of new possibilities. But has this helped to provide intuitive, easy-to-use applications to simplify the operation of modern large-scale accelerator facilities? We will discuss what makes an application useful to operation and what is necessary to make a tool easy to use. We will show that even the implementation of a small number of simple application design rules can help to create ergonomic operational tools. The author is convinced that such tools do indeed help to achieve higher beam availability and better beam performance at accelerator facilities.
Measurements of file transfer rates over dedicated long-haul connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Settlemyer, Bradley W; Imam, Neena
2016-01-01
Wide-area file transfers are an integral part of several High-Performance Computing (HPC) scenarios. Dedicated network connections with high capacity, low loss rate and low competing traffic are increasingly being provisioned over current HPC infrastructures to support such transfers. To gain insights into these file transfers, we collected transfer rate measurements for Lustre and xfs file systems between dedicated multi-core servers over emulated 10 Gbps connections with round trip times (rtt) in the 0-366 ms range. Memory transfer throughput over these connections is measured using iperf, and file IO throughput on host systems is measured using xddprof. We consider two file system configurations: Lustre over an IB network and xfs over an SSD connected to the PCI bus. Files are transferred using xdd across these connections, and the transfer rates are measured; the results indicate the need to jointly optimize the connection and host file IO parameters to achieve peak transfer rates. In particular, these measurements indicate that (i) the peak file transfer rate is lower than the peak connection and host IO throughput, in some cases by as much as 50% or more, (ii) xdd request sizes that achieve peak throughput for host file IO do not necessarily lead to peak file transfer rates, and (iii) parallelism in host IO and TCP transport does not always improve the file transfer rates.
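Observation (i) above amounts to a pipeline bound: the end-to-end transfer rate cannot exceed the slower of the network and host-IO stages, and in practice falls short of that bound. The back-of-the-envelope model below only illustrates this reasoning; the 0.5 efficiency factor is an assumed placeholder echoing the "as much as 50%" finding, not a measured value:

```python
# Simple bound model for wide-area file transfers: the slower stage dominates.
# Illustrative only; not the paper's measurement methodology.

def peak_bound_gbps(network_gbps, host_io_gbps):
    """Upper bound on the end-to-end file transfer rate."""
    return min(network_gbps, host_io_gbps)

def expected_rate_gbps(network_gbps, host_io_gbps, efficiency=0.5):
    """Observed rate modeled as a fraction of the bound (assumed efficiency)."""
    return efficiency * peak_bound_gbps(network_gbps, host_io_gbps)
```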
De Paepe, Domien; Coudijzer, Katleen; Noten, Bart; Valkenborg, Dirk; Servaes, Kelly; De Loose, Marc; Diels, Ludo; Voorspoels, Stefan; Van Droogenbroeck, Bart
2015-04-15
In this study, the advantages and disadvantages of the innovative, low-oxygen spiral-filter press system were studied in comparison with the belt press, commonly applied in small and medium-sized enterprises for the production of cloudy apple juice. On the basis of equivalent throughput, a higher juice yield could be achieved with the spiral-filter press. A more turbid juice with a higher content of suspended solids could also be produced. The avoidance of enzymatic browning during juice extraction led to an attractive yellowish juice with an elevated phenolic content. Moreover, it was found that juice produced with the spiral-filter press demonstrates a higher retention of phenolic compounds during the downstream processing steps and storage. The results demonstrate the advantage of the spiral-filter press in comparison with the belt press in the production of a high-quality cloudy apple juice rich in phenolic compounds, without the use of oxidation-inhibiting additives. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ultrawidefield microscope for high-speed fluorescence imaging and targeted optogenetic stimulation.
Werley, Christopher A; Chien, Miao-Ping; Cohen, Adam E
2017-12-01
The rapid increase in the number and quality of fluorescent reporters and optogenetic actuators has yielded a powerful set of tools for recording and controlling cellular state and function. To achieve the full benefit of these tools requires improved optical systems with high light collection efficiency, high spatial and temporal resolution, and patterned optical stimulation, in a wide field of view (FOV). Here we describe our 'Firefly' microscope, which achieves these goals in a Ø6 mm FOV. The Firefly optical system is optimized for simultaneous photostimulation and fluorescence imaging in cultured cells. All but one of the optical elements are commercially available, yet the microscope achieves 10-fold higher light collection efficiency at its design magnification than the comparable commercially available microscope using the same objective. The Firefly microscope enables all-optical electrophysiology ('Optopatch') in cultured neurons with a throughput and information content unmatched by other neuronal phenotyping systems. This capability opens possibilities in disease modeling and phenotypic drug screening. We also demonstrate applications of the system to voltage and calcium recordings in human induced pluripotent stem cell derived cardiomyocytes.
Kuroda, Kouichi; Ueda, Mitsuyoshi
2016-02-01
Butanol is an attractive alternative energy fuel owing to several advantages over ethanol. Among the microbial hosts for biobutanol production, yeast Saccharomyces cerevisiae has a great potential as a microbial host due to its powerful genetic tools, a history of successful industrial use, and its inherent tolerance to higher alcohols. Butanol production by S. cerevisiae was first attempted by transferring the 1-butanol-producing metabolic pathway from native microorganisms or using the endogenous Ehrlich pathway for isobutanol synthesis. Utilizing alternative enzymes with higher activity, eliminating competitive pathways, and maintaining cofactor balance achieved significant improvements in butanol production. Meeting future challenges, such as enhancing butanol tolerance and implementing a comprehensive strategy by high-throughput screening, would further elevate the biobutanol-producing ability of S. cerevisiae toward an ideal microbial cell factory exhibiting high productivity of biobutanol. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Novel Byte-Substitution Architecture for the AES Cryptosystem.
Hossain, Fakir Sharif; Ali, Md Liakot
2015-01-01
The performance of the Advanced Encryption Standard (AES) mainly depends on speed, area and power. The S-box is an important factor that affects the performance of AES on each of these counts. A number of techniques have been presented in the literature that attempt to improve the performance of the S-box byte substitution. This paper proposes a new S-box architecture that is ultra-low-power, robustly parallel and highly area-efficient. The architecture is discussed for both CMOS and FPGA platforms, and a pipelined version of the proposed S-box is presented for further time savings and higher throughput along with higher hardware resource utilization. A performance analysis and comparison of the proposed architecture with existing techniques is also conducted. The results of the comparison verify that the proposed architecture outperforms existing ones in terms of power, delay and size.
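For readers unfamiliar with the byte substitution being optimized: FIPS-197 defines the AES S-box as the multiplicative inverse in GF(2^8) followed by a fixed affine transform. A plain software reference model of that definition (not the paper's hardware architecture) is:

```python
# Software reference model of the standard AES S-box (FIPS-197): each byte is
# replaced by its GF(2^8) multiplicative inverse, then an affine transform.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B            # reduce by the AES field polynomial
        b >>= 1
    return p

def sbox(x):
    """AES S-box: GF(2^8) inverse (0 maps to 0) plus the affine transform."""
    inv = next(y for y in range(256) if gf_mul(x, y) == 1) if x else 0
    out = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8)) ^
               (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out
```

Hardware architectures like the one proposed trade off how this same mapping is realized (lookup table, composite-field logic, pipelining), while the input/output behavior stays fixed by the standard.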
Nanosurveyor: a framework for real-time data processing
Daurer, Benedikt J.; Krishnan, Hari; Perciano, Talita; ...
2017-01-31
Background: The ever-improving brightness of accelerator-based sources is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality. Results: Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments through efficient kernels and load-balanced workflows that are scalable by design. We describe the streamlined processing pipeline of ptychography data analysis. Conclusions: The pipeline provides throughput, compression, and resolution, as well as rapid feedback to the microscope operators.
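The load-balanced streaming idea can be sketched with a worker pool pulling frames from a shared queue; this is a minimal, hypothetical illustration of the pattern, not Nanosurveyor's code:

```python
# Minimal streaming worker-pool pipeline: frames are queued, workers pull and
# process them, and results are collected by frame id. Illustrative only.
import queue
import threading

def run_pipeline(frames, process, n_workers=4):
    """Feed frames through a worker pool; return results keyed by frame id."""
    tasks, results = queue.Queue(), {}
    lock = threading.Lock()

    def worker():
        while True:
            item = tasks.get()
            if item is None:          # poison pill: shut this worker down
                return
            frame_id, data = item
            out = process(data)
            with lock:
                results[frame_id] = out

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for frame_id, data in enumerate(frames):
        tasks.put((frame_id, data))
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    return results
```

The shared queue gives the load balancing: whichever worker is free takes the next frame, so slow frames do not stall the rest of the stream.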
Hunt, Rodney D.; Collins, Jack L.; Johnson, Jared A.; ...
2017-03-17
Hundreds of grams of calcined cerium dioxide (CeO2) microspheres were produced using the internal gelation process, with a focus on the 75–150 µm and <75 µm diameter sizes. To achieve these small sizes, a modified internal gelation system was employed, which utilized a two-fluid nozzle, two static mixers for turbulent flow, and 2-ethyl-1-hexanol as the medium for gel formation at 333–338 K. This effort generated over 400 g of 75–150 µm and 300 g of <75 µm CeO2 microspheres. The typical product yields for the 75–150 µm and <75 µm microspheres that were collected and processed were 72 and 99%, respectively, with a typical throughput of 66–73 g of CeO2 microspheres per test, out of a maximum possible 78.6 g of CeO2 per test. The higher yield of very small cerium spheres led to challenges and modifications, which are discussed in detail. Finally, as expected, when the <75 µm microspheres were targeted, losses to the system increased significantly.
The impact of the condenser on cytogenetic image quality in digital microscope system.
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Optimizing the operational parameters of a digital microscope system is an important way to acquire high-quality cytogenetic images and facilitate the process of karyotyping, so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validation, through objective evaluation of a resolution test chart and subjective observation of large numbers of specimens, were conducted. The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%-70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice.
Han, Xiaoping; Chen, Haide; Huang, Daosheng; Chen, Huidong; Fei, Lijiang; Cheng, Chen; Huang, He; Yuan, Guo-Cheng; Guo, Guoji
2018-04-05
Human pluripotent stem cells (hPSCs) provide powerful models for studying cellular differentiations and unlimited sources of cells for regenerative medicine. However, a comprehensive single-cell level differentiation roadmap for hPSCs has not been achieved. We use high throughput single-cell RNA-sequencing (scRNA-seq), based on optimized microfluidic circuits, to profile early differentiation lineages in the human embryoid body system. We present a cellular-state landscape for hPSC early differentiation that covers multiple cellular lineages, including neural, muscle, endothelial, stromal, liver, and epithelial cells. Through pseudotime analysis, we construct the developmental trajectories of these progenitor cells and reveal the gene expression dynamics in the process of cell differentiation. We further reprogram primed H9 cells into naïve-like H9 cells to study the cellular-state transition process. We find that genes related to hemogenic endothelium development are enriched in naïve-like H9. Functionally, naïve-like H9 show higher potency for differentiation into hematopoietic lineages than primed cells. Our single-cell analysis reveals the cellular-state landscape of hPSC early differentiation, offering new insights that can be harnessed for optimization of differentiation protocols.
Strategies for high-throughput focused-beam ptychography
Jacobsen, Chris; Deng, Junjing; Nashed, Youssef
2017-08-08
X-ray ptychography is being utilized for a wide range of imaging experiments with a resolution beyond the limit of the X-ray optics used. Introducing a parameter for the ptychographic resolution gain G_p (the ratio of the beam size to the achieved pixel size in the reconstructed image), strategies for data sampling and for increasing imaging throughput when the specimen is at the focus of an X-ray beam are considered. The tradeoffs between large and small illumination spots are then examined.
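The resolution-gain parameter as defined in the abstract is a simple ratio; the helper below just restates it, and the example numbers are purely illustrative, not from the paper:

```python
# G_p = (beam size) / (reconstructed pixel size), per the abstract's definition.

def resolution_gain(beam_size_nm, pixel_size_nm):
    """Ptychographic resolution gain: how far the reconstruction beats the beam."""
    return beam_size_nm / pixel_size_nm
```

A larger illumination spot at fixed pixel size means a higher G_p must be achieved by the reconstruction, which is the tradeoff the paper examines.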
Direct assembling methodologies for high-throughput bioscreening
Rodríguez-Dévora, Jorge I.; Shi, Zhi-dong; Xu, Tao
2012-01-01
Over the last few decades, high-throughput (HT) bioscreening, a technique that allows rapid screening of biochemical compound libraries against biological targets, has been widely used in drug discovery, stem cell research, development of new biomaterials, and genomics research. To achieve these goals, scaffold-free (or direct) assembly of the biological entities of interest has become critical. Appropriate assembling methodologies are required to build an efficient HT bioscreening platform. The development of contact and non-contact assembling systems as a practical solution has been driven by a variety of essential attributes of the bioscreening system, such as miniaturization, high throughput, and high precision. The present article reviews recent progress on these assembling technologies utilized for the construction of HT bioscreening platforms. PMID:22021162
NASA Technical Reports Server (NTRS)
Ruedger, W. H.; Aanstoos, J. V.; Snyder, W. E.
1982-01-01
The NASA NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. This volume addresses the impact of data set selection on the data formatting required for efficient telemetering of the acquired satellite sensor data. More specifically, the FILE algorithm developed by Martin-Marietta provides a means of determining which pixels from the data stream to retain, effecting an improvement in the achievable system throughput. It will be seen that, based on the lack of statistical stationarity in cloud cover spatial distribution, periods exist where data acquisition rates exceed the throughput capability. The study therefore addresses various approaches to data compression and truncation as applicable to this sensor mission.
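As one generic example of the kind of compression such a study might consider (an illustration only; not the FILE algorithm or any scheme from the report), a binary cloud mask with long homogeneous runs compresses well under run-length encoding:

```python
# Run-length encoding of a 0/1 mask: long cloudy or clear stretches collapse
# into (value, run_length) pairs. Illustrative, not from the report.

def rle_encode(bits):
    """Encode a sequence of 0/1 values as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs
```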
Ultrafast Microfluidic Cellular Imaging by Optical Time-Stretch.
Lau, Andy K S; Wong, Terence T W; Shum, Ho Cheung; Wong, Kenneth K Y; Tsia, Kevin K
2016-01-01
There is an unmet need in biomedicine for measuring a multitude of parameters of individual cells (i.e., high content) in a large population efficiently (i.e., high throughput). This is particularly driven by the emerging interest in bringing Big-Data analysis into this arena, encompassing pathology, drug discovery, rare cancer cell detection, and emulsion microdroplet assays, to name a few. This momentum is particularly evident in recent advancements in flow cytometry. These include scaling of the number of measurable colors from the labeled cells and incorporation of imaging capability to access the morphological information of the cells. However, an unspoken predicament appears in the current technologies: higher content comes at the expense of lower throughput, and vice versa. For example, to access additional spatial information of individual cells, imaging flow cytometers only achieve an imaging throughput of ~1000 cells/s, orders of magnitude slower than non-imaging flow cytometers. In this chapter, we introduce an entirely new imaging platform, namely optical time-stretch microscopy, for ultrahigh-speed, high-contrast, label-free single-cell imaging and analysis (in an ultrafast microfluidic flow up to 10 m/s) with an imaging line-scan rate as high as tens of MHz. Based on this technique, not only can morphological information of the individual cells be obtained in an ultrafast manner, but quantitative evaluation of cellular information (e.g., cell volume, mass, refractive index, stiffness, membrane tension) at the nanometer scale based on the optical phase is also possible. The technology can also be integrated with conventional fluorescence measurements widely adopted in non-imaging flow cytometers. Therefore, these two combinatorial and complementary measurement capabilities form, in the long run, an attractive platform for addressing the pressing need for expanding the "parameter space" in high-throughput single-cell analysis.
This chapter provides the general guidelines of constructing the optical system for time stretch imaging, fabrication and design of the microfluidic chip for ultrafast fluidic flow, as well as the image acquisition and processing.
Microfluidic Imaging Flow Cytometry by Asymmetric-detection Time-stretch Optical Microscopy (ATOM).
Tang, Anson H L; Lai, Queenie T K; Chung, Bob M F; Lee, Kelvin C M; Mok, Aaron T Y; Yip, G K; Shum, Anderson H C; Wong, Kenneth K Y; Tsia, Kevin K
2017-06-28
Scaling the number of measurable parameters, which allows for multidimensional data analysis and thus higher-confidence statistical results, has been the main trend in the advanced development of flow cytometry. Notably, adding high-resolution imaging capabilities allows for the complex morphological analysis of cellular/sub-cellular structures. This is not possible with standard flow cytometers. However, it is valuable for advancing our knowledge of cellular functions and can benefit life science research, clinical diagnostics, and environmental monitoring. Incorporating imaging capabilities into flow cytometry compromises the assay throughput, primarily due to the limitations on speed and sensitivity in the camera technologies. To overcome this speed or throughput challenge facing imaging flow cytometry while preserving the image quality, asymmetric-detection time-stretch optical microscopy (ATOM) has been demonstrated to enable high-contrast, single-cell imaging with sub-cellular resolution, at an imaging throughput as high as 100,000 cells/s. Based on the imaging concept of conventional time-stretch imaging, which relies on all-optical image encoding and retrieval through the use of ultrafast broadband laser pulses, ATOM further advances imaging performance by enhancing the image contrast of unlabeled/unstained cells. This is achieved by accessing the phase-gradient information of the cells, which is spectrally encoded into single-shot broadband pulses. Hence, ATOM is particularly advantageous in high-throughput measurements of single-cell morphology and texture - information indicative of cell types, states, and even functions. Ultimately, this could become a powerful imaging flow cytometry platform for the biophysical phenotyping of cells, complementing the current state-of-the-art biochemical-marker-based cellular assay. 
This work describes a protocol to establish the key modules of an ATOM system (from optical frontend to data processing and visualization backend), as well as the workflow of imaging flow cytometry based on ATOM, using human cells and micro-algae as the examples.
A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β-O-4 linkages. Current thioacidolysis methods are low-throughput, as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment, uses a non-chlorinated organic solvent, and is tailored for higher-throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1-2 mg of biomass per assay and has been quantified using fast-GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentration and drying, have been eliminated to help sustain the consecutive day-to-day analyses needed for high sample throughput in large screening experiments without loss of quantitation accuracy. As a result, the method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses.
A thioacidolysis method tailored for higher-throughput quantitative analysis of lignin monomers
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.; ...
2016-09-14
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Emily J.; Habas, Susan E.; Wang, Lu
2016-11-07
The translation of batch chemistries to high-throughput continuous flow methods addresses scaling, automation, and reproducibility concerns associated with the implementation of colloidally prepared nanoparticle (NP) catalysts for industrial catalytic processes. Nickel NPs were synthesized by the high-temperature amine reduction of a Ni2+ precursor using a continuous millifluidic (mF) flow method, achieving yields greater than 60%. The resulting Ni NP catalysts were compared against catalysts prepared in a batch reaction under conditions analogous to the continuous flow conditions with respect to total reaction volume, time, and temperature, and by traditional incipient wetness (IW) impregnation, for the hydrodeoxygenation (HDO) of guaiacol under ex situ catalytic fast pyrolysis conditions. Compared to the IW method, the colloidally prepared NPs displayed increased morphological control and narrowed size distributions, and the NPs prepared by both colloidal methods showed similar size, shape, and crystallinity. The Ni NP catalyst synthesized by the continuous flow method exhibited similar H-adsorption site densities, site-time yields, and selectivities towards deoxygenated products as compared to the analogous batch reaction, and outperformed the IW catalyst with respect to higher selectivity to lower oxygen content products and a 6.9-fold slower deactivation rate. These results demonstrate the utility of synthesizing colloidal Ni NP catalysts using continuous flow methods while maintaining the catalytic properties displayed by the batch equivalent. Finally, this methodology can be extended to other catalytically relevant base metals for the high-throughput synthesis of metal NPs for the catalytic production of biofuels.
High-Throughput Cancer Cell Sphere Formation for 3D Cell Culture.
Chen, Yu-Chih; Yoon, Euisik
2017-01-01
Three-dimensional (3D) cell culture is critical in studying cancer pathology and drug response. Though 3D cancer sphere culture can be performed in low-adherent dishes or well plates, the unregulated cell aggregation may skew the results. On the contrary, microfluidic 3D culture allows precise control of cell microenvironments and provides higher throughput by orders of magnitude. In this chapter, we look into engineering innovations in a microfluidic platform for high-throughput cancer cell sphere formation and review the implementation methods in detail.
NASA Astrophysics Data System (ADS)
Shah, Amy T.; Cannon, Taylor M.; Higginbotham, Jim N.; Skala, Melissa C.
2016-02-01
Tumor heterogeneity poses challenges for devising optimal treatment regimens for cancer patients. In particular, subpopulations of cells can escape treatment and cause relapse. There is a need for methods to characterize tumor heterogeneity of treatment response. Cell metabolism is altered in cancer (Warburg effect), and cells use the autofluorescent cofactor NADH in numerous metabolic reactions. Previous studies have shown that microscopy measurements of NADH autofluorescence are sensitive to treatment response in breast cancer, and these techniques typically assess hundreds of cells per group. An alternative approach is flow cytometry, which measures fluorescence on a single-cell level and is attractive for characterizing tumor heterogeneity because it achieves high-throughput analysis and cell sorting in millions of cells per group. Current applications for flow cytometry rely on staining with fluorophores. This study characterizes flow cytometry measurements of NADH autofluorescence in breast cancer cells. Preliminary results indicate flow cytometry of NADH is sensitive to cyanide perturbation, which inhibits oxidative phosphorylation, in nonmalignant MCF10A cells. Additionally, flow cytometry is sensitive to higher NADH intensity for HER2-positive SKBr3 cells compared with triple-negative MDA-MB-231 cells. These results agree with previous microscopy studies. Finally, a mixture of SKBr3 and MDA-MB-231 cells were sorted into each cell type using NADH intensity. Sorted cells were cultured, and microscopy validation showed the expected morphology for each cell type. Ultimately, flow cytometry could be applied to characterize tumor heterogeneity based on treatment response and sort cell subpopulations based on metabolic profile. These achievements could enable individualized treatment strategies and improved patient outcomes.
Implementation of a pulse coupled neural network in FPGA.
Waldemark, J; Millberg, M; Lindblad, T; Waldemark, K; Becanovic, V
2000-06-01
The Pulse Coupled Neural Network (PCNN) is a biologically inspired neural network that can be used in various image analysis applications, e.g. time-critical image pre-processing tasks such as segmentation and filtering. A VHDL implementation of the PCNN targeting FPGAs was undertaken, and the results are presented here. The implementation contains many interesting features. By pipelining the PCNN structure, a very high throughput of 55 million neuron iterations per second could be achieved. By making the coefficients re-configurable during operation, a complete recognition system could be implemented on one, or perhaps two, chips. Reconsidering the ranges and resolutions of the constants may save a lot of hardware, since higher resolution requires larger multipliers, adders, memories, etc.
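The iteration that each hardware pipeline stage computes follows the standard PCNN update equations (feeding field, linking field, internal activity, dynamic threshold). As a rough software reference, here is a minimal NumPy sketch of one iteration; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def pcnn_step(S, F, L, U, Y, T, a_f=0.1, a_l=0.3, a_t=0.2,
              beta=0.5, V_f=0.5, V_l=0.2, V_t=20.0):
    """One iteration of the standard PCNN model (illustrative constants).
    S: stimulus image; F/L: feeding/linking fields; Y: binary pulses;
    T: dynamic threshold.  Returns the updated state."""
    # 8-neighbour sum of the previous pulse image (toroidal edges for brevity)
    W = sum(np.roll(np.roll(Y, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    F = np.exp(-a_f) * F + V_f * W + S   # feeding field
    L = np.exp(-a_l) * L + V_l * W       # linking field
    U = F * (1.0 + beta * L)             # internal activity
    Y = (U > T).astype(float)            # pulse generation
    T = np.exp(-a_t) * T + V_t * Y       # threshold decay and reset
    return F, L, U, Y, T
```

In hardware, each of these update lines maps naturally onto a pipeline stage, which is what makes a figure like 55 million neuron iterations per second plausible.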
Parmeggiani, Fabio; Lovelock, Sarah L.; Weise, Nicholas J.; Ahmed, Syed T.
2015-01-01
The synthesis of substituted d-phenylalanines in high yield and excellent optical purity, starting from inexpensive cinnamic acids, has been achieved with a novel one-pot approach by coupling phenylalanine ammonia lyase (PAL) amination with a chemoenzymatic deracemization (based on stereoselective oxidation and nonselective reduction). A simple high-throughput solid-phase screening method has also been developed to identify PALs with higher rates of formation of non-natural d-phenylalanines. The best variants were exploited in the chemoenzymatic cascade, thus increasing the yield and ee value of the d-configured product. Furthermore, the system was extended to the preparation of those l-phenylalanines which are obtained with a low ee value using PAL amination. PMID:27478261
Parmeggiani, Fabio; Lovelock, Sarah L; Weise, Nicholas J; Ahmed, Syed T; Turner, Nicholas J
2015-04-07
The synthesis of substituted d-phenylalanines in high yield and excellent optical purity, starting from inexpensive cinnamic acids, has been achieved with a novel one-pot approach by coupling phenylalanine ammonia lyase (PAL) amination with a chemoenzymatic deracemization (based on stereoselective oxidation and nonselective reduction). A simple high-throughput solid-phase screening method has also been developed to identify PALs with higher rates of formation of non-natural d-phenylalanines. The best variants were exploited in the chemoenzymatic cascade, thus increasing the yield and ee value of the d-configured product. Furthermore, the system was extended to the preparation of those l-phenylalanines which are obtained with a low ee value using PAL amination.
Analysis and an image recovery algorithm for ultrasonic tomography system
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1994-01-01
The problem of ultrasonic reflectivity tomography is similar to that of a spotlight-mode aircraft Synthetic Aperture Radar (SAR) system. The analysis of a circular-path spotlight-mode SAR in this paper provides insight into the system characteristics. It indicates that such a system, when operated over a wide bandwidth, is capable of achieving the ultimate resolution: one quarter of the wavelength of the carrier frequency. An efficient processing algorithm based on the exact two-dimensional spectrum is presented. Simulation results indicate that the impulse responses meet the predicted resolution performance. Compared to an algorithm previously developed for ultrasonic reflectivity tomography, the throughput rate of this algorithm is about ten times higher.
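The quoted resolution bound is easy to make concrete: for a carrier frequency f in a medium with sound speed c, the wavelength is λ = c/f, and the claimed ultimate resolution is λ/4. A small sketch with assumed values (5 MHz carrier, soft-tissue sound speed; neither number is from the paper):

```python
# Illustrative check of the quarter-wavelength resolution bound
# (carrier frequency and sound speed are assumed, not from the paper).
SPEED_OF_SOUND = 1540.0       # m/s, typical soft-tissue value
carrier_hz = 5.0e6            # assumed 5 MHz ultrasound carrier

wavelength = SPEED_OF_SOUND / carrier_hz     # 0.308 mm
ultimate_resolution = wavelength / 4.0       # 77 micrometres
print(f"lambda = {wavelength * 1e3:.3f} mm, "
      f"lambda/4 = {ultimate_resolution * 1e6:.1f} um")
```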
Throughput Maximization for Sensor-Aided Cognitive Radio Networks with Continuous Energy Arrivals
Nguyen, Thanh-Tung; Koo, Insoo
2015-01-01
We consider a Sensor-Aided Cognitive Radio Network (SACRN) in which sensors capable of harvesting energy are distributed throughout the network to support secondary transmitters in sensing licensed channels, in order to improve both energy and spectral efficiency. Harvesting ambient energy is one of the most promising solutions to mitigate energy deficiency, prolong device lifetime, and partly reduce the battery size of devices. So far, many works on SACRNs have considered a single secondary user that harvests energy over the whole slot, and have focused on short-term throughput. In this paper, we consider two types of energy harvesting sensor nodes (EHSN): type-I sensor nodes harvest ambient energy over the whole slot duration, whereas type-II sensor nodes only harvest energy after carrying out spectrum sensing. We also investigate long-term throughput over the scheduling window, and formulate the throughput maximization problem by considering the energy-neutral operation conditions of type-I and type-II sensors and the target detection probability. Through simulations, it is shown that the sensing energy consumption of all sensor nodes can be efficiently managed with the proposed scheme to achieve optimal long-term throughput in the window. PMID:26633393
Duplex-imprinted nano well arrays for promising nanoparticle assembly
NASA Astrophysics Data System (ADS)
Li, Xiangping; Manz, Andreas
2018-02-01
A large-area nano-duplex-imprint technique using natural cicada wings as stamps is presented in this contribution. The glassy wings of the cicada, which are abundant in nature, exhibit strikingly interesting nanopillar structures over their membrane. This technique, which performs well despite the nonplanar surface of the wings, combines both top-down and bottom-up nanofabrication and moves micro-nanofabrication from the cleanroom environment to the bench. Two different materials, dicing tape with an acrylic layer and a UV optical adhesive, are used to make replications at the same time, thus achieving duplex imprinting. By speeding up the fabrication process and achieving higher throughput, this contribution points toward large-volume commercial manufacturing of these nanostructured elements. The contact angle of the replicated nanowell arrays was measured before and after oxygen plasma treatment. Gold nanoparticles (50 nm) were used to test how the nanoparticles behaved on the untreated and plasma-treated replica surfaces. The experiments show that promising nanoparticle self-assembly can be obtained.
NASA Astrophysics Data System (ADS)
Aldhaibani, Jaafar A.; Ahmad, R. B.; Yahya, A.; Azeez, Suzan A.
2015-05-01
Wireless multi-hop relay networks have become a very important technology in mobile communications. These networks ensure high throughput and coverage extension at low cost. However, the poor capacity at cell edges cannot meet the growing demand for high capacity and throughput irrespective of the user's placement in the cellular network. In this paper we propose an optimal placement of the relay node that provides the maximum achievable rate at the users and enhances the throughput and coverage in the cell-edge region. The proposed scheme is based on the outage probability at the users and takes into account the interference between nodes. Numerical analyses along with simulation results indicate an improvement of about 40% in capacity for users at the cell edge relative to the overall cell capacity.
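An outage-based placement criterion of this kind can be sketched numerically. For a Rayleigh-fading link, the outage probability at target rate R is P_out = 1 - exp(-(2^R - 1)/γ̄), where γ̄ is the mean SNR. The sketch below sweeps the relay position along the source-destination line of a two-hop decode-and-forward link and picks the position with the lowest end-to-end outage; the path-loss model and all numbers are assumptions for illustration, not the paper's system model, and inter-node interference is ignored:

```python
import math

def outage_prob(rate, mean_snr):
    """Rayleigh-fading outage at target rate R (bits/s/Hz):
    P_out = P[log2(1 + SNR) < R] = 1 - exp(-(2**R - 1) / mean_snr)."""
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / mean_snr)

def best_relay_position(d_total=1000.0, rate=1.0, snr_at_1m=3e10, alpha=3.5):
    """Sweep candidate relay positions on the source-destination line and
    return (position, outage) minimising end-to-end outage of a two-hop
    DF link.  Path-loss exponent and SNR scale are illustrative."""
    best = None
    for x in range(50, 951, 10):                    # candidate positions in m
        snr1 = snr_at_1m / x ** alpha               # source -> relay hop
        snr2 = snr_at_1m / (d_total - x) ** alpha   # relay -> destination hop
        # a DF link is in outage if either hop fails to support the rate
        p_out = 1.0 - (1.0 - outage_prob(rate, snr1)) * (1.0 - outage_prob(rate, snr2))
        if best is None or p_out < best[1]:
            best = (x, p_out)
    return best
```

With the symmetric parameters used here, the sweep lands at the mid-path point, as expected; asymmetric transmit powers or interference would shift the optimum.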
McDermott, W R; Tri, J L; Mitchell, M P; Levens, S P; Wondrow, M A; Huie, L M; Khandheria, B K; Gilbert, B K
1999-01-01
A high data rate terrestrial and satellite network was implemented to transfer medical images and data. This article describes the optimization of the workstations and switching equipment incorporated into the network. Topics discussed include tuning of the network software; configuration of the Sun Microsystems workstations and the FORE Systems asynchronous transfer mode (ATM) switches; and the throughput results of two telemedicine experiments undertaken by Mayo's physician staff. The technical staff was successful in achieving the data throughput needed by the telemedicine software; particularly important was the proper determination of peak throughput and TCP window sizes to ensure optimum use of the resources available on the Sun Microsystems and Hewlett Packard workstations.
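The "proper determination of peak throughput and TCP window sizes" comes down to the bandwidth-delay product: a TCP connection can keep at most one window of data in flight per round trip, so the window must cover bandwidth × RTT. A sketch with illustrative satellite-link numbers (not the article's measurements):

```python
def tcp_window_bytes(link_bps, rtt_s):
    """Minimum TCP window (in bytes) needed to keep a path full:
    the bandwidth-delay product, BDP = bandwidth x round-trip time."""
    return link_bps * rtt_s / 8.0

# Illustrative figures (not the article's measurements): a 155 Mbit/s
# ATM link over a geostationary satellite with ~550 ms round-trip time.
bdp = tcp_window_bytes(155e6, 0.550)
print(f"required window ~ {bdp / 1e6:.2f} MB")  # far above the 64 KB default
```

A default 64 KB window on such a path would idle the link most of each round trip, which is why window tuning dominated the throughput results.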
New transmission scheme to enhance throughput of DF relay network using rate and power adaptation
NASA Astrophysics Data System (ADS)
Taki, Mehrdad; Heshmati, Milad
2017-09-01
This paper presents a new transmission scheme for a decode-and-forward (DF) relay network using continuous power adaptation, while independent average power constraints are provisioned for each node. To gain analytical insight, the achievable throughputs are analysed under continuous adaptation of the rates and powers. As shown by numerical evaluations, continuous power adaptation considerably outperforms the case where constant powers are used. Also, for practical systems, a new throughput-maximised transmission scheme is developed using discrete rate adaptation (adaptive modulation and coding) and continuous transmission power adaptation. First a 2-hop relay network is considered, and then the scheme is extended to an N-hop network. Numerical evaluations show the efficiency of the designed schemes.
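The throughput structure of an N-hop DF chain can be sketched: with adaptive modulation and coding, each hop supports the largest discrete rate below its capacity, and the end-to-end rate is set by the bottleneck hop, since every relay must decode. The MCS rate table below is hypothetical, not the paper's:

```python
import math

# Discrete rates of a hypothetical adaptive modulation-and-coding table
# (bits/s/Hz); the paper's actual rate set is not specified in this abstract.
MCS_RATES = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.5, 6.0]

def hop_rate(snr):
    """Largest MCS rate not exceeding the hop's Shannon capacity."""
    cap = math.log2(1.0 + snr)
    feasible = [r for r in MCS_RATES if r <= cap]
    return max(feasible) if feasible else 0.0

def df_throughput(hop_snrs):
    """End-to-end rate of an N-hop decode-and-forward chain: the
    bottleneck hop decides, since every relay must decode the message."""
    return min(hop_rate(s) for s in hop_snrs)
```

Power adaptation, as studied in the paper, amounts to shifting SNR toward the bottleneck hop to raise this minimum under the per-node average power constraints.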
Davenport, Paul B; Carter, Kimberly F; Echternach, Jeffrey M; Tuck, Christopher R
2018-02-01
High-reliability organizations (HROs) demonstrate unique and consistent characteristics, including operational sensitivity and control, situational awareness, hyperacute use of technology and data, and actionable process transformation. System complexity and reliance on information-based processes challenge healthcare organizations to replicate HRO processes. This article describes a healthcare organization's 3-year journey to achieve key HRO features to deliver high-quality, patient-centric care via an operations center powered by the principles of high-reliability data and software to impact patient throughput and flow.
VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm
NASA Astrophysics Data System (ADS)
Rais, Muhammad H.; Qasim, Syed M.
2010-06-01
In this paper, we present an implementation of Advanced Encryption Standard (AES) cryptographic algorithm using state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance evaluation is also done in terms of throughput and area. The design implemented on Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.
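Throughput figures such as the reported 4.34 Gbps follow directly from the clock rate and the number of cycles needed per 128-bit block. The sketch below shows the arithmetic; the assumption of an iterative one-round-per-cycle AES-128 core is ours, since the architecture is not detailed in this abstract:

```python
def aes_throughput_bps(fmax_hz, cycles_per_block, block_bits=128):
    """Steady-state throughput of a block cipher core:
    one `block_bits`-bit block every `cycles_per_block` clock cycles."""
    return fmax_hz * block_bits / cycles_per_block

# If the reported 4.34 Gbit/s comes from an iterative AES-128 core
# (10 rounds, one round per cycle -- an assumption, not stated in the
# abstract), the implied maximum clock is about 339 MHz:
implied_fmax = 4.34e9 * 10 / 128
print(f"implied fmax ~ {implied_fmax / 1e6:.0f} MHz")
```

A fully unrolled, pipelined core would instead emit one block per cycle and reach the same throughput at roughly a tenth of the clock rate, at the cost of far more slices.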
International business communications via Intelsat K-band transponders
NASA Astrophysics Data System (ADS)
Hagmann, W.; Rhodes, S.; Fang, R.
This paper discusses how the transponder throughput and the required earth station HPA power in the Intelsat Business Services Network vary as a function of coding rate and required fade margin. The results indicate that transponder throughputs of 40 to 50 Mbit/s are achievable. A comparison of time domain simulation results with results based on a straightforward link analysis shows that the link analysis results may be fairly optimistic if the satellite traveling wave tube amplifier (TWTA) is operated near saturation; however, there is good agreement for large backoffs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruger, Albert A.
2013-07-01
The current estimates and glass formulation efforts have been conservative in terms of achievable waste loadings. These formulations have been specified to ensure that the glasses are homogeneous, contain essentially no crystalline phases, are processable in joule-heated, ceramic-lined melters and meet Hanford Tank Waste Treatment and Immobilization Plant (WTP) Contract terms. The WTP's overall mission will require the immobilization of tank waste compositions that are dominated by mixtures of aluminum (Al), chromium (Cr), bismuth (Bi), iron (Fe), phosphorus (P), zirconium (Zr), and sulphur (S) compounds as waste-limiting components. Glass compositions for these waste mixtures have been developed based upon previous experience and current glass property models. Recently, DOE has initiated a testing program to develop and characterize HLW glasses with higher waste loadings and higher throughput efficiencies. Results of this work have demonstrated the feasibility of increases in waste loading from about 25 wt% to 33-50 wt% (based on oxide loading) in the glass, depending on the waste stream. In view of the importance of aluminum-limited waste streams at Hanford (and also Savannah River), the ability to achieve high waste loadings without adversely impacting melt rates has the potential for enormous cost savings from reductions in canister count and the potential for schedule acceleration. Consequently, the potential return on the investment made in the development of these enhancements is extremely favorable. Glass compositions for one of the latest Hanford HLW projected waste streams, with sulphate concentrations high enough to limit waste loading, have been successfully tested and show a previously unreported tolerance for sulphate.
Though a significant increase in waste loading for high-iron wastes has been achieved, the magnitude of the increase is not as substantial as those achieved for high-aluminum, high-chromium, high-bismuth or high-sulphur wastes. Waste processing rate increases for high-iron streams have been achieved as a combined effect of higher waste loadings and higher melt rates resulting from the new formulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruger, Albert A.
2013-01-16
NASA Astrophysics Data System (ADS)
Xiong, Yanmei; Zhang, Yuyan; Rong, Pengfei; Yang, Jie; Wang, Wei; Liu, Dingbin
2015-09-01
We developed a simple high-throughput colorimetric assay to detect glucose based on the glucose oxidase (GOx)-catalysed enlargement of gold nanoparticles (AuNPs). Compared with the currently available glucose kit method, the AuNP-based assay provides higher clinical sensitivity at lower cost, indicating its great potential to be a powerful tool for clinical screening of glucose.
20170308 - Higher Throughput Toxicokinetics to Allow ...
As part of "Ongoing EDSP Directions & Activities" I will present CSS research on high throughput toxicokinetics, including in vitro data and models to allow rapid determination of the real world doses that may cause endocrine disruption. This is a presentation as part of the U.S. Environmental Protection Agency – Japan Ministry of the Environment 12th Bilateral Meeting on Endocrine Disruption Test Methods Development.
Wu, Zhenlong; Chen, Yu; Wang, Moran; Chung, Aram J
2016-02-07
Fluid inertia, which has conventionally been neglected in microfluidics, has been gaining much attention for particle and cell manipulation because inertia-based methods inherently provide simple, passive, precise and high-throughput characteristics. In particular, the inertial approach has been applied to blood separation for various biomedical research studies, mainly using spiral microchannels. For higher throughput, parallelization is essential; however, it is difficult to realize with spiral channels because of their large two-dimensional layouts. In this work, we present a novel inertial platform for continuous sheathless particle and blood cell separation in straight microchannels containing microstructures. Microstructures within straight channels exert secondary flows to manipulate particle positions, similar to Dean flow in curved channels but with higher controllability. Through a balance between the inertial lift force and the microstructure-induced secondary flow, we deterministically position microspheres and cells based on their sizes to be separated downstream. Using our inertial platform, we successfully sorted microparticles and fractionated blood cells with high separation efficiencies, high purities and high throughputs. The inertial separation platform developed here can be operated to process diluted blood with a throughput of 10.8 mL min(-1) via radially arrayed single channels with one inlet and two rings of outlets.
Optimization of throughput in semipreparative chiral liquid chromatography using stacked injection.
Taheri, Mohammadreza; Fotovati, Mohsen; Hosseini, Seyed-Kiumars; Ghassempour, Alireza
2017-10-01
An interesting mode of chromatography for the preparation of pure enantiomers from pure racemic samples is stacked injection, a pseudocontinuous procedure. Maximum throughput and minimal production costs can be achieved by the use of the total chiral column length in this mode of chromatography. To maximize sample loading, touching bands of the two enantiomers are often automatically achieved. Conventional equations show a direct correlation between touching-band loadability and the selectivity factor of the two enantiomers. The important question for one who wants to obtain the highest throughput is "How should the different factors, including selectivity, resolution, run time, and sample loading, be optimized to save time without missing the touching-band resolution?" To answer this question, tramadol and propranolol were separated on cellulose 3,5-dimethylphenyl carbamate as two racemic mixtures with low and high solubility in the mobile phase, respectively. The mobile phase consisted of n-hexane with an alcohol modifier and diethylamine as the additive. A response surface methodology based on a central composite design was used to optimize the separation factors against the main responses. According to the stacked injection properties, two processes were investigated for maximizing throughput: one with a poorly soluble and another with a highly soluble racemic mixture. For each case, different optimization possibilities were inspected. It was revealed that resolution is a crucial response for separations of this kind. Peak area and run time are two critical parameters in the optimization of stacked injection for binary mixtures which have low solubility in the mobile phase.
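The throughput gain of stacked injection comes from cycle-time arithmetic: once injections are stacked along the column, production is limited by the injection spacing that just preserves touching-band resolution, not by the full run time of a single injection. A toy calculation (all numbers hypothetical):

```python
def production_rate(mass_per_inj_mg, cycle_time_min):
    """Production rate (mg/h): one injection's worth of purified
    racemate every cycle_time minutes, at steady state."""
    return 60.0 * mass_per_inj_mg / cycle_time_min

# Hypothetical numbers: 5 mg of racemate per injection.  Stacked mode
# injects every 4 min (the spacing that just keeps touching bands
# resolved); conventional mode waits out the full 20 min run time.
stacked = production_rate(5.0, 4.0)        # stacked-injection throughput
conventional = production_rate(5.0, 20.0)  # run-to-run throughput
```

Under these assumed numbers, stacking gives a fivefold throughput gain, which is why the optimization in the paper targets the injection interval and loading jointly.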
Characterization of lipid films by an angle-interrogation surface plasmon resonance imaging device.
Liu, Linlin; Wang, Qiong; Yang, Zhong; Wang, Wangang; Hu, Ning; Luo, Hongyan; Liao, Yanjian; Zheng, Xiaolin; Yang, Jun
2015-04-01
Surface topographies of lipid films are of significance in the analysis of the preparation of giant unilamellar vesicles (GUVs). In order to achieve accurate, high-throughput and rapid analysis of the surface topographies of lipid films, a homemade SPR imaging (SPRi) device was constructed based on the classical Kretschmann configuration and an angle-interrogation scheme. A mathematical model is developed to accurately describe the shift, including the light path in different conditions and the change of the illumination point on the CCD camera; thus an SPR curve for each sampling point can be obtained based on this calculation method. The experimental results show that the topographies of lipid films formed under distinct experimental conditions can be accurately characterized, and the measurement resolution of the lipid film thickness may reach 0.05 nm. Compared with existing SPRi devices, which realize detection by monitoring the change of the reflected-light intensity, this new SPRi system can track the change of the resonance angle over the entire sensing surface. Thus, it offers the high detection accuracy of a traditional angle-interrogation SPR sensor, with a much wider detectable range of refractive index.
2018-01-01
Effect-directed analysis (EDA) is a commonly used approach for effect-based identification of endocrine disruptive chemicals in complex (environmental) mixtures. However, for routine toxicity assessment of, for example, water samples, current EDA approaches are considered time-consuming and laborious. We achieved faster EDA and identification by downscaling of sensitive cell-based hormone reporter gene assays and increasing fractionation resolution to allow testing of smaller fractions with reduced complexity. The high-resolution EDA approach is demonstrated by analysis of four environmental passive sampler extracts. Downscaling of the assays to a 384-well format allowed analysis of 64 fractions in triplicate (or 192 fractions without technical replicates) without affecting sensitivity compared to the standard 96-well format. Through a parallel exposure method, agonistic and antagonistic androgen and estrogen receptor activity could be measured in a single experiment following a single fractionation. From 16 selected candidate compounds, identified through nontargeted analysis, 13 could be confirmed chemically and 10 were found to be biologically active, of which the most potent nonsteroidal estrogens were identified as oxybenzone and piperine. The increased fractionation resolution and the higher throughput that downscaling provides allow for future application in routine high-resolution screening of large numbers of samples in order to accelerate identification of (emerging) endocrine disruptors. PMID:29547277
Wavelength Scanning with a Tilting Interference Filter for Glow-Discharge Elemental Imaging.
Storey, Andrew P; Ray, Steven J; Hoffmann, Volker; Voronov, Maxim; Engelhard, Carsten; Buscher, Wolfgang; Hieftje, Gary M
2017-06-01
Glow discharges have long been used for depth profiling and bulk analysis of solid samples. In addition, over the past decade, several methods of obtaining lateral surface elemental distributions have been introduced, each with its own strengths and weaknesses. Common challenges for these techniques are achieving acceptable optical throughput and avoiding added instrumental complexity. Here, these problems are addressed with a tilting-filter instrument. A pulsed glow discharge is coupled to an optical system comprising an adjustable-angle tilting filter, collimating and imaging lenses, and a gated, intensified charge-coupled device (CCD) camera, which together provide surface elemental mapping of solid samples. The tilting-filter spectrometer is instrumentally simpler, produces less image distortion, and achieves higher optical throughput than a monochromator-based instrument, but has a much more limited tunable spectral range and poorer spectral resolution. As a result, the tilting-filter spectrometer is limited to single-element or two-element determinations, and only when the target spectral lines fall within an appropriate spectral range and can be spectrally discerned. Spectral interferences that result from heterogeneous impurities can be flagged and overcome by observing the spatially resolved signal response across the available tunable spectral range. The instrument has been characterized and evaluated for the spatially resolved analysis of glow-discharge emission from selected but representative samples.
The Impact of the Condenser on Cytogenetic Image Quality in Digital Microscope System
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Background: Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high-quality cytogenetic images and facilitating the process of karyotyping, so that the efficiency and accuracy of diagnosis can be improved. Objective: This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Methods: Both theoretical analysis and experimental validations, through objectively evaluating a resolution test chart and subjectively observing large numbers of specimens, were conducted. Results: The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%–70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed to achieve high-throughput continuous image scanning. Conclusions: Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging using high-throughput continuous scanning microscopes in clinical practice. PMID:23676284
A novel LTE scheduling algorithm for green technology in smart grid.
Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid
2015-01-01
Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. The SG is considered a promising solution for combining renewable energy resources and the electrical grid by creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely distribution automation (DA), distributed energy system-storage (DER) and the electric vehicle (EV), are investigated in order to study their suitability for the Long Term Evolution (LTE) network. To compensate for the weaknesses in existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on each application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and a lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7% and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively.
A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid
Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid
2015-01-01
Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. The SG is considered a promising solution for combining renewable energy resources and the electrical grid by creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely distribution automation (DA), distributed energy system-storage (DER) and the electric vehicle (EV), are investigated in order to study their suitability for the Long Term Evolution (LTE) network. To compensate for the weaknesses in existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on each application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and a lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7% and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703
The stabilisation of purified, reconstituted P-glycoprotein by freeze drying with disaccharides.
Heikal, Adam; Box, Karl; Rothnie, Alice; Storm, Janet; Callaghan, Richard; Allen, Marcus
2009-02-01
The drug efflux pump P-glycoprotein (P-gp) (ABCB1) confers multidrug resistance, a major cause of failure in the chemotherapy of tumours, exacerbated by a shortage of potent and selective inhibitors. A high throughput assay using purified P-gp to screen and characterise potential inhibitors would greatly accelerate their development. However, long-term stability of purified reconstituted ABCB1 can only be reliably achieved with storage at -80 degrees C. For example, at 20 degrees C, the activity of ABCB1 was abrogated with a half-life of <1 day. The aim of this investigation was to stabilise purified, reconstituted ABCB1 to enable storage at higher temperatures and thereby enable design of a high throughput assay system. The ABCB1 purification procedure was optimised to allow successful freeze drying by substitution of glycerol with the disaccharides trehalose or maltose. Addition of disaccharides resulted in ATPase activity being retained immediately following lyophilisation with no significant difference between the two disaccharides. However, during storage trehalose preserved ATPase activity for several months regardless of the temperature (e.g. 60% retention at 150 days), whereas ATPase activity in maltose purified P-gp was affected by both storage time and temperature. The data provide an effective mechanism for the production of resilient purified, reconstituted ABCB1.
Junqueira, João R C; de Araujo, William R; Salles, Maiara O; Paixão, Thiago R L C
2013-01-30
A simple and fast electrochemical method for quantitative analysis of picric acid explosive (nitro-explosive) based on its electrochemical reduction at copper surfaces is reported. To achieve a higher sample throughput, the electrochemical sensor was adapted in a flow injection system. Under optimal experimental conditions, the peak current response increases linearly with picric acid concentration over the range of 20-300 μmol L(-1). The repeatability of the electrode response in the flow injection analysis (FIA) configuration was evaluated as 3% (n=10), and the detection limit of the method was estimated to be 6.0 μmol L(-1) (S/N=3). The sample throughput under optimised conditions was estimated to be 550 samples h(-1). Peroxide explosives like triacetone triperoxide (TATP) and hexamethylene triperoxide diamine (HMTD) were tested as potential interfering substances for the proposed method, and no significant interference by these explosives was noticed. The proposed method has interesting analytical parameters, environmental applications, and low cost compared with other electroanalytical methods that have been reported for the quantification of picric acid. Additionally, the possibility to develop an in situ device for the detection of picric acid using a disposable sensor was evaluated. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
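As a quick illustration of the figures of merit quoted above, the S/N = 3 detection-limit convention and the per-sample cycle time implied by a 550 samples h(-1) throughput can be sketched as follows. The calibration numbers in the example are hypothetical, not taken from the paper:

```python
# Back-of-envelope checks for common flow-injection-analysis figures of merit.

def detection_limit(sd_blank, slope, k=3.0):
    """LOD = k * sd_blank / slope; k = 3 corresponds to the S/N = 3 criterion."""
    return k * sd_blank / slope

def seconds_per_sample(samples_per_hour):
    """Cycle time implied by a stated sample throughput."""
    return 3600.0 / samples_per_hour

# Hypothetical calibration: blank noise 0.002 units, sensitivity 0.001 units per umol/L.
print(detection_limit(0.002, 0.001))          # hypothetical LOD in umol/L
# 550 samples/h corresponds to roughly 6.5 s per injection.
print(round(seconds_per_sample(550), 1))      # -> 6.5
```

The second number shows why FIA suits screening: each injection occupies the detector for only a few seconds.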
Asati, Atul; Kachurina, Olga; Kachurin, Anatoly
2012-01-01
Considering the importance of ganglioside antibodies as biomarkers in various immune-mediated neuropathies and neurological disorders, we developed a high throughput multiplexing tool for the assessment of ganglioside-specific antibodies based on the Bio-Plex/Luminex platform. In this report, we demonstrate that the ganglioside high throughput multiplexing tool is robust and highly specific, and demonstrates ∼100-fold higher concentration sensitivity for IgG detection than ELISA. In addition to the ganglioside-coated array, the high throughput multiplexing tool contains beads coated with influenza hemagglutinins derived from the H1N1 A/Brisbane/59/07 and H1N1 A/California/07/09 strains. The influenza beads provide the added advantage of simultaneous detection of ganglioside- and influenza-specific antibodies, a capacity important for assaying both infectious antigen-specific and autoimmune antibodies following vaccination or disease. Taken together, these results support the potential adoption of the ganglioside high throughput multiplexing tool for measuring ganglioside antibodies in various neuropathic and neurological disorders. PMID:22952605
Control structures for high speed processors
NASA Technical Reports Server (NTRS)
Maki, G. K.; Mankin, R.; Owsley, P. A.; Kim, G. M.
1982-01-01
A special processor was designed to function as a Reed-Solomon decoder with a throughput data rate in the MHz range. This data rate is significantly greater than is possible with conventional digital architectures. To achieve this rate, the processor design includes sequential, pipelined, distributed, and parallel processing. The processor was designed using a high-level register transfer language (RTL). The RTL can be used to describe how the different processes are implemented by the hardware. One problem of special interest was the development of dependent processes, which are analogous to software subroutines. For greater flexibility, the RTL control structure was implemented in ROM. The special purpose hardware required approximately 1000 SSI and MSI components. The data rate throughput is 2.5 megabits/second. This data rate is achieved through the use of pipelined and distributed processing. This data rate can be compared with 800 kilobits/second in a recently proposed very large scale integration design of a Reed-Solomon encoder.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute for a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
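A minimal sketch of the heuristic family described here (shortest-path routing, a hottest-request-first processing order, and first-fit wavelength assignment) is shown below. The ring topology, the request format, and the use of hop count as the distance measure are illustrative assumptions rather than the paper's actual implementation:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path (hop count) on an undirected topology given as
    an adjacency dict {node: [neighbours]}."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if dst not in prev:
        return None
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def rwa_hottest_first(adj, requests, n_wavelengths):
    """Route each request on its shortest path, serve the 'hottest'
    (longest-route) requests first, and assign the first free wavelength
    that is unused on every link of the path (first-fit)."""
    used = set()          # (u, v, wavelength) with u < v
    assignments = {}
    routed = [(req, shortest_path(adj, *req)) for req in requests]
    routed = [(req, p) for req, p in routed if p]
    # Hottest-request-first: longer paths are harder to serve, so go first.
    routed.sort(key=lambda rp: len(rp[1]), reverse=True)
    for req, path in routed:
        links = [(min(u, v), max(u, v)) for u, v in zip(path, path[1:])]
        for w in range(n_wavelengths):
            if all((u, v, w) not in used for u, v in links):
                used.update((u, v, w) for u, v in links)
                assignments[req] = (path, w)
                break
    return assignments

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(rwa_hottest_first(ring, [(0, 2), (1, 3), (0, 1)], 3))
```

On this 4-node ring, the two 3-hop requests are assigned before the 1-hop request, which then takes the next free wavelength on its single link.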
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, the proposed NB-LDPC-CM scheme is better suited than its prior-art binary counterpart to the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node
NASA Astrophysics Data System (ADS)
Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi
2015-03-01
In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed the NSR-S630D, which delivers extremely accurate overlay while maintaining throughput under conditions resembling a real production environment. In addition, the NSR-S630D has been equipped with enhanced capabilities for maintaining long-term overlay stability and an improved user interface, both enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance under conditions similar to real production. In a production environment, superior overlay accuracy under high-dose conditions and high throughput are often required; therefore, we have performed several experiments under high-dose conditions to demonstrate the NSR's thermal-aberration control and world-class overlay performance. Furthermore, we introduce our new software that sustains long-term overlay performance.
Polarization masks: concept and initial assessment
NASA Astrophysics Data System (ADS)
Lam, Michael; Neureuther, Andrew R.
2002-07-01
Polarization from photomasks can be used as a new lever to improve lithographic performance in both binary and phase-shifting masks (PSMs). While PSMs manipulate the phase of light to control the temporal addition of electric field vectors, polarization masks manipulate the vector direction of electric field vectors to control the spatial addition of electric field components. This paper explores the theoretical possibilities of polarization masks, showing that it is possible to use bar structures within openings on the mask itself to polarize incident radiation. Rigorous electromagnetic scattering simulations using TEMPEST and imaging with SPLAT are used to give an initial assessment on the functionality of polarization masks, discussing the polarization quality and throughputs achieved with the masks. Openings between 1/8 and 1/3 of a wavelength provide both a low polarization ratio and good transmission. A final overall throughput of 33% - 40% is achievable, corresponding to a dose hit of 2.5x - 3x.
High-throughput NGL electron-beam direct-write lithography system
NASA Astrophysics Data System (ADS)
Parker, N. William; Brodie, Alan D.; McCoy, John H.
2000-07-01
Electron beam lithography systems have historically had low throughput. The only practical solution to this limitation is an approach using many beams writing simultaneously. For single-column multi-beam systems, including projection optics (SCALPEL and PREVAIL) and blanked aperture arrays, throughput and resolution are limited by space-charge effects. Multi-beam micro-column (one beam per column) systems are limited by the need for low-voltage operation, electrical connection density, and fabrication complexities. In this paper, we discuss a new multi-beam concept employing multiple columns, each with multiple beams, to generate a very large total number of parallel writing beams. This overcomes the limitations of space-charge interactions and low-voltage operation. We also discuss a rationale leading to the optimum number of columns and beams per column. Using this approach, we show how production throughputs >= 60 wafers per hour can be achieved at CDs
Microfluidic guillotine for single-cell wound repair studies
NASA Astrophysics Data System (ADS)
Blauch, Lucas R.; Gai, Ya; Khor, Jian Wei; Sood, Pranidhi; Marshall, Wallace F.; Tang, Sindy K. Y.
2017-07-01
Wound repair is a key feature distinguishing living from nonliving matter. Single cells are increasingly recognized to be capable of healing wounds. The lack of reproducible, high-throughput wounding methods has hindered single-cell wound repair studies. This work describes a microfluidic guillotine for bisecting single Stentor coeruleus cells in a continuous-flow manner. Stentor is used as a model due to its robust repair capacity and the ability to perform gene knockdown in a high-throughput manner. Local cutting dynamics reveals two regimes under which cells are bisected, one at low viscous stress where cells are cut with small membrane ruptures and high viability and one at high viscous stress where cells are cut with extended membrane ruptures and decreased viability. A cutting throughput up to 64 cells per minute—more than 200 times faster than current methods—is achieved. The method allows the generation of more than 100 cells in a synchronized stage of their repair process. This capacity, combined with high-throughput gene knockdown in Stentor, enables time-course mechanistic studies impossible with current wounding methods.
Handheld Fluorescence Microscopy based Flow Analyzer.
Saxena, Manish; Jayakumar, Nitin; Gorthi, Sai Siva
2016-03-01
Fluorescence microscopy has the intrinsic advantages of favourable contrast characteristics and a high degree of specificity. Consequently, it has been a mainstay in modern biological inquiry and clinical diagnostics. Despite its reliable nature, fluorescence-based clinical microscopy and diagnostics is a manual, labour-intensive and time-consuming procedure. The article outlines a cost-effective, high-throughput alternative to conventional fluorescence imaging techniques. With system-level integration of custom-designed microfluidics and optics, we demonstrate a fluorescence-microscopy-based imaging flow analyzer. Using this system we have imaged more than 2900 FITC-labeled fluorescent beads per minute. This demonstrates the high-throughput characteristics of our flow analyzer in comparison to conventional fluorescence microscopy. The issue of motion blur at high flow rates limits the achievable throughput in image-based flow analyzers. Here we address the issue by computationally deblurring the images and show that this restores the morphological features otherwise affected by motion blur. By further optimizing the concentration of the sample solution and the flow speed, along with imaging multiple channels simultaneously, the system is capable of providing a throughput of about 480 beads per second.
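Computational deblurring of motion-blurred frames, as mentioned above, is commonly done by frequency-domain (Wiener-style) deconvolution. The sketch below, using numpy, assumes a known 1-D uniform motion-blur kernel and a hypothetical noise-to-signal ratio; it is an illustration of the general technique, not the authors' pipeline:

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_to_signal=1e-3):
    """Frequency-domain Wiener deconvolution: F_hat = G * conj(H) / (|H|^2 + NSR)."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)            # blur transfer function, zero-padded
    G = np.fft.fft(blurred)
    return np.real(np.fft.ifft(G * np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)))

# Simulate a point-like fluorescent bead smeared by uniform motion blur.
signal = np.zeros(64)
signal[20] = 1.0
kernel = np.ones(5) / 5                  # 5-pixel uniform motion blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, 64)))
restored = wiener_deblur(blurred, kernel)
print(restored[20] > 0.5)                # -> True: the point source is largely recovered
```

The regularising NSR term keeps the division stable at frequencies where the blur kernel's response is near zero; in practice it would be tuned to the camera's noise level.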
NASA Astrophysics Data System (ADS)
Maher, Robert; Alvarado, Alex; Lavery, Domaniç; Bayvel, Polina
2016-02-01
Optical fibre underpins the global communications infrastructure and has experienced an astonishing evolution over the past four decades, with current commercial systems transmitting data rates in excess of 10 Tb/s over a single fibre core. The continuation of this dramatic growth in throughput has become constrained due to a power dependent nonlinear distortion arising from a phenomenon known as the Kerr effect. The mitigation of fibre nonlinearities is an area of intense research. However, even in the absence of nonlinear distortion, the practical limit on the transmission throughput of a single fibre core is dominated by the finite signal-to-noise ratio (SNR) afforded by current state-of-the-art coherent optical transceivers. Therefore, the key to maximising the number of information bits that can be reliably transmitted over a fibre channel hinges on the simultaneous optimisation of the modulation format and code rate, based on the SNR achieved at the receiver. In this work, we use an information theoretic approach based on the mutual information and the generalised mutual information to characterise a state-of-the-art dual polarisation m-ary quadrature amplitude modulation transceiver and subsequently apply this methodology to a 15-carrier super-channel to achieve the highest throughput (1.125 Tb/s) ever recorded using a single coherent receiver.
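The mutual information of a discrete constellation over an AWGN channel, central to the methodology described above, can be estimated by Monte Carlo averaging of the information density. The numpy sketch below does this for QPSK; the sample size, SNR and constellation are illustrative choices, not the paper's measured transceiver data:

```python
import numpy as np

def awgn_mi(constellation, snr_db, n=200_000, seed=1):
    """Monte Carlo estimate of the mutual information I(X;Y), in bits/symbol,
    for a uniformly used constellation over a complex AWGN channel."""
    rng = np.random.default_rng(seed)
    c = np.asarray(constellation, dtype=complex)
    c = c / np.sqrt(np.mean(np.abs(c) ** 2))      # normalise to unit average power
    sigma2 = 10 ** (-snr_db / 10)                 # total complex noise variance
    x = rng.choice(c, size=n)
    y = x + np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    # Unnormalised log-likelihoods log p(y|x'); the Gaussian constant cancels.
    logp = -np.abs(y[:, None] - c[None, :]) ** 2 / sigma2
    log_num = -np.abs(y - x) ** 2 / sigma2
    m = logp.max(axis=1)
    log_den = m + np.log(np.mean(np.exp(logp - m[:, None]), axis=1))
    return float(np.mean(log_num - log_den) / np.log(2))

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
print(round(awgn_mi(qpsk, snr_db=10), 2))  # -> 2.0 (QPSK saturates its 2 bit/symbol limit)
```

The same estimator, fed with measured received samples instead of simulated noise, is the basis of SNR-driven modulation and code-rate selection.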
Cross-Layer Scheme to Control Contention Window for Per-Flow in Asymmetric Multi-Hop Networks
NASA Astrophysics Data System (ADS)
Giang, Pham Thanh; Nakagawa, Kenji
The IEEE 802.11 MAC standard for wireless ad hoc networks adopts Binary Exponential Back-off (BEB) mechanism to resolve bandwidth contention between stations. BEB mechanism controls the bandwidth allocation for each station by choosing a back-off value from one to CW according to the uniform random distribution, where CW is the contention window size. However, in asymmetric multi-hop networks, some stations are disadvantaged in opportunity of access to the shared channel and may suffer severe throughput degradation when the traffic load is large. Then, the network performance is degraded in terms of throughput and fairness. In this paper, we propose a new cross-layer scheme aiming to solve the per-flow unfairness problem and achieve good throughput performance in IEEE 802.11 multi-hop ad hoc networks. Our cross-layer scheme collects useful information from the physical, MAC and link layers of own station. This information is used to determine the optimal Contention Window (CW) size for per-station fairness. We also use this information to adjust CW size for each flow in the station in order to achieve per-flow fairness. Performance of our cross-layer scheme is examined on various asymmetric multi-hop network topologies by using Network Simulator (NS-2).
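The BEB mechanism described above can be sketched in a few lines: on each failed transmission the contention window doubles up to a cap, and the back-off is drawn uniformly from 1..CW. The CW_MIN/CW_MAX values below are typical assumed values; the standard's actual parameters depend on the PHY:

```python
import random

CW_MIN, CW_MAX = 16, 1024   # assumed values; PHY-dependent in the standard

def beb_backoff(retries, rng=random):
    """Binary exponential back-off: CW doubles on each failed attempt,
    capped at CW_MAX; the waiting time is uniform over 1..CW slots."""
    cw = min(CW_MIN * (2 ** retries), CW_MAX)
    return rng.randint(1, cw)

random.seed(42)
mean_wait = []
for r in range(4):
    samples = [beb_backoff(r) for _ in range(10_000)]
    mean_wait.append(sum(samples) / len(samples))
print([round(m, 1) for m in mean_wait])  # mean wait roughly doubles per retry
```

This doubling is what penalises stations that collide repeatedly, and it is exactly the CW value that the proposed cross-layer scheme adjusts per station and per flow.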
Probabilistic Assessment of High-Throughput Wireless Sensor Networks
Kim, Robin E.; Mechitov, Kirill; Sim, Sung-Han; Spencer, Billie F.; Song, Junho
2016-01-01
Structural health monitoring (SHM) using wireless smart sensors (WSS) has the potential to provide rich information on the state of a structure. However, because of their distributed nature, maintaining highly robust and reliable networks can be challenging. Assessing WSS network communication quality before and after finalizing a deployment is critical to achieving a successful WSS network for SHM purposes. Early studies on WSS network reliability mostly used temporal signal indicators, composed of a smaller number of packets, to assess network reliability. However, because WSS networks for SHM purposes often require high data throughput, i.e., a larger number of packets delivered within the communication, such an approach is not sufficient. Instead, in this study, a model that can assess, probabilistically, the long-term performance of the network is proposed. The proposed model is based on readily available measured data sets that represent communication quality during high-throughput data transfer. Then, an empirical limit-state function is determined, which is further used to estimate the probability of network communication failure. Monte Carlo simulation is adopted in this paper and applied to a small-scale wireless network and a full-bridge wireless network. By performing the proposed analysis on complex sensor networks, an optimized sensor topology can be achieved. PMID:27258270
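The assessment described above reduces to estimating, by Monte Carlo sampling, the probability that a limit-state function falls below zero. The sketch below uses a hypothetical Gaussian packet-delivery-ratio model and threshold; the paper's empirical limit-state function and measured data would replace `g` and `draw_pdr`:

```python
import random

def failure_probability(limit_state, sample, n=100_000, seed=7):
    """Crude Monte Carlo estimate of P[g(X) < 0], the usual failure
    convention for limit-state functions in reliability analysis."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(sample(rng)) < 0)
    return failures / n

# Hypothetical model: communication fails when the packet-delivery ratio
# (PDR) drops below a required threshold; PDR is modelled as Gaussian.
REQUIRED_PDR = 0.95

def g(pdr):
    return pdr - REQUIRED_PDR

def draw_pdr(rng, mu=0.97, sigma=0.01):
    return rng.gauss(mu, sigma)

p_fail = failure_probability(g, draw_pdr)
print(round(p_fail, 3))  # roughly Phi((0.95 - 0.97) / 0.01) = Phi(-2), about 0.023
```

Running such an estimate per candidate link or topology is what allows the sensor layout to be optimised before deployment.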
Maher, Robert; Alvarado, Alex; Lavery, Domaniç; Bayvel, Polina
2016-01-01
Optical fibre underpins the global communications infrastructure and has experienced an astonishing evolution over the past four decades, with current commercial systems transmitting data rates in excess of 10 Tb/s over a single fibre core. The continuation of this dramatic growth in throughput has become constrained due to a power dependent nonlinear distortion arising from a phenomenon known as the Kerr effect. The mitigation of fibre nonlinearities is an area of intense research. However, even in the absence of nonlinear distortion, the practical limit on the transmission throughput of a single fibre core is dominated by the finite signal-to-noise ratio (SNR) afforded by current state-of-the-art coherent optical transceivers. Therefore, the key to maximising the number of information bits that can be reliably transmitted over a fibre channel hinges on the simultaneous optimisation of the modulation format and code rate, based on the SNR achieved at the receiver. In this work, we use an information theoretic approach based on the mutual information and the generalised mutual information to characterise a state-of-the-art dual polarisation m-ary quadrature amplitude modulation transceiver and subsequently apply this methodology to a 15-carrier super-channel to achieve the highest throughput (1.125 Tb/s) ever recorded using a single coherent receiver. PMID:26864633
You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen
2014-01-01
Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions on a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, the high-throughput experimental methods for identifying PPIs are both time-consuming and expensive. On the other hand, high-throughput PPI data are often associated with high false-positive and high false-negative rates. To address these problems, we propose a method for PPI detection that integrates biosensor-based PPI data with a novel computational model. This method was developed based on the extreme learning machine algorithm combined with a novel protein sequence descriptor. When performed on a large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy with 84.08% sensitivity at a specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The achieved results demonstrate that our approach is very promising for detecting new PPIs, and it can be a helpful supplement for biosensor-based PPI data detection.
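The extreme learning machine named above trains in closed form: hidden-layer weights are random and fixed, and only the output weights are solved by least squares. A minimal numpy sketch on toy data follows; the dataset, feature dimension and network size are illustrative stand-ins, not the paper's protein sequence descriptors:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved by least squares (no iterative training)."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy demo: separate two Gaussian blobs (stand-ins for PPI feature vectors).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (200, 8)), rng.normal(1, 0.5, (200, 8))])
y = np.array([0] * 200 + [1] * 200)
model = ELM().fit(X, y)
acc = np.mean((model.predict(X) > 0.5) == y)
print(acc > 0.95)  # -> True
```

The single linear solve is why ELMs are attractive for large-scale screening data compared with iteratively trained classifiers such as the SVM baseline.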
Optimizing the Energy and Throughput of a Water-Quality Monitoring System.
Olatinwo, Segun O; Joubert, Trudi-H
2018-04-13
This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.
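In harvest-then-transmit WIPT models of this kind, a user's achievable throughput in a frame is commonly written as tau_i * log2(1 + eta * P * h_i * g_i * tau_0 / (tau_i * sigma^2)). The sketch below uses this standard formulation as an assumed stand-in for the paper's exact model, with hypothetical channel gains, to illustrate the doubly near-far effect:

```python
import math

def user_rate(tau0, tau_i, h_i, g_i, eta=0.8, p_ap=1.0, noise=1e-9):
    """Throughput of user i in a harvest-then-transmit frame: energy
    eta * p_ap * h_i * tau0 is harvested in the downlink phase and spent
    over the uplink slot tau_i on a channel with gain g_i."""
    snr = eta * p_ap * h_i * g_i * tau0 / (tau_i * noise)
    return tau_i * math.log2(1 + snr)

# Doubly near-far effect: a far user (small downlink gain h AND small
# uplink gain g) gets a far lower rate under an equal time split.
tau0, tau = 0.5, 0.25
near = user_rate(tau0, tau, h_i=1e-3, g_i=1e-3)
far = user_rate(tau0, tau, h_i=1e-5, g_i=1e-5)
print(near > far)  # -> True
```

The sum-throughput optimisation in the paper allocates the time slots (and hence harvested energy) to counteract exactly this imbalance while preserving fairness.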
Optimizing the Energy and Throughput of a Water-Quality Monitoring System
Olatinwo, Segun O.
2018-01-01
This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near–far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity. PMID:29652866
Fragment-based drug discovery and molecular docking in drug design.
Wang, Tao; Wu, Mian-Bin; Chen, Zheng-Jie; Chen, Hua; Lin, Jian-Ping; Yang, Li-Rong
2015-01-01
Fragment-based drug discovery (FBDD) has caused a revolution in the process of drug discovery and design, with many FBDD leads having advanced into clinical trials or been approved in the past few years. Compared with traditional high-throughput screening, it displays obvious advantages such as efficiently covering chemical space and achieving higher hit rates. In this review, we focus on the most recent developments of FBDD for improving drug discovery, illustrating the process and the importance of FBDD. In particular, the computational strategies applied in the process of FBDD and molecular-docking programs are highlighted in detail. In most cases, docking is used for predicting ligand-receptor interaction modes and for hit identification by structure-based virtual screening. Successful cases of typical significance and the hits identified most recently are discussed.
An Overview of the HST Advanced Camera for Surveys' On-orbit Performance
NASA Astrophysics Data System (ADS)
Hartig, G. F.; Ford, H. C.; Illingworth, G. D.; Clampin, M.; Bohlin, R. C.; Cox, C.; Krist, J.; Sparks, W. B.; De Marchi, G.; Martel, A. R.; McCann, W. J.; Meurer, G. R.; Sirianni, M.; Tsvetanov, Z.; Bartko, F.; Lindler, D. J.
2002-05-01
The Advanced Camera for Surveys (ACS) was installed in the HST on 7 March 2002 during the fourth servicing mission to the observatory, and is now beginning science operations. The ACS provides HST observers with a considerably more sensitive, higher-resolution camera with wider field and polarimetric, coronagraphic, low-resolution spectrographic and solar-blind FUV capabilities. We review selected results of the early verification and calibration program, comparing the achieved performance with the advertised specifications. Emphasis is placed on the optical characteristics of the camera, including image quality, throughput, geometric distortion and stray-light performance. More detailed analyses of various aspects of the ACS performance are presented in other papers at this meeting. This work was supported by a NASA contract and a NASA grant.
Emerging model systems for functional genomics analysis of Crassulacean acid metabolism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartwell, James; Dever, Louisa V.; Boxall, Susanna F.
2016-04-12
Crassulacean acid metabolism (CAM) is one of three main pathways of photosynthetic carbon dioxide fixation found in higher plants. It stands out for its ability to underpin dramatic improvements in plant water use efficiency, which in turn has led to a recent renaissance in CAM research. The current ease with which candidate CAM-associated genes and proteins can be identified through high-throughput sequencing has opened up a new horizon for the development of diverse model CAM species that are amenable to genetic manipulations. The adoption of these model CAM species is underpinning rapid advances in our understanding of the complete gene set for CAM. Here, we highlight recent breakthroughs in the functional characterisation of CAM genes that have been achieved through transgenic approaches.
Parmeggiani, Fabio; Lovelock, Sarah L; Weise, Nicholas J; Ahmed, Syed T; Turner, Nicholas J
2015-04-07
The synthesis of substituted D-phenylalanines in high yield and excellent optical purity, starting from inexpensive cinnamic acids, has been achieved with a novel one-pot approach by coupling phenylalanine ammonia lyase (PAL) amination with a chemoenzymatic deracemization (based on stereoselective oxidation and nonselective reduction). A simple high-throughput solid-phase screening method has also been developed to identify PALs with higher rates of formation of non-natural D-phenylalanines. The best variants were exploited in the chemoenzymatic cascade, thus increasing the yield and ee value of the D-configured product. Furthermore, the system was extended to the preparation of those L-phenylalanines which are obtained with a low ee value using PAL amination. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Parallel processing of embossing dies with ultrafast lasers
NASA Astrophysics Data System (ADS)
Jarczynski, Manfred; Mitra, Thomas; Brüning, Stephan; Du, Keming; Jenke, Gerald
2018-02-01
Functionalization of surfaces equips products and components with new features such as hydrophilic behavior, adjustable gloss level, and light-management properties. Small feature sizes demand diffraction-limited spots and fluences adapted to different materials. With the availability of high-power, high-repetition-rate ultrashort-pulsed lasers and efficient optical processing heads delivering diffraction-limited spot sizes of around 10 μm, it is feasible to achieve fluences higher than adequate patterning requires. Hence, parallel processing is becoming of interest to increase throughput and enable mass production of micromachined surfaces. The first step on the roadmap of parallel processing for cylinder embossing dies was realized with an eight-spot processing head based on a ns-fiber laser, with passive optical beam splitting, individual spot switching by acousto-optic modulation, and advanced imaging. Patterning of cylindrical embossing dies shows a high efficiency of nearly 80%, with diffraction-limited and equally spaced spots at pitches down to 25 μm, achieved by compression using cascaded prism arrays. Because of the nanosecond laser pulses, the ablation shows the surrounding material deposition typical of a hot process. In the next step the processing head was adapted to a picosecond laser source, and the 500 W fiber laser was replaced by an ultrashort-pulsed laser with 300 W average power, 12 ps pulses, and a repetition frequency of up to 6 MHz. This paper presents details of the processing head design and an analysis of ablation rates and patterns on steel, copper and brass dies. Furthermore, it gives an outlook on scaling the parallel processing head from eight to 16 individually switched beamlets to increase processing throughput and optimize utilization of the available ultrashort-pulsed laser energy.
2017-01-01
Tight and tunable control of gene expression is a highly desirable goal in synthetic biology for constructing predictable gene circuits and achieving preferred phenotypes. Elucidating the sequence–function relationship of promoters is crucial for manipulating gene expression at the transcriptional level, particularly for inducible systems dependent on transcriptional regulators. Sort-seq methods employing fluorescence-activated cell sorting (FACS) and high-throughput sequencing allow for the quantitative analysis of sequence–function relationships in a robust and rapid way. Here we utilized a massively parallel sort-seq approach to analyze the formaldehyde-inducible Escherichia coli promoter (Pfrm) with single-nucleotide resolution. A library of mutated formaldehyde-inducible promoters was cloned upstream of gfp on a plasmid. The library was partitioned into bins via FACS on the basis of green fluorescent protein (GFP) expression level, and mutated promoters falling into each expression bin were identified with high-throughput sequencing. The resulting analysis identified two 19 base pair repressor binding sites, one upstream of the −35 RNA polymerase (RNAP) binding site and one overlapping with the −10 site, and assessed the relative importance of each position and base therein. Key mutations were identified for tuning expression levels and were used to engineer formaldehyde-inducible promoters with predictable activities. Engineered variants demonstrated up to 14-fold lower basal expression, 13-fold higher induced expression, and a 3.6-fold stronger response as indicated by relative dynamic range. Finally, an engineered formaldehyde-inducible promoter was employed to drive the expression of heterologous methanol assimilation genes and achieved increased biomass levels on methanol, a non-native substrate of E. coli. PMID:28463494
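The per-position analysis described above can be sketched as a weighted tally: for each (position, base) pair, average the expression-bin index over every read carrying that base. This is an illustrative sketch of the sort-seq idea, not the authors' pipeline; the `reads` tuple format and the function name are assumptions.

```python
from collections import defaultdict

def positionwise_mean_bin(reads):
    """Estimate the expression effect of each (position, base) pair as the
    count-weighted mean expression bin of all reads carrying that base.

    reads: iterable of (sequence, bin_index, read_count) tuples, with all
    sequences aligned to the same promoter coordinates.
    """
    counts = defaultdict(float)
    weighted = defaultdict(float)
    for seq, bin_index, n in reads:
        for pos, base in enumerate(seq):
            counts[(pos, base)] += n
            weighted[(pos, base)] += bin_index * n
    return {key: weighted[key] / counts[key] for key in counts}

# Toy example: a G at position 0 shifts reads toward the high-expression bin.
reads = [("AC", 1, 10), ("GC", 3, 10)]
effects = positionwise_mean_bin(reads)
```

In a real sort-seq analysis the bin index would be replaced by a calibrated expression value per FACS gate, and positions whose mean shifts strongly with a base change (e.g. within the repressor binding sites) are the candidates for tuning.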
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1990-01-01
Random access signaling, which allows slotted packets to spill over into adjacent slots, is investigated. It is shown that sloppy-slotted ALOHA can always provide higher throughput than conventional slotted ALOHA. The degree of improvement depends on the timing error distribution. Throughput performance is presented for Gaussian timing error distributions, modified to include timing error corrections. A general channel capacity lower bound, independent of the specific timing error distribution, is also presented.
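For reference, the conventional slotted-ALOHA baseline against which sloppy-slotted ALOHA is compared has throughput S = G e^(-G) at offered load G. A minimal Monte Carlo check of that baseline follows; the sloppy-slotted variant additionally depends on the timing-error distribution, which is not modeled here.

```python
import math
import random

def slotted_aloha_throughput(G):
    """Analytic throughput of conventional slotted ALOHA: S = G * exp(-G)."""
    return G * math.exp(-G)

def simulate_slotted_aloha(G, n_slots=200_000, seed=1):
    """Monte Carlo estimate: Poisson(G) packet arrivals per slot; a slot
    carries a successful packet iff exactly one packet arrives in it."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # Sample Poisson(G) with Knuth's multiplication method.
        threshold, k, p = math.exp(-G), 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        if k - 1 == 1:  # exactly one arrival -> no collision
            successes += 1
    return successes / n_slots
```

The analytic peak is S = 1/e ≈ 0.368 at G = 1; the paper's point is that allowing spillover into adjacent slots can only raise this, by an amount set by the timing-error distribution.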
High-throughput analysis of yeast replicative aging using a microfluidic system
Jo, Myeong Chan; Liu, Wei; Gu, Liang; Dang, Weiwei; Qin, Lidong
2015-01-01
Saccharomyces cerevisiae has been an important model for studying the molecular mechanisms of aging in eukaryotic cells. However, the laborious and low-throughput methods of current yeast replicative lifespan assays limit their usefulness as a broad genetic screening platform for research on aging. We address this limitation by developing an efficient, high-throughput microfluidic single-cell analysis chip in combination with high-resolution time-lapse microscopy. This innovative design enables, to our knowledge for the first time, the determination of the yeast replicative lifespan in a high-throughput manner. Morphological and phenotypical changes during aging can also be monitored automatically with a much higher throughput than previous microfluidic designs. We demonstrate highly efficient trapping and retention of mother cells, determination of the replicative lifespan, and tracking of yeast cells throughout their entire lifespan. Using the high-resolution and large-scale data generated from the high-throughput yeast aging analysis (HYAA) chips, we investigated particular longevity-related changes in cell morphology and characteristics, including critical cell size, terminal morphology, and protein subcellular localization. In addition, because of the significantly improved retention rate of yeast mother cells, the HYAA-Chip was capable of demonstrating replicative lifespan extension by calorie restriction. PMID:26170317
Break-up of droplets in a concentrated emulsion flowing through a narrow constriction
NASA Astrophysics Data System (ADS)
Kim, Minkyu; Rosenfeld, Liat; Tang, Sindy; Tang Lab Team
2014-11-01
Droplet microfluidics has enabled a wide range of high-throughput screening applications. Compared with other technologies such as robotic screening, droplet microfluidics has 1000 times higher throughput, which makes it one of the most promising platforms for ultrahigh-throughput screening applications. Few studies have considered the throughput of the droplet interrogation process, however. In this research, we show that the probability of break-up increases with increasing flow rate, entrance angle to the constriction, and size of the drops. Since single drops do not break at the highest flow rate used in the system, break-ups occur primarily from the interactions between highly packed droplets close to each other. Moreover, the probabilistic nature of the break-up process arises from the stochastic variations in the packing configuration. Our results can be used to calculate the maximum throughput of the serial interrogation process. For 40-pL drops, the highest throughput with less than 1% droplet break-up was measured to be approximately 7,000 drops per second. In addition, the results are useful for understanding the behavior of concentrated emulsions in applications such as mobility control in enhanced oil recovery.
Fourier transform spectroscopy of cotton and cotton trash
USDA-ARS?s Scientific Manuscript database
Fourier transform techniques have been shown to have higher signal-to-noise capabilities, higher throughput, negligible stray light, continuous spectra, and higher resolution. In addition, FT spectroscopy allows frequencies in spectra to be measured all at once and affords more precise wavelength calibration.
Microarray Detection of Duplex and Triplex DNA Binders with DNA-Modified Gold Nanoparticles
Lytton-Jean, Abigail K. R.; Han, Min Su; Mirkin, Chad A.
2008-01-01
We have designed a chip-based assay, using microarray technology, for determining the relative binding affinities of duplex and triplex DNA binders. This assay combines the high discrimination capabilities afforded by DNA-modified Au nanoparticles with the high-throughput capabilities of DNA microarrays. The detection and screening of duplex DNA binders are important because these molecules, in many cases, are potential anticancer agents as well as toxins. Triplex DNA binders are also promising drug candidates. These molecules, in conjunction with triplex forming oligonucleotides, could potentially be used to achieve control of gene expression by interfering with transcription factors that bind to DNA. Therefore, the ability to screen for these molecules in a high-throughput fashion could dramatically improve the drug screening process. The assay reported here provides excellent discrimination between strong, intermediate, and weak duplex and triplex DNA binders in a high-throughput fashion. PMID:17614366
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, J., E-mail: jkat@lle.rochester.edu; Boni, R.; Rivlis, R.
A high-throughput, broadband optical spectrometer coupled to the Rochester optical streak system equipped with a Photonis P820 streak tube was designed to record time-resolved spectra with 1-ps time resolution. Spectral resolution of 0.8 nm is achieved over a wavelength coverage range of 480 to 580 nm, using a 300-groove/mm diffraction grating in conjunction with a pair of 225-mm-focal-length doublets operating at an f/2.9 aperture. Overall pulse-front tilt across the beam diameter generated by the diffraction grating is reduced by preferentially delaying discrete segments of the collimated input beam using a 34-element reflective echelon optic. The introduced delay temporally aligns the beam segments and the net pulse-front tilt is limited to the accumulation across an individual sub-element. The resulting spectrometer design balances resolving power and pulse-front tilt while maintaining high throughput.
Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.
2015-01-01
Optical lithography, the enabling process for defining features, has been widely used in semiconductor industry and many other nanotechnology applications. Advances of nanotechnology require developments of high-throughput optical lithography capabilities to overcome the optical diffraction limit and meet the ever-decreasing device dimensions. We report our recent experimental advancements to scale up diffraction unlimited optical lithography in a massive scale using the near field nanolithography capabilities of bowtie apertures. A record number of near-field optical elements, an array of 1,024 bowtie antenna apertures, are simultaneously employed to generate a large number of patterns by carefully controlling their working distances over the entire array using an optical gap metrology system. Our experimental results reiterated the ability of using massively-parallel near-field devices to achieve high-throughput optical nanolithography, which can be promising for many important nanotechnology applications such as computation, data storage, communication, and energy. PMID:26525906
Genome sequencing in microfabricated high-density picolitre reactors.
Margulies, Marcel; Egholm, Michael; Altman, William E; Attiya, Said; Bader, Joel S; Bemben, Lisa A; Berka, Jan; Braverman, Michael S; Chen, Yi-Ju; Chen, Zhoutao; Dewell, Scott B; Du, Lei; Fierro, Joseph M; Gomes, Xavier V; Godwin, Brian C; He, Wen; Helgesen, Scott; Ho, Chun Heen; Ho, Chun He; Irzyk, Gerard P; Jando, Szilveszter C; Alenquer, Maria L I; Jarvie, Thomas P; Jirage, Kshama B; Kim, Jong-Bum; Knight, James R; Lanza, Janna R; Leamon, John H; Lefkowitz, Steven M; Lei, Ming; Li, Jing; Lohman, Kenton L; Lu, Hong; Makhijani, Vinod B; McDade, Keith E; McKenna, Michael P; Myers, Eugene W; Nickerson, Elizabeth; Nobile, John R; Plant, Ramona; Puc, Bernard P; Ronan, Michael T; Roth, George T; Sarkis, Gary J; Simons, Jan Fredrik; Simpson, John W; Srinivasan, Maithreyan; Tartaro, Karrie R; Tomasz, Alexander; Vogt, Kari A; Volkmer, Greg A; Wang, Shally H; Wang, Yong; Weiner, Michael P; Yu, Pengguang; Begley, Richard F; Rothberg, Jonathan M
2005-09-15
The proliferation of large-scale DNA-sequencing projects in recent years has driven a search for alternative methods to reduce time and cost. Here we describe a scalable, highly parallel sequencing system with raw throughput significantly greater than that of state-of-the-art capillary electrophoresis instruments. The apparatus uses a novel fibre-optic slide of individual wells and is able to sequence 25 million bases, at 99% or better accuracy, in one four-hour run. To achieve an approximately 100-fold increase in throughput over current Sanger sequencing technology, we have developed an emulsion method for DNA amplification and an instrument for sequencing by synthesis using a pyrosequencing protocol optimized for solid support and picolitre-scale volumes. Here we show the utility, throughput, accuracy and robustness of this system by shotgun sequencing and de novo assembly of the Mycoplasma genitalium genome with 96% coverage at 99.96% accuracy in one run of the machine.
The promise and challenge of high-throughput sequencing of the antibody repertoire
Georgiou, George; Ippolito, Gregory C; Beausang, John; Busse, Christian E; Wardemann, Hedda; Quake, Stephen R
2014-01-01
Efforts to determine the antibody repertoire encoded by B cells in the blood or lymphoid organs using high-throughput DNA sequencing technologies have been advancing at an extremely rapid pace and are transforming our understanding of humoral immune responses. Information gained from high-throughput DNA sequencing of immunoglobulin genes (Ig-seq) can be applied to detect B-cell malignancies with high sensitivity, to discover antibodies specific for antigens of interest, to guide vaccine development and to understand autoimmunity. Rapid progress in the development of experimental protocols and informatics analysis tools is helping to reduce sequencing artifacts, to achieve more precise quantification of clonal diversity and to extract the most pertinent biological information. That said, broader application of Ig-seq, especially in clinical settings, will require the development of a standardized experimental design framework that will enable the sharing and meta-analysis of sequencing data generated by different laboratories. PMID:24441474
High throughput ion-channel pharmacology: planar-array-based voltage clamp.
Kiss, Laszlo; Bennett, Paul B; Uebele, Victor N; Koblan, Kenneth S; Kane, Stefanie A; Neagle, Brad; Schroeder, Kirk
2003-02-01
Technological advances often drive major breakthroughs in biology. Examples include PCR, automated DNA sequencing, confocal/single photon microscopy, AFM, and voltage/patch-clamp methods. The patch-clamp method, first described nearly 30 years ago, was a major technical achievement that permitted voltage-clamp analysis (membrane potential control) of ion channels in most cells and revealed a role for channels in unimagined areas. Because of the high information content, voltage clamp is the best way to study ion-channel function; however, throughput is too low for drug screening. Here we describe a novel breakthrough planar-array-based HT patch-clamp technology developed by Essen Instruments capable of voltage-clamping thousands of cells per day. This technology provides greater than two orders of magnitude increase in throughput compared with the traditional voltage-clamp techniques. We have applied this method to study the hERG K(+) channel and to determine the pharmacological profile of QT prolonging drugs.
High-speed zero-copy data transfer for DAQ applications
NASA Astrophysics Data System (ADS)
Pisani, Flavio; Cámpora Pérez, Daniel Hugo; Neufeld, Niko
2015-05-01
The LHCb Data Acquisition (DAQ) will be upgraded in 2020 to a trigger-free readout. In order to achieve this goal we will need to connect around 500 nodes with a total network capacity of 32 Tb/s. To reach such a high network capacity we are testing zero-copy technology to maximize the theoretical link throughput without adding excessive CPU and memory-bandwidth overhead, leaving resources free for data processing and thereby reducing the power, space and cost required for the same result. We developed a modular test application which can be used with different transport layers. For the zero-copy implementation we chose the OFED IBVerbs API because it provides low-level access and high throughput. We present throughput and CPU usage measurements of 40 GbE solutions using Remote Direct Memory Access (RDMA), for several network configurations, to test the scalability of the system.
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has driven demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
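The check-node update at the heart of iterative LDPC decoding is the kernel such GPU decoders parallelize across thousands of threads. A minimal sketch of one check-node update in the min-sum approximation to the sum-product algorithm (an illustration of the algorithmic step, not the paper's GPU implementation):

```python
def check_node_update(llrs):
    """Min-sum check-node update over log-likelihood ratios (LLRs).

    The message sent back to each connected bit is the product of the signs
    of all *other* incoming LLRs times the minimum of their magnitudes."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out

# One check connected to three bits with incoming LLRs 2.0, -3.0, 1.5:
messages = check_node_update([2.0, -3.0, 1.5])
```

Because every check node (and every variable node) can be updated independently within an iteration, the algorithm maps naturally onto GPU thread blocks, which is what makes the reported speedup possible.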
Next-Generation High-Throughput Functional Annotation of Microbial Genomes.
Baric, Ralph S; Crosson, Sean; Damania, Blossom; Miller, Samuel I; Rubin, Eric J
2016-10-04
Host infection by microbial pathogens cues global changes in microbial and host cell biology that facilitate microbial replication and disease. The complete maps of thousands of bacterial and viral genomes have recently been defined; however, the rate at which physiological or biochemical functions have been assigned to genes has greatly lagged. The National Institute of Allergy and Infectious Diseases (NIAID) addressed this gap by creating functional genomics centers dedicated to developing high-throughput approaches to assign gene function. These centers require broad-based and collaborative research programs to generate and integrate diverse data to achieve a comprehensive understanding of microbial pathogenesis. High-throughput functional genomics can lead to new therapeutics and better understanding of the next generation of emerging pathogens by rapidly defining new general mechanisms by which organisms cause disease and replicate in host tissues, and by accelerating the rate at which functional data reach the scientific community. Copyright © 2016 Baric et al.
Bahrami-Samani, Emad; Vo, Dat T.; de Araujo, Patricia Rosa; Vogel, Christine; Smith, Andrew D.; Penalva, Luiz O. F.; Uren, Philip J.
2014-01-01
Co- and post-transcriptional regulation of gene expression is complex and multi-faceted, spanning the complete RNA lifecycle from genesis to decay. High-throughput profiling of the constituent events and processes is achieved through a range of technologies that continue to expand and evolve. Fully leveraging the resulting data is non-trivial, and requires the use of computational methods and tools carefully crafted for specific data sources and often intended to probe particular biological processes. Drawing upon databases of information pre-compiled by other researchers can further elevate analyses. Within this review, we describe the major co- and post-transcriptional events in the RNA lifecycle that are amenable to high-throughput profiling. We place specific emphasis on the analysis of the resulting data, in particular the computational tools and resources available, as well as looking towards future challenges that remain to be addressed. PMID:25515586
Ching, Travers; Zhu, Xun; Garmire, Lana X
2018-04-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox-proportional hazards regression (with LASSO, ridge, and mimimax concave penalty), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer node provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
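Cox-nnet replaces the linear predictor of Cox regression with a neural network but keeps the Cox partial likelihood as the training objective. A sketch of that likelihood in its Breslow form (function and argument names are illustrative, and ties beyond Breslow handling are ignored):

```python
import math

def cox_partial_log_likelihood(times, events, scores):
    """Breslow partial log-likelihood for survival data.

    For each subject i with an observed event (events[i] == 1), add
        score_i - log( sum_{j in risk set of i} exp(score_j) ),
    where the risk set is everyone still under observation at time t_i.
    scores are the model's risk predictions (linear or neural-network output)."""
    ll = 0.0
    for i, (t_i, observed) in enumerate(zip(times, events)):
        if not observed:
            continue  # censored subjects contribute only through risk sets
        risk_sum = sum(
            math.exp(s) for t_j, s in zip(times, scores) if t_j >= t_i
        )
        ll += scores[i] - math.log(risk_sum)
    return ll

# With identical scores, each event contributes -log(risk-set size):
ll = cox_partial_log_likelihood([1.0, 2.0, 3.0], [1, 1, 1], [0.0, 0.0, 0.0])
```

Maximizing this quantity (equivalently, minimizing its negative as a loss) is what ties the network's hidden representation to patient prognosis.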
Keenan, Martine; Alexander, Paul W; Chaplin, Jason H; Abbott, Michael J; Diao, Hugo; Wang, Zhisen; Best, Wayne M; Perez, Catherine J; Cornwall, Scott M J; Keatley, Sarah K; Thompson, R C Andrew; Charman, Susan A; White, Karen L; Ryan, Eileen; Chen, Gong; Ioset, Jean-Robert; von Geldern, Thomas W; Chatelain, Eric
2013-10-01
Inhibitors of Trypanosoma cruzi with novel mechanisms of action are urgently required to diversify the current clinical and preclinical pipelines. Increasing the number and diversity of hits available for assessment at the beginning of the discovery process will help to achieve this aim. We report the evaluation of multiple hits generated from a high-throughput screen to identify inhibitors of T. cruzi and from these studies the discovery of two novel series currently in lead optimization. Lead compounds from these series potently and selectively inhibit growth of T. cruzi in vitro and the most advanced compound is orally active in a subchronic mouse model of T. cruzi infection. High-throughput screening of novel compound collections has an important role to play in diversifying the trypanosomatid drug discovery portfolio. A new T. cruzi inhibitor series with good drug-like properties and promising in vivo efficacy has been identified through this process.
White, David T; Eroglu, Arife Unal; Wang, Guohua; Zhang, Liyun; Sengupta, Sumitra; Ding, Ding; Rajpurohit, Surendra K; Walker, Steven L; Ji, Hongkai; Qian, Jiang; Mumm, Jeff S
2017-01-01
The zebrafish has emerged as an important model for whole-organism small-molecule screening. However, most zebrafish-based chemical screens have achieved only mid-throughput rates. Here we describe a versatile whole-organism drug discovery platform that can achieve true high-throughput screening (HTS) capacities. This system combines our automated reporter quantification in vivo (ARQiv) system with customized robotics, and is termed ‘ARQiv-HTS’. We detail the process of establishing and implementing ARQiv-HTS: (i) assay design and optimization, (ii) calculation of sample size and hit criteria, (iii) large-scale egg production, (iv) automated compound titration, (v) dispensing of embryos into microtiter plates, and (vi) reporter quantification. We also outline what we see as best practice strategies for leveraging the power of ARQiv-HTS for zebrafish-based drug discovery, and address technical challenges of applying zebrafish to large-scale chemical screens. Finally, we provide a detailed protocol for a recently completed inaugural ARQiv-HTS effort, which involved the identification of compounds that elevate insulin reporter activity. Compounds that increased the number of insulin-producing pancreatic beta cells represent potential new therapeutics for diabetic patients. For this effort, individual screening sessions took 1 week to conclude, and sessions were performed iteratively approximately every other day to increase throughput. At the conclusion of the screen, more than a half million drug-treated larvae had been evaluated. Beyond this initial example, however, the ARQiv-HTS platform is adaptable to almost any reporter-based assay designed to evaluate the effects of chemical compounds in living small-animal models. ARQiv-HTS thus enables large-scale whole-organism drug discovery for a variety of model species and from numerous disease-oriented perspectives. PMID:27831568
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off of the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method, and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical realtime solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets the Xilinx Virtex IV FPGAs, and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
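The adaptive Golomb coding used by the FL algorithm is commonly realized with Golomb-Rice codes (parameter k restricted to powers of two), which map small prediction residuals to short codewords. A minimal encode/decode pair for a fixed k, as a sketch of the code family rather than the flight FPGA logic:

```python
def rice_encode(n, k):
    """Golomb-Rice codeword for non-negative integer n: the quotient n >> k
    in unary (q ones followed by a zero), then the low k remainder bits."""
    q = n >> k
    bits = "1" * q + "0"
    if k:
        bits += format(n & ((1 << k) - 1), f"0{k}b")
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

word = rice_encode(9, 2)  # quotient 2 -> "110", remainder 1 -> "01"
```

In an adaptive coder, k is re-estimated from recent residual statistics so that typical residuals land near the shortest codewords; the one-sample-per-clock FPGA pipeline exploits the fact that both the prediction and this coding step are simple integer operations.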
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
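A synthetic magnitude in a natural system is an integral of the source spectrum weighted by the total throughput, and the chromatic error is the magnitude shift induced when the throughput changes shape. A discretized sketch on a common wavelength grid (simple sums rather than proper integrals; all names and the toy bandpasses are illustrative, not DES values):

```python
import math

def synthetic_mag(flux, throughput):
    """Instrumental synthetic magnitude on a common wavelength grid:
    m = -2.5 * log10( sum(F * S) / sum(S) ),
    the throughput-weighted mean flux in magnitudes (zeropoint omitted)."""
    num = sum(f * s for f, s in zip(flux, throughput))
    den = sum(throughput)
    return -2.5 * math.log10(num / den)

def chromatic_error(flux, throughput_ref, throughput_obs):
    """Magnitude offset for a source of the given spectral shape when the
    observed throughput deviates from the reference (natural) system."""
    return (synthetic_mag(flux, throughput_obs)
            - synthetic_mag(flux, throughput_ref))

flat = [1.0, 1.0, 1.0]   # flat-spectrum source: immune to bandpass shape
red = [0.5, 1.0, 2.0]    # red source: sensitive to a red-shifted bandpass
s_ref = [0.2, 1.0, 0.2]  # reference (natural-system) bandpass
s_obs = [0.1, 1.0, 0.4]  # observed bandpass skewed toward longer wavelengths
```

This illustrates the key point of the paper: the error vanishes for a source whose spectrum matches the calibration assumption (here, flat) and grows with the color mismatch between the source and the reference spectrum, which is why the correction must be made per exposure and per focal-plane position.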
Passive and Active Monitoring on a High Performance Research Network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Warren
2001-05-01
The bold network challenges described in "Internet End-to-end Performance Monitoring for the High Energy and Nuclear Physics Community" presented at PAM 2000 have been tackled by the intrepid administrators and engineers providing the network services. After less than a year, the BaBar collaboration has collected almost 100 million particle collision events in a database approaching 165 TB (tera = 10¹²). Around 20 TB has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, for processing, and around 40 TB of simulated events have been imported to SLAC from Lawrence Livermore National Laboratory (LLNL). An unforeseen challenge has arisen due to recent events and highlighted security concerns at DoE-funded labs. New rules and regulations suggest it is only a matter of time before many active performance measurements may not be possible between many sites. Yet, at the same time, the importance of understanding every aspect of the network and eradicating packet loss for high throughput data transfers has become apparent. Work at SLAC to employ passive monitoring using netflow and OC3MON is underway, and techniques to supplement and possibly replace the active measurements are being considered. This paper will detail the special needs and traffic characterization of a remarkable research project, and how the networking hurdles have been resolved (or not!) to achieve the required high data throughput. Results from active and passive measurements will be compared, and methods for achieving high throughput and the effect on the network will be assessed along with tools that directly measure throughput and applications used to actually transfer data.
A Barcoding Strategy Enabling Higher-Throughput Library Screening by Microscopy.
Chen, Robert; Rishi, Harneet S; Potapov, Vladimir; Yamada, Masaki R; Yeh, Vincent J; Chow, Thomas; Cheung, Celia L; Jones, Austin T; Johnson, Terry D; Keating, Amy E; DeLoache, William C; Dueber, John E
2015-11-20
Dramatic progress has been made in the design and build phases of the design-build-test cycle for engineering cells. However, the test phase usually limits throughput, as many outputs of interest are not amenable to rapid analytical measurements. For example, phenotypes such as motility, morphology, and subcellular localization can be readily measured by microscopy, but analysis of these phenotypes is notoriously slow. To increase throughput, we developed microscopy-readable barcodes (MiCodes) composed of fluorescent proteins targeted to discernible organelles. In this system, a unique barcode can be genetically linked to each library member, making possible the parallel analysis of phenotypes of interest via microscopy. As a first demonstration, we MiCoded a set of synthetic coiled-coil leucine zipper proteins to allow an 8 × 8 matrix to be tested for specific interactions in micrographs consisting of mixed populations of cells. A novel microscopy-readable two-hybrid fluorescence localization assay for probing candidate interactions in the cytosol was also developed using a bait protein targeted to the peroxisome and a prey protein tagged with a fluorescent protein. This work introduces a generalizable, scalable platform for making microscopy amenable to higher-throughput library screening experiments, thereby coupling the power of imaging with the utility of combinatorial search paradigms.
Wu, Nicholas C.; Young, Arthur P.; Al-Mawsawi, Laith Q.; Olson, C. Anders; Feng, Jun; Qi, Hangfei; Luan, Harding H.; Li, Xinmin; Wu, Ting-Ting
2014-01-01
ABSTRACT Viral proteins often display several functions which require multiple assays to dissect their genetic basis. Here, we describe a systematic approach to screen for loss-of-function mutations that confer a fitness disadvantage under a specified growth condition. Our methodology was achieved by genetically monitoring a mutant library under two growth conditions, with and without interferon, by deep sequencing. We employed a molecular tagging technique to distinguish true mutations from sequencing error. This approach enabled us to identify mutations that were negatively selected against, in addition to those that were positively selected for. Using this technique, we identified loss-of-function mutations in the influenza A virus NS segment that were sensitive to type I interferon in a high-throughput fashion. Mechanistic characterization further showed that a single substitution, D92Y, resulted in the inability of NS to inhibit RIG-I ubiquitination. The approach described in this study can be applied under any specified condition for any virus that can be genetically manipulated. IMPORTANCE Traditional genetics focuses on a single genotype-phenotype relationship, whereas high-throughput genetics permits phenotypic characterization of numerous mutants in parallel. High-throughput genetics often involves monitoring of a mutant library with deep sequencing. However, deep sequencing suffers from a high error rate (∼0.1 to 1%), which is usually higher than the occurrence frequency for individual point mutations within a mutant library. Therefore, only mutations that confer a fitness advantage can be identified with confidence due to an enrichment in the occurrence frequency. In contrast, it is impossible to identify deleterious mutations using most next-generation sequencing techniques. In this study, we have applied a molecular tagging technique to distinguish true mutations from sequencing errors. 
It enabled us to identify mutations that underwent negative selection, in addition to mutations that experienced positive selection. This study provides a proof of concept by screening for loss-of-function mutations on the influenza A virus NS segment that are involved in its anti-interferon activity. PMID:24965464
Shahini, Mehdi; Yeow, John T W
2011-08-12
We report on the enhancement of electrical cell lysis using carbon nanotubes (CNTs). Electrical cell lysis systems are widely utilized in microchips as they are well suited to integration into lab-on-a-chip devices. However, cell lysis based on electrical mechanisms has high voltage requirements. Here, we demonstrate that by incorporating CNTs into microfluidic electrolysis systems, the required voltage for lysis is reduced by half and the lysis throughput at low voltages is improved by ten times, compared to non-CNT microchips. In our experiment, E. coli cells are lysed while passing through an electric field in a microchannel. Based on the lightning rod effect, the electric field strengthened at the tip of the CNTs enhances cell lysis at lower voltage and higher throughput. This approach enables easy integration of cell lysis with other on-chip high-throughput sample-preparation processes.
Photometric Repeatability of Scanned Imagery: UVIS
NASA Astrophysics Data System (ADS)
Shanahan, Clare E.; McCullough, Peter; Baggett, Sylvia
2017-08-01
We provide the preliminary results of a study on the photometric repeatability of spatial scans of bright, isolated white dwarf stars with the UVIS channel of the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). We analyze straight-line scans from the first pair of identical orbits of HST program 14878 to assess whether sub-0.1% repeatability can be attained with WFC3/UVIS. This study is motivated by the desire to achieve better signal-to-noise in the UVIS contamination and stability monitor, in which observations of standard stars in staring mode have been taken from the installation of WFC3 in 2009 to the present to assess temporal photometric stability. Higher signal-to-noise in this program would greatly benefit the sensitivity to detect contamination, and better characterize the observed small throughput drifts over time. We find excellent repeatability between identical visits of program 14878, with sub-0.1% repeatability achieved in most filters. These results support the initiative to transition the UVIS contamination and photometric stability monitor from staring mode images to spatial scans.
Synthetic antimicrobial peptides as agricultural pesticides for plant-disease control.
Montesinos, Emilio; Bardají, Eduard
2008-07-01
There is a need for antimicrobial compounds in agriculture for plant-disease control, with low toxicity and reduced negative environmental impact. Antimicrobial peptides are produced by living organisms and offer strong possibilities in agriculture because new compounds can be developed based on natural structures with improved properties of activity, specificity, biodegradability, and toxicity. Design of new molecules has been achieved using combinatorial-chemistry procedures coupled to high-throughput screening systems and data processing with design-of-experiments (DOE) methodology to obtain QSAR equation models and optimized compounds. Upon selection of the best candidates with low cytotoxicity and moderate stability to protease digestion, anti-infective activity has been evaluated in plant-pathogen model systems. Suitable compounds have been submitted to acute toxicity testing in higher organisms and exhibited a low toxicity profile in a mouse model. Large-scale production can be achieved by solution organic or chemoenzymatic procedures in the case of very small peptides, but, in many cases, production can be performed by biotechnological methods using genetically modified microorganisms (fermentation) or transgenic crops (plant biofactories).
Jin, Ningben; Shou, Zongqi; Yuan, Haiping; Lou, Ziyang; Zhu, Nanwen
2016-03-01
The effect of ferric nitrate on the microbial community and the enhancement of the stabilization process for sewage sludge were investigated in autothermal thermophilic aerobic digestion. Disinhibition of volatile fatty acids (VFA) was obtained, with alteration of the order of individual VFA concentrations. Bacterial taxonomic identification by 454 high-throughput pyrosequencing found that the dominant phylum Proteobacteria in the non-dosing group was converted to the phylum Firmicutes after ferric nitrate was added, and a simplification of bacterial phylotypes was achieved. The preponderant Tepidiphilus sp. vanished, and Symbiobacterium sp. and Tepidimicrobium sp. were the most advantageous phylotypes with conditioning of ferric nitrate. Consequently, biodegradable substances in dissolved organic matter increased, which contributed to a favorable environment for microbial metabolism and resulted in acceleration of sludge stabilization. Ultimately, a higher stabilization level was achieved, as the ratio of soluble chemical oxygen demand to total chemical oxygen demand (TCOD) decreased while TCOD was reduced as well in the dosing group compared to the non-dosing group. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhao, Xiaowen; Bailey, Mark R; Emery, Warren R; Lambooy, Peter K; Chen, Dayue
2007-06-01
Nanofiltration is commonly introduced into purification processes of biologics produced in mammalian cells to serve as a designated step for removal of potential exogenous viral contaminants and endogenous retrovirus-like particles. The LRV (log reduction value) achieved by nanofiltration is often determined by a cell-based infectivity assay, which is time-consuming and labour-intensive. We have explored the possibility of employing QPCR (quantitative PCR) to evaluate the LRV achieved by nanofiltration in scaled-down studies using two model viruses, namely xenotropic murine leukemia virus and murine minute virus. We report here the successful development of a QPCR-based method suitable for quantification of virus removal by nanofiltration. The method includes a nuclease treatment step to remove free viral nucleic acids, while viral genome associated with intact virus particles is shielded from the nuclease. In addition, HIV Armored RNA was included as an internal control to ensure the accuracy and reliability of the method. The QPCR-based method described here provides several advantages over traditional cell-based infectivity assays, such as better sensitivity, faster turnaround time, reduced cost and higher throughput.
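Once QPCR yields genome copies per unit volume before and after the filter, the LRV itself is a one-line calculation on total viral loads. A minimal sketch with hypothetical titers and volumes (not data from the study):

```python
from math import log10

def log_reduction(titer_in, vol_in, titer_out, vol_out):
    """LRV = log10(total genome copies loaded / total copies in the filtrate).
    Titers are in copies/mL, volumes in mL; units must match across streams."""
    return log10((titer_in * vol_in) / (titer_out * vol_out))

# Hypothetical scaled-down run: 1e7 copies/mL in 100 mL of feed,
# 50 copies/mL detected in 95 mL of filtrate
lrv = log_reduction(1e7, 100.0, 50.0, 95.0)      # ~5.3 log10 reduction
```

In practice the filtrate titer is often at or below the assay limit of detection, in which case the computed LRV is reported as a "greater than" value rather than a point estimate.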
Line spread functions of blazed off-plane gratings operated in the Littrow mounting
NASA Astrophysics Data System (ADS)
DeRoo, Casey T.; McEntaffer, Randall L.; Miles, Drew M.; Peterson, Thomas J.; Marlowe, Hannah; Tutt, James H.; Donovan, Benjamin D.; Menz, Benedikt; Burwitz, Vadim; Hartner, Gisela; Allured, Ryan; Smith, Randall K.; Günther, Ramses; Yanson, Alex; Vacanti, Giuseppe; Ackermann, Marcelo
2016-04-01
Future soft x-ray (10 to 50 Å) spectroscopy missions require higher effective areas and resolutions to perform critical science that cannot be done by instruments on current missions. An x-ray grating spectrometer employing off-plane reflection gratings would be capable of meeting these performance criteria. Off-plane gratings with blazed groove facets operating in the Littrow mounting can be used to achieve excellent throughput into diffraction orders capable of high resolution. We have fabricated two off-plane gratings with blazed groove profiles via a technique that uses commonly available microfabrication processes, is easily scaled for mass production, and yields gratings customized for a given mission architecture. Both fabricated gratings were tested in the Littrow mounting at the Max Planck Institute for Extraterrestrial Physics (MPE) PANTER x-ray test facility to assess their performance. The line spread functions of diffracted orders were measured, and a maximum resolution of 800±20 is reported. In addition, we also observe evidence of a blaze effect from measurements of the relative efficiencies of the diffracted orders.
A continuous high-throughput bioparticle sorter based on 3D traveling-wave dielectrophoresis.
Cheng, I-Fang; Froude, Victoria E; Zhu, Yingxi; Chang, Hsueh-Chia; Chang, Hsien-Chang
2009-11-21
We present a high throughput (maximum flow rate approximately 10 µl/min or linear velocity approximately 3 mm/s) continuous bio-particle sorter based on 3D traveling-wave dielectrophoresis (twDEP) at an optimum AC frequency of 500 kHz. The high throughput sorting is achieved with a sustained twDEP particle force normal to the continuous through-flow, which is applied over the entire chip by a single 3D electrode array. The design allows continuous fractionation of micron-sized particles into different downstream sub-channels based on differences in their twDEP mobility on both sides of the cross-over. Conventional DEP is integrated upstream to focus the particles into a single levitated queue to allow twDEP sorting by mobility difference and to minimize sedimentation and field-induced lysis. The 3D electrode array design minimizes the offsetting effect of nDEP (negative DEP with particle force towards regions with weak fields) on twDEP such that both forces increase monotonically with voltage to further increase the throughput. Effective focusing and separation of red blood cells from debris-filled heterogeneous samples are demonstrated, as well as size-based separation of poly-dispersed liposome suspensions into two distinct bands at 2.3 to 4.6 µm and 1.5 to 2.7 µm, at the highest throughput recorded in hand-held chips of 6 µl/min.
Protocols and programs for high-throughput growth and aging phenotyping in yeast.
Jung, Paul P; Christian, Nils; Kay, Daniel P; Skupin, Alexander; Linster, Carole L
2015-01-01
In microorganisms, and more particularly in yeasts, a standard phenotyping approach consists in the analysis of fitness by growth rate determination in different conditions. One growth assay that combines high throughput with high resolution involves the generation of growth curves from 96-well plate microcultivations in thermostated and shaking plate readers. To push the throughput of this method to the next level, we have adapted it in this study to the use of 384-well plates. The values of the extracted growth parameters (lag time, doubling time and yield of biomass) correlated well between experiments carried out in 384-well plates as compared to 96-well plates or batch cultures, validating the higher-throughput approach for phenotypic screens. The method is not restricted to the use of the budding yeast Saccharomyces cerevisiae, as shown by consistent results for other species selected from the Hemiascomycete class. Furthermore, we used the 384-well plate microcultivations to develop and validate a higher-throughput assay for yeast Chronological Life Span (CLS), a parameter that is still commonly determined by a cumbersome method based on counting "Colony Forming Units". To accelerate analysis of the large datasets generated by the described growth and aging assays, we developed the freely available software tools GATHODE and CATHODE. These tools allow for semi-automatic determination of growth parameters and CLS behavior from typical plate reader output files. The described protocols and programs will increase the time- and cost-efficiency of a number of yeast-based systems genetics experiments as well as various types of screens.
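Extracting a growth parameter such as doubling time from plate-reader curves reduces to a log-linear fit over exponential-phase points. The sketch below is a simplification of what analysis tools like GATHODE automate (phase detection, outlier handling); the data are synthetic and all names are illustrative:

```python
from math import log

def doubling_time(times, ods):
    """Doubling time from a least-squares fit of ln(OD) vs. time over points
    assumed to lie in the exponential growth phase: td = ln(2) / slope."""
    n = len(times)
    lods = [log(o) for o in ods]
    t_bar = sum(times) / n
    y_bar = sum(lods) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times, lods))
             / sum((t - t_bar) ** 2 for t in times))
    return log(2) / slope

# Synthetic 384-well readings following exact exponential growth (90 min doubling)
times = [0, 30, 60, 90, 120]                     # minutes
ods = [0.05 * 2 ** (t / 90) for t in times]      # optical density readings
td = doubling_time(times, ods)                   # recovers ~90 minutes
```

Real curves also need the lag time (when the fitted exponential intersects the initial OD) and the biomass yield (the plateau OD); restricting the fit to the truly exponential window is the step that most affects accuracy.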
Optimizing multi-dimensional high throughput screening using zebrafish
Truong, Lisa; Bugel, Sean M.; Chlebowski, Anna; Usenko, Crystal Y.; Simonich, Michael T.; Massey Simonich, Staci L.; Tanguay, Robert L.
2016-01-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories. PMID:27453428
NASA Technical Reports Server (NTRS)
Clare, L. P.; Yan, T.-Y.
1985-01-01
The analysis of the ALOHA random access protocol for communications channels with fading is presented. The protocol is modified to send multiple contiguous copies of a message at each transmission attempt. Both pure and slotted ALOHA channels are considered. A general two state model is used for the channel error process to account for the channel fading memory. It is shown that greater throughput and smaller delay may be achieved using repetitions. The model is applied to the analysis of the delay-throughput performance in a fading mobile communications environment. Numerical results are given for NASA's Mobile Satellite Experiment.
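The benefit of contiguous repetitions can be seen even in a memoryless toy model. This is not the article's two-state fading analysis: copies are treated as independent (which the two-state channel memory explicitly violates) and collisions are approximated as Poisson; all names are illustrative:

```python
from math import exp

def throughput(G, r, p):
    """Toy slotted-ALOHA throughput with r contiguous copies per message.
    The offered channel load becomes r*G; a copy is useful if it avoids
    collision (prob ~ e^(-r*G) under Poisson traffic) and survives the
    channel (prob 1 - p); the message succeeds if at least one copy is
    useful. Copies are treated as independent, unlike a fading channel."""
    per_copy = exp(-r * G) * (1.0 - p)
    return G * (1.0 - (1.0 - per_copy) ** r)

# On a poor channel (packet error rate p = 0.5), two copies beat one at moderate load
G = 0.2
single = throughput(G, 1, 0.5)
double = throughput(G, 2, 0.5)
```

Even this crude model shows the trade-off the article quantifies: repetitions raise the collision load (the e^(-rG) factor) but give error diversity (the power-of-r term), so they pay off when channel errors, not collisions, dominate.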
Howard, Dougal P; Marchand, Peter; McCafferty, Liam; Carmalt, Claire J; Parkin, Ivan P; Darr, Jawwad A
2017-04-10
High-throughput continuous hydrothermal flow synthesis was used to generate a library of aluminum and gallium-codoped zinc oxide nanoparticles of specific atomic ratios. Resistivities of the materials were determined by Hall effect measurements on heat-treated pressed discs and the results collated into a conductivity-composition map. Optimal resistivities of ∼9 × 10⁻³ Ω cm were reproducibly achieved for several samples, for example, codoped ZnO with 2 at% Ga and 1 at% Al. The optimum sample on balance of performance and cost was deemed to be ZnO codoped with 3 at% Al and 1 at% Ga.
Connecting Earth observation to high-throughput biodiversity data.
Bush, Alex; Sollmann, Rahel; Wilting, Andreas; Bohmann, Kristine; Cole, Beth; Balzter, Heiko; Martius, Christopher; Zlinszky, András; Calvignac-Spencer, Sébastien; Cobbold, Christina A; Dawson, Terence P; Emerson, Brent C; Ferrier, Simon; Gilbert, M Thomas P; Herold, Martin; Jones, Laurence; Leendertz, Fabian H; Matthews, Louise; Millington, James D A; Olson, John R; Ovaskainen, Otso; Raffaelli, Dave; Reeve, Richard; Rödel, Mark-Oliver; Rodgers, Torrey W; Snape, Stewart; Visseren-Hamakers, Ingrid; Vogler, Alfried P; White, Piran C L; Wooster, Martin J; Yu, Douglas W
2017-06-22
Understandably, given the fast pace of biodiversity loss, there is much interest in using Earth observation technology to track biodiversity, ecosystem functions and ecosystem services. However, because most biodiversity is invisible to Earth observation, indicators based on Earth observation could be misleading and reduce the effectiveness of nature conservation and even unintentionally decrease conservation effort. We describe an approach that combines automated recording devices, high-throughput DNA sequencing and modern ecological modelling to extract much more of the information available in Earth observation data. This approach is achievable now, offering efficient and near-real-time monitoring of management impacts on biodiversity and its functions and services.
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end to end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.
Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott
2018-05-01
The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, D. T.
Ion beam interference coating (IBIC) is a sputter-deposition process for multiple layers of optical thin films employing a Kaufman gun. It has achieved coatings of extremely low optical loss and high mechanical strength. It has many potential applications for a wide spectral range. This coating process is described in terms of principle, fabrication procedure, and optical measurements. Some discussion follows on the history and outlook of IBIC, with emphasis on how to achieve low loss and on throughput improvements.
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10⁻⁶ for throughputs from 1/15 up to 4 bits/s/Hz.
Ormes, James D; Zhang, Dan; Chen, Alex M; Hou, Shirley; Krueger, Davida; Nelson, Todd; Templeton, Allen
2013-02-01
There has been a growing interest in amorphous solid dispersions for bioavailability enhancement in drug discovery. Spray drying, as shown in this study, is well suited to produce prototype amorphous dispersions in the Candidate Selection stage, where drug supply is limited. This investigation mapped the processing window of a micro-spray dryer to achieve desired particle characteristics and optimize throughput/yield. Effects of processing variables on the properties of hypromellose acetate succinate were evaluated by a fractional factorial design of experiments. Parameters studied include solid loading, atomization, nozzle size, and spray rate. Response variables include particle size, morphology and yield. Unlike most other commercial small-scale spray dryers, the ProCepT was capable of producing particles over a relatively wide range of mean particle sizes, ca. 2-35 µm, allowing material properties to be tailored to support various applications. In addition, an optimized throughput of 35 g/hour with a yield of 75-95% was achieved, which is sufficient to support studies from lead identification/lead optimization through early safety studies. A regression model was constructed to quantify the relationship between processing parameters and the response variables. The response surface curves provide a useful tool for designing processing conditions, leading to a reduction in development time and drug usage to support drug discovery.
Emerging model systems for functional genomics analysis of Crassulacean acid metabolism.
Hartwell, James; Dever, Louisa V; Boxall, Susanna F
2016-06-01
Crassulacean acid metabolism (CAM) is one of three main pathways of photosynthetic carbon dioxide fixation found in higher plants. It stands out for its ability to underpin dramatic improvements in plant water use efficiency, which in turn has led to a recent renaissance in CAM research. The current ease with which candidate CAM-associated genes and proteins can be identified through high-throughput sequencing has opened up a new horizon for the development of diverse model CAM species that are amenable to genetic manipulations. The adoption of these model CAM species is underpinning rapid advances in our understanding of the complete gene set for CAM. We highlight recent breakthroughs in the functional characterisation of CAM genes that have been achieved through transgenic approaches. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
The Boom in 3D-Printed Sensor Technology
Xu, Yuanyuan; Wu, Xiaoyue; Guo, Xiao; Kong, Bin; Zhang, Min; Qian, Xiang; Mi, Shengli; Sun, Wei
2017-01-01
Future sensing applications will include high-performance features, such as toxin detection, real-time monitoring of physiological events, advanced diagnostics, and connected feedback. However, such multi-functional sensors require advancements in sensitivity, specificity, and throughput, with simultaneous delivery of multiple detections in a short time. Recent advances in 3D printing and electronics have brought us closer to sensors with multiplex advantages, and additive manufacturing approaches offer a new scope for sensor fabrication. To this end, we review recent advances in 3D-printed cutting-edge sensors. These achievements demonstrate the successful application of 3D-printing technology in sensor fabrication, and the selected studies deeply explore the potential for creating sensors with higher performance. Further development of multi-process 3D printing is expected to expand future sensor utility and availability. PMID:28534832
NASA Astrophysics Data System (ADS)
Smith, Geoffrey B.; Earp, Alan; Franklin, Jim B.; McCredie, Geoffrey
2001-11-01
Simple quantitative performance criteria are developed for translucent materials in terms of hemispherical visible transmittance and the angular spread of transmitted luminance using a half angle. Criteria are linked to applications in luminaires and skylights, with emphasis on maximising visible throughput while minimising glare. These basic criteria are also extended to substantial changes in angle of incidence. Example data are provided showing that acrylic pigmented with spherical polymer particles can have total hemispherical transmittance with weak thickness dependence, which is better than clear sheet, while the spread of transmitted light is quite thickness-sensitive and occurs over wider angles than with inorganic pigments. This combination means significantly fewer lamps can achieve specified lux levels with low glare, and smaller skylights can provide higher, more uniform daylight illuminance.
Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs
NASA Astrophysics Data System (ADS)
Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen
2012-03-01
The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. Nowadays, to improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In an environment with multiple contending nodes, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme can achieve significantly higher throughput than other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme can effectively respond to varying channel conditions.
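The threshold-adaptation idea can be sketched as a small state machine. This is a hypothetical rendering, not the authors' exact update rule: classic ARF counts consecutive successes/failures, and the adaptive twist here raises the rate-up threshold when a failure looks collision-dominated (e.g. RTS/CTS succeeded but the data frame failed) so that collisions do not trigger spurious rate drops:

```python
class AdaptiveARF:
    """Sketch of auto-rate fallback with an adaptive rate-up threshold.
    All names and the threshold update rule are illustrative."""

    def __init__(self, rates, up_threshold=10, down_threshold=2):
        self.rates = rates            # e.g. [1, 2, 5.5, 11] Mb/s for 802.11b
        self.idx = 0                  # start at the most robust (lowest) rate
        self.up_th = up_threshold     # consecutive successes needed to rate up
        self.down_th = down_threshold # consecutive failures needed to rate down
        self.successes = 0
        self.failures = 0

    def on_result(self, ok, collision_suspected=False):
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.up_th and self.idx < len(self.rates) - 1:
                self.idx += 1
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if collision_suspected:
                # Collision, not a channel error: hold the rate and become
                # slower to rate up instead of dropping the rate
                self.up_th = min(self.up_th + 1, 50)
                return
            if self.failures >= self.down_th and self.idx > 0:
                self.idx -= 1
                self.failures = 0

    @property
    def rate(self):
        return self.rates[self.idx]

# Usage: ten successes step the rate up; a collision-suspected failure
# raises the threshold instead of dropping the rate; a channel-error
# failure then completes the down-threshold count and drops the rate
arf = AdaptiveARF([1, 2, 5.5, 11])
for _ in range(10):
    arf.on_result(True)
rate_after_successes = arf.rate
arf.on_result(False, collision_suspected=True)
rate_after_collision = arf.rate
arf.on_result(False)
rate_after_error = arf.rate
```

Distinguishing collision losses from channel losses is the crux; the paper infers the collision level from network conditions, whereas this sketch simply takes a boolean hint.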
NASA Astrophysics Data System (ADS)
Li, Ke Sherry; Chu, Phillip Y.; Fourie-O'Donohue, Aimee; Srikumar, Neha; Kozak, Katherine R.; Liu, Yichin; Tran, John C.
2018-05-01
Antibody-drug conjugates (ADCs) present unique challenges for ligand-binding assays primarily due to the dynamic changes of the drug-to-antibody ratio (DAR) distribution in vivo and in vitro. Here, an automated on-tip affinity capture platform with subsequent mass spectrometry analysis was developed to accurately characterize the DAR distribution of ADCs from biological matrices. A variety of elution buffers were tested to offer optimal recovery, with trastuzumab serving as a surrogate to the ADCs. High assay repeatability (CV 3%) was achieved for trastuzumab antibody when captured below the maximal binding capacity of 7.5 μg. Efficient on-tip deglycosylation was also demonstrated in 1 h followed by affinity capture. Moreover, this tip-based platform affords higher throughput for DAR characterization when compared with a well-characterized bead-based method.
Solid-phase assays for small molecule screening using sol-gel entrapped proteins.
Lebert, Julie M; Forsberg, Erica M; Brennan, John D
2008-04-01
With compound libraries exceeding one million compounds, the ability to quickly and effectively screen these compounds against relevant pharmaceutical targets has become crucial. Solid-phase assays present several advantages over solution-based methods. For example, a higher degree of miniaturization can be achieved, functional- and affinity-based studies are possible, and a variety of detection methods can be used. Unfortunately, most protein immobilization methods are either too harsh or require recombinant proteins and thus are not amenable to delicate proteins such as kinases and membrane-bound receptors. Sol-gel encapsulation of proteins in an inorganic silica matrix has emerged as a novel solid-phase assay platform. In this minireview, we discuss the development of sol-gel derived protein microarrays and sol-gel based monolithic bioaffinity columns for the high-throughput screening of small molecule libraries and mixtures.
Sakai, Kenichi; Obata, Kouki; Yoshikawa, Mayumi; Takano, Ryusuke; Shibata, Masaki; Maeda, Hiroyuki; Mizutani, Akihiko; Terada, Katsuhide
2012-10-01
To design a high drug-loading formulation of a self-microemulsifying/micelle system. A poorly soluble model drug (CH5137291), 8 hydrophilic surfactants (HS), 10 lipophilic surfactants (LS), 5 oils, and PEG400 were used. A high-loading formulation was designed by the following stepwise approach using a high-throughput formulation screening (HTFS) system: (1) an oil/solvent was selected based on the solubility of the drug; (2) an HS suitable for high loading was selected by screening emulsion/micelle size and phase stability in binary systems (HS, oil/solvent) at increasing loading levels; (3) an LS that formed a broad SMEDDS/micelle area on a phase diagram containing the HS and oil/solvent was selected by the same screenings; (4) an optimized formulation was selected by evaluating the loading capacity of the crystalline drug. The aqueous solubility behavior and oral absorption (Beagle dog) of the optimized formulation were compared with conventional formulations (jet-milled, PEG400). As the optimized formulation, d-α-tocopheryl polyoxyethylene 1000 succinic ester:PEG400 = 8:2 was selected, which achieved the target loading level (200 mg/mL). The formulation formed a fine emulsion/micelle (49.1 nm), and generated and maintained a supersaturated state at a higher level than the conventional formulations. In the oral absorption test, the area under the plasma concentration-time curve of the optimized formulation was 16.5-fold higher than that of the jet-milled formulation. The high-loading formulation designed by the stepwise approach using the HTFS system improved the oral absorption of the poorly soluble model drug.
Nanoimprint system development and status for high volume semiconductor manufacturing
NASA Astrophysics Data System (ADS)
Hiura, Hiromi; Takabayashi, Yukio; Takashima, Tsuneo; Emoto, Keiji; Choi, Jin; Schumaker, Phil
2016-10-01
Imprint lithography has been shown to be an effective technique for replication of nano-scale features. Jet and Flash Imprint Lithography* (J-FIL*) involves the field-by-field deposition and exposure of a low viscosity resist deposited by jetting technology onto the substrate. The patterned mask is lowered into the fluid, which then quickly flows into the relief patterns in the mask by capillary action. Following this filling step, the resist is crosslinked under UV radiation, and then the mask is removed, leaving a patterned resist on the substrate. There are many criteria that determine whether a particular technology is ready for wafer manufacturing. For imprint lithography, recent attention has been given to the areas of overlay, throughput, defectivity, and mask replication. This paper reviews progress in these critical areas. Recent demonstrations have proven that mix-and-match overlay of less than 5nm can be achieved. Further reductions require a higher order correction system. Modeling and experimental data are presented which provide a path towards reducing the overlay errors to less than 3nm. Throughput is mainly impacted by the fill time of the relief images on the mask. Improvement in resist materials provides a solution that allows 15 wafers per hour per station, or a tool throughput of 60 wafers per hour. Defectivity and mask life play a significant role relative to meeting the cost of ownership (CoO) requirements in the production of semiconductor devices. Hard particles on a wafer or mask create the possibility of inducing a permanent defect on the mask that can impact device yield and mask life. By using material methods to reduce particle shedding and by introducing an air curtain system, the lifetime of both the master mask and the replica mask can be extended. In this work, we report results that demonstrate a path towards achieving mask lifetimes of better than 1000 wafers.
Finally, on the mask side, a new replication tool, the FPA-1100NR2 is introduced. Mask replication is required for nanoimprint lithography (NIL), and criteria that are crucial to the success of a replication platform include both particle control and IP accuracy. In particular, by improving the specifications on the mask chuck, residual errors of only 1nm can be realized.
ECONOMICS OF SAMPLE COMPOSITING AS A SCREENING TOOL IN GROUND WATER QUALITY MONITORING
Recent advances in high throughput/automated compositing with robotics/field-screening methods offer seldom-tapped opportunities for achieving cost reduction in ground water quality monitoring programs. An economic framework is presented in this paper for the evaluation of sample ...
Possibilities for serial femtosecond crystallography sample delivery at future light sources
Chavas, L. M. G.; Gumprecht, L.; Chapman, H. N.
2015-01-01
Serial femtosecond crystallography (SFX) uses X-ray pulses from free-electron laser (FEL) sources that can outrun radiation damage and thereby overcome long-standing limits in the structure determination of macromolecular crystals. Intense X-ray FEL pulses of sufficiently short duration allow the collection of damage-free data at room temperature and give the opportunity to study irreversible time-resolved events. SFX may open the way to determine the structure of biological molecules that fail to crystallize readily into large well-diffracting crystals. Taking advantage of FELs with high pulse repetition rates could lead to short measurement times of just minutes. Automated delivery of sample suspensions for SFX experiments could potentially give rise to a much higher rate of obtaining complete measurements than at today's third generation synchrotron radiation facilities, as no crystal alignment or complex robotic motions are required. This capability will also open up extensive time-resolved structural studies. New challenges arise from the resulting high rate of data collection, and in providing reliable sample delivery. Various developments for fully automated high-throughput SFX experiments are being considered for evaluation, including new implementations for a reliable yet flexible sample environment setup. Here, we review the different methods developed so far that best achieve sample delivery for X-ray FEL experiments and present some considerations towards the goal of high-throughput structure determination with X-ray FELs. PMID:26798808
Pulsed laser activated cell sorter (PLACS) for high-throughput fluorescent mammalian cell sorting
NASA Astrophysics Data System (ADS)
Chen, Yue; Wu, Ting-Hsiang; Chung, Aram; Kung, Yu-Chung; Teitell, Michael A.; Di Carlo, Dino; Chiou, Pei-Yu
2014-09-01
We present a Pulsed Laser Activated Cell Sorter (PLACS) that excites laser-induced cavitation bubbles in a PDMS microfluidic channel to create high-speed liquid jets that deflect detected fluorescent samples for high-speed sorting. Pulsed-laser-triggered cavitation bubbles can expand within a few microseconds and provide pressures higher than tens of MPa for fluid perturbation near the focused spot. This ultrafast switching mechanism has a complete on-off cycle of less than 20 μsec. Two approaches have been utilized to achieve 3D sample focusing in PLACS. The first relies on multilayer PDMS channels to provide 3D hydrodynamic sheath flows. It offers accurate timing control of fast (2 m s-1) passing particles so that synchronization with laser bubble excitation is possible, a critically important factor for high-purity and high-throughput sorting. PLACS with 3D hydrodynamic focusing is capable of sorting at 11,000 cells/sec with >95% purity, and at 45,000 cells/sec with 45% purity, using a single channel in a single step. We have also demonstrated 3D focusing using inertial flows in PLACS. This sheathless focusing approach requires a 10 times lower initial cell concentration than sheath-based focusing and avoids severe sample dilution from high-volume sheath flows. Inertial PLACS is capable of sorting at 10,000 particles sec-1 with >90% sort purity.
Collision avoidance in TV white spaces: a cross-layer design approach for cognitive radio networks
NASA Astrophysics Data System (ADS)
Foukalas, Fotis; Karetsos, George T.
2015-07-01
One of the most promising applications of cognitive radio networks (CRNs) is the efficient exploitation of TV white spaces (TVWSs) for enhancing the performance of wireless networks. In this paper, we propose a cross-layer design (CLD) of carrier sense multiple access with collision avoidance (CSMA/CA) mechanism at the medium access control (MAC) layer with spectrum sensing (SpSe) at the physical layer, for identifying the occupancy status of TV bands. The proposed CLD relies on a Markov chain model with a state pair containing both the SpSe and the CSMA/CA from which we derive the collision probability and the achievable throughput. Analytical and simulation results are obtained for different collision avoidance and SpSe implementation scenarios by varying the contention window, back off stage and probability of detection. The obtained results depict the achievable throughput under different collision avoidance and SpSe implementation scenarios indicating thereby the performance of collision avoidance in TVWSs-based CRNs.
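The collision-probability analysis above builds on the classic Bianchi-style Markov chain for CSMA/CA. A minimal sketch of the underlying fixed-point computation (without the spectrum-sensing states the paper adds to the state pair) might look like:

```python
# Bianchi-style fixed point for n contending CSMA/CA nodes.
# W is the minimum contention window, m the maximum backoff stage.
# This is a generic textbook sketch, not the paper's cross-layer model.

def collision_probability(n, W=32, m=5, iters=500):
    p = 0.1  # initial guess for the conditional collision probability
    for _ in range(iters):
        # Per-slot transmission probability of a node, given collision prob p.
        tau = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        # A transmission collides if any of the other n-1 nodes transmits.
        p_new = 1 - (1 - tau) ** (n - 1)
        p = 0.5 * p + 0.5 * p_new  # damped update for stable convergence
    return p, tau
```

As expected from the model, the collision probability grows with the number of contending nodes, which is what drives the throughput curves in analyses of this kind.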
NASA Astrophysics Data System (ADS)
Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang
This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built through instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. With a design space exploration of this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit speed up the calculation of the key section of the algorithm, namely the G-functions. Furthermore, relaxing the timing constraint of the special function unit decreases its hardware cost while keeping the high data throughput of the processor. Evaluation results reveal that the ASIP achieves 335Mbps and 176Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
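The G-function mentioned above is the core add-xor-rotate transformation of BLAKE's round. For reference, a plain-Python version of the BLAKE-256 G-function structure (with the message/constant XOR inputs passed in as two precomputed words, since the round-dependent permutation tables are omitted here) is:

```python
# BLAKE-256 G-function on four 32-bit state words.
# mx and my stand for the schedule-selected inputs m[sigma] ^ c[sigma];
# the sigma permutation and constant table are outside this sketch.

MASK = 0xFFFFFFFF

def rotr32(x, n):
    # 32-bit rotate right
    return ((x >> n) | (x << (32 - n))) & MASK

def G(a, b, c, d, mx, my):
    a = (a + b + mx) & MASK
    d = rotr32(d ^ a, 16)
    c = (c + d) & MASK
    b = rotr32(b ^ c, 12)
    a = (a + b + my) & MASK
    d = rotr32(d ^ a, 8)
    c = (c + d) & MASK
    b = rotr32(b ^ c, 7)
    return a, b, c, d
```

The two add-xor-rotate halves with rotation amounts 16, 12, 8, and 7 are exactly the dependency chain a hardware function unit for this ASIP would accelerate.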
ACTS High-Speed VSAT Demonstrated
NASA Technical Reports Server (NTRS)
Tran, Quang K.
1999-01-01
The Advanced Communication Technology Satellite (ACTS) developed by NASA has demonstrated the breakthrough technologies of Ka-band transmission, spot-beam antennas, and onboard processing. These technologies have enabled the development of very small and ultrasmall aperture terminals (VSATs and USATs), which have capabilities greater than were possible with conventional satellite technologies. The ACTS High Speed VSAT (HS VSAT) is an effort at the NASA Glenn Research Center at Lewis Field to experimentally demonstrate the maximum user throughput data rate that can be achieved using the technologies developed and implemented on ACTS. This was done by operating the system uplinks as frequency division multiple access (FDMA), essentially assigning all available time division multiple access (TDMA) time slots to a single user on each of two uplink frequencies. Preliminary results show that, using a 1.2-m antenna in this mode, the High Speed VSAT can achieve between 22 and 24 Mbps of the 27.5 Mbps burst rate, for a throughput efficiency of 80 to 88 percent.
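The quoted 80 to 88 percent efficiency range follows directly from the burst rate; a trivial check:

```python
# Throughput efficiency = achieved user throughput / channel burst rate.

def efficiency(user_throughput_mbps, burst_rate_mbps=27.5):
    return user_throughput_mbps / burst_rate_mbps

low = efficiency(22.0)   # 0.80
high = efficiency(24.0)  # ~0.873
```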
Nanophotonic trapping for precise manipulation of biomolecular arrays.
Soltani, Mohammad; Lin, Jun; Forties, Robert A; Inman, James T; Saraf, Summer N; Fulbright, Robert M; Lipson, Michal; Wang, Michelle D
2014-06-01
Optical trapping is a powerful manipulation and measurement technique widely used in the biological and materials sciences. Miniaturizing optical trap instruments onto optofluidic platforms holds promise for high-throughput lab-on-a-chip applications. However, a persistent challenge with existing optofluidic devices has been achieving controlled and precise manipulation of trapped particles. Here, we report a new class of on-chip optical trapping devices. Using photonic interference functionalities, an array of stable, three-dimensional on-chip optical traps is formed at the antinodes of a standing-wave evanescent field on a nanophotonic waveguide. By employing the thermo-optic effect via integrated electric microheaters, the traps can be repositioned at high speed (∼30 kHz) with nanometre precision. We demonstrate sorting and manipulation of individual DNA molecules. In conjunction with laminar flows and fluorescence, we also show precise control of the chemical environment of a sample with simultaneous monitoring. Such a controllable trapping device has the potential to achieve high-throughput precision measurements on chip.
Diagnostic Applications of Next Generation Sequencing in Immunogenetics and Molecular Oncology
Grumbt, Barbara; Eck, Sebastian H.; Hinrichsen, Tanja; Hirv, Kaimo
2013-01-01
With the introduction of next generation sequencing (NGS) technologies, remarkable new diagnostic applications have been established in daily routine. Implementing NGS in clinical diagnostics is challenging, but definite advantages and new diagnostic possibilities make the switch to the technology inevitable. In addition to the higher sequencing capacity, clonal sequencing of single molecules, multiplexing of samples, higher diagnostic sensitivity, workflow miniaturization, and cost benefits are some of the valuable features of the technology. After recent advances, NGS has emerged as a proven alternative to classical Sanger sequencing for the typing of human leukocyte antigens (HLA). By virtue of the clonal amplification of single DNA molecules, ambiguous typing results can be avoided. At the same time, a higher sample throughput can be achieved by tagging DNA molecules with multiplex identifiers and pooling PCR products before sequencing. In our experience, up to 380 samples can be typed for HLA-A, -B, and -DRB1 at high resolution in every sequencing run. In molecular oncology, NGS shows markedly increased sensitivity compared with conventional Sanger sequencing and is developing into a standard diagnostic tool for detecting somatic mutations in cancer cells, with great impact on the personalized treatment of patients. PMID:23922545
NASA Astrophysics Data System (ADS)
Jia, Yan; Sun, He-yun; Tan, Qiao-yi; Gao, Hong-shan; Feng, Xing-liang; Ruan, Ren-man
2018-03-01
The effects of temperature on chalcocite/pyrite oxidation and on the microbial population in bioleaching columns of a low-grade chalcocite ore were investigated in this study. Raffinate from an industrial bioleaching heap was used as the irrigation solution for columns operated at 20, 30, 45, and 60°C. The dissolution of copper and iron was followed during the bioleaching process, and the microbial community was characterized using a high-throughput sequencing method. The genera Ferroplasma, Acidithiobacillus, Leptospirillum, Acidiplasma, and Sulfobacillus dominated the microbial community, and the higher-temperature columns favored the growth of moderate thermophiles. Even though microbial abundance and activity were highest at 30°C, the columns at higher temperatures achieved much higher Cu leaching efficiency and recovery, which suggests that the promotion of chemical oxidation by elevated temperature dominated the dissolution of Cu. The highest pyrite oxidation percentage was observed at 45°C. Higher temperatures also resulted in the precipitation of jarosite in the columns, especially at 60°C. These results have implications for optimizing the heap bioleaching of secondary copper sulfide, in terms of both enhanced chalcocite leaching and acid/iron balance, from the perspective of leaching temperature and its effect on the microbial community and activity.
Analog Correlator Based on One Bit Digital Correlator
NASA Technical Reports Server (NTRS)
Prokop, Norman (Inventor); Krasowski, Michael (Inventor)
2017-01-01
A two input time domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them. The resulting Hamming distance approximates the correlation time delay with high accuracy.
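The hard-limit-then-Hamming idea can be sketched as follows. The adaptive-thresholding rule (comparing against the stream mean) and the function names are our illustrative assumptions, not the patented design:

```python
# Sketch of one-bit correlation: hard-limit two analog streams, then find
# the lag that minimizes the Hamming distance between the bit streams.

def hard_limit(samples):
    # Adaptive threshold: compare each sample against the stream mean.
    mean = sum(samples) / len(samples)
    return [1 if s > mean else 0 for s in samples]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def estimate_delay(x, y, max_lag):
    bx, by = hard_limit(x), hard_limit(y)
    best_lag, best_dist = 0, float("inf")
    for lag in range(max_lag + 1):
        d = hamming(bx[: len(bx) - lag], by[lag:])
        if d < best_dist:
            best_lag, best_dist = lag, d
    return best_lag

# demo: a sine wave and a copy of it delayed by 5 samples
import math
x = [math.sin(0.3 * i) for i in range(200)]
y = [0.0] * 5 + x[:-5]
lag = estimate_delay(x, y, max_lag=15)  # expected: 5
```

The Hamming distance reaches its minimum at the true lag, which is the sense in which it "approximates the correlation time delay" with only one-bit arithmetic.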
Optimizing multi-dimensional high throughput screening using zebrafish.
Truong, Lisa; Bugel, Sean M; Chlebowski, Anna; Usenko, Crystal Y; Simonich, Michael T; Simonich, Staci L Massey; Tanguay, Robert L
2016-10-01
The use of zebrafish for high throughput screening (HTS) for chemical bioactivity assessments is becoming routine in the fields of drug discovery and toxicology. Here we report current recommendations from our experiences in zebrafish HTS. We compared the effects of different high throughput chemical delivery methods on nominal water concentration, chemical sorption to multi-well polystyrene plates, transcription responses, and resulting whole animal responses. We demonstrate that digital dispensing consistently yields higher data quality and reproducibility compared to standard plastic tip-based liquid handling. Additionally, we illustrate the challenges in using this sensitive model for chemical assessment when test chemicals have trace impurities. Adaptation of these better practices for zebrafish HTS should increase reproducibility across laboratories.
Forward genetics by sequencing EMS variation-induced inbred lines
USDA-ARS?s Scientific Manuscript database
The dramatic increase in throughput of sequencing techniques enables gene cloning through pre-existing forward genetics approaches. We show that it also brings with it the potential to change the crossing designs and approach of forward genetics. To achieve this for eukaryotic organisms with complex...
Reverse Toxicokinetics: From In Vitro Concentration to In Vivo Dose
This talk provided an update to an international audience about the state of the science to relate results from high-throughput bioactivity screening efforts out to an external exposure that would be required to achieve blood concentrations at which these bioactivities may be obs...
Park, Chanhun; Nam, Hee-Geun; Kim, Pung-Ho; Mun, Sungyong
2014-06-01
The removal of isoleucine from valine has been a key issue in the stage of valine crystallization, which is the final step in the industrial valine production process. To address this issue, a three-zone simulated moving-bed (SMB) process for the separation of valine and isoleucine was developed previously. However, the previous process, which was based on a classical port-location mode, had some limitations in throughput and valine product concentration. In this study, a three-zone SMB process based on a modified port-location mode was applied to the separation of valine and isoleucine with the aim of markedly improving throughput and valine product concentration. Computer simulations and a lab-scale process experiment showed that the modified three-zone SMB for valine separation achieved >65% higher throughput and >160% higher valine concentration than the previous three-zone SMB for the same separation.
NASA Astrophysics Data System (ADS)
Kim, Seunggyu; Lee, Seokhun; Jeon, Jessie S.
2017-11-01
To determine the most effective antimicrobial treatment for an infectious pathogen, high-throughput antibiotic susceptibility testing (AST) is critically required. However, conventional AST requires at least 16 hours for the bacteria to reach the minimum observable population. We therefore developed a microfluidic system that maintains a linear antibiotic concentration gradient and measures local bacterial density. Based on the Stokes-Einstein equation, the flow rate in the microchannel was optimized so that linearization was achieved within 10 minutes, taking into account the diffusion coefficient of each antibiotic in the agar gel. As a result, the minimum inhibitory concentration (MIC) of each antibiotic against P. aeruginosa could be determined just 6 hours after treatment with the linear antibiotic concentration gradient. In conclusion, our system demonstrated the efficacy of a high-throughput AST platform through MIC comparison with the Clinical and Laboratory Standards Institute (CLSI) ranges for the antibiotics. This work was supported by the Climate Change Research Hub (Grant No. N11170060) of the KAIST and by the Brain Korea 21 Plus project.
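The Stokes-Einstein relation invoked above gives the diffusion coefficient D = k_B·T / (6·π·η·r) for a solute of hydrodynamic radius r in a medium of viscosity η. A quick estimate with generic values (not the paper's actual parameters):

```python
# Stokes-Einstein diffusion coefficient; all example values below are
# generic textbook assumptions (water at 25 C, ~0.5 nm solute radius).

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein(T, eta, r):
    """T in K, eta in Pa*s, r in m -> D in m^2/s."""
    return K_B * T / (6 * math.pi * eta * r)

D = stokes_einstein(T=298.15, eta=8.9e-4, r=0.5e-9)  # on the order of 5e-10 m^2/s
```

Estimates of this order for small antibiotic molecules are what set the flow rate needed to establish the gradient within minutes.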
Mathematical and Computational Modeling in Complex Biological Systems
Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang
2017-01-01
The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies push systems biology toward more precise models of complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first survey several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558
Extension of analog network coding in wireless information exchange
NASA Astrophysics Data System (ADS)
Chen, Cheng; Huang, Jiaqing
2012-01-01
Ever since the concept of analog network coding (ANC) was put forward by S. Katti, much attention has focused on how to use analog network coding to exploit wireless interference, traditionally considered harmful, to improve throughput performance. Previously, only the case of two nodes exchanging information had been fully discussed, while extending analog network coding to three or more nodes remained undeveloped. In this paper, we propose a practical transmission scheme that extends analog network coding to more than two nodes exchanging information among themselves. We start with the case of three nodes exchanging information and demonstrate that, with our algorithm, throughput increases by 33% and 20% compared with traditional transmission scheduling and digital network coding, respectively. We then generalize the algorithm so that it accommodates any number of nodes. We also discuss some technical issues, throughput analysis, and the bit error rate.
Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data
Ching, Travers; Zhu, Xun
2018-01-01
Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework, Cox-nnet, to predict patient prognosis from high-throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox proportional hazards regression (with LASSO, ridge, and minimax concave penalties), Random Survival Forests, and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer nodes provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high-throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet. PMID:29634719
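The objective a Cox-nnet-style network minimizes is the negative Cox partial log-likelihood evaluated on the network's output risk scores. A minimal pure-Python sketch of that loss (ties handled by including all subjects still at risk; this is our illustration, not the package's code):

```python
# Negative Cox partial log-likelihood over predicted log-risk scores.
# Assumes at least one observed event.

import math

def neg_cox_partial_log_likelihood(scores, times, events):
    """scores: predicted log-risk per patient; times: follow-up times;
    events: 1 if the event was observed, 0 if censored."""
    loss = 0.0
    n_events = 0
    for i, (t_i, e_i) in enumerate(zip(times, events)):
        if not e_i:
            continue  # censored subjects contribute only through risk sets
        # Risk set: everyone still under observation at time t_i.
        risk = [math.exp(s) for s, t in zip(scores, times) if t >= t_i]
        loss -= scores[i] - math.log(sum(risk))
        n_events += 1
    return loss / n_events
```

A model that assigns higher risk scores to patients with earlier events attains a lower loss, which is exactly the ordering property a prognosis predictor needs.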
High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving
NASA Astrophysics Data System (ADS)
Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.
This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.
A Comparison of Variant Calling Pipelines Using Genome in a Bottle as a Reference
2015-01-01
High-throughput sequencing, especially of exomes, is a popular diagnostic tool, but it is difficult to determine which tools are best for analyzing these data. In this study, we use the NIST Genome in a Bottle results as a novel resource for validation of our exome analysis pipeline. We use six different aligners and five different variant callers to determine which of the 30 resulting pipelines performs best on a human exome that was used to help generate the list of variants detected by the Genome in a Bottle Consortium. Of these 30 pipelines, we found that Novoalign in conjunction with GATK UnifiedGenotyper exhibited the highest sensitivity while maintaining a low number of false positives for SNVs. However, indels remain difficult for any pipeline to handle, with none of the tools achieving an average sensitivity higher than 33% or a positive predictive value (PPV) higher than 53%. Lastly, as expected, aligners can play as vital a role in variant detection as the variant callers themselves. PMID:26539496
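Sensitivity and PPV as used above are the standard confusion-matrix quantities over a truth set; a minimal helper for comparing a pipeline's calls against a reference such as Genome in a Bottle:

```python
# Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP),
# computed over sets of variant identifiers.

def sensitivity_ppv(called, truth):
    called, truth = set(called), set(truth)
    tp = len(called & truth)   # true positives: called and in the truth set
    fp = len(called - truth)   # false positives: called but not in truth
    fn = len(truth - called)   # false negatives: in truth but missed
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv
```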
NASA Technical Reports Server (NTRS)
Nikzad, Shouleh; Hoenk, M. E.; Carver, A. G.; Jones, T. J.; Greer, F.; Hamden, E.; Goodsall, T.
2013-01-01
In this paper we discuss the high-throughput, end-to-end post-fabrication processing of high-performance delta-doped and superlattice-doped silicon imagers for UV, visible, and NIR applications. As an example, we present our results on far-ultraviolet and ultraviolet quantum efficiency (QE) in a photon-counting detector array. We have improved the QE by nearly an order of magnitude over microchannel plates (MCPs), which are the state-of-the-art UV detectors for many NASA space missions as well as defense applications. These achievements are made possible by precision interface band engineering using Molecular Beam Epitaxy (MBE) and Atomic Layer Deposition (ALD).
Gene cassette knock-in in mammalian cells and zygotes by enhanced MMEJ.
Aida, Tomomi; Nakade, Shota; Sakuma, Tetsushi; Izu, Yayoi; Oishi, Ayu; Mochida, Keiji; Ishikubo, Harumi; Usami, Takako; Aizawa, Hidenori; Yamamoto, Takashi; Tanaka, Kohichi
2016-11-28
Although CRISPR/Cas enables one-step gene cassette knock-in, assembling targeting vectors containing long homology arms is a laborious process for high-throughput knock-in. We recently developed the CRISPR/Cas-based precise integration into the target chromosome (PITCh) system for a gene cassette knock-in without long homology arms mediated by microhomology-mediated end-joining. Here, we identified exonuclease 1 (Exo1) as an enhancer for PITCh in human cells. By combining the Exo1 and PITCh-directed donor vectors, we achieved convenient one-step knock-in of gene cassettes and floxed allele both in human cells and mouse zygotes. Our results provide a technical platform for high-throughput knock-in.
Medium Access Control for Opportunistic Concurrent Transmissions under Shadowing Channels
Son, In Keun; Mao, Shiwen; Hur, Seung Min
2009-01-01
We study the problem of how to alleviate the exposed terminal effect in multi-hop wireless networks in the presence of log-normal shadowing channels. Assuming node location information, we propose an extension of the IEEE 802.11 MAC protocol that schedules concurrent transmissions in the presence of log-normal shadowing, thus mitigating the exposed terminal problem and improving network throughput and delay performance. We observe considerable improvements in throughput and delay achieved over the IEEE 802.11 MAC under various network topologies and channel conditions in ns-2 simulations, which justify the importance of considering channel randomness in MAC protocol design for multi-hop wireless networks. PMID:22408556
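The log-normal shadowing model underlying this MAC design adds a zero-mean Gaussian term (in dB) to the deterministic log-distance path loss; a minimal sketch with generic textbook parameter values (not the paper's):

```python
# Log-distance path loss plus log-normal shadowing.
# d0: reference distance; pl_d0_db: path loss at d0; sigma_db: shadowing std.

import math
import random

def received_power_dbm(tx_dbm, d, d0=1.0, path_loss_exp=3.0,
                       pl_d0_db=40.0, sigma_db=6.0):
    pl_db = pl_d0_db + 10 * path_loss_exp * math.log10(d / d0)
    shadowing_db = random.gauss(0.0, sigma_db)  # channel randomness
    return tx_dbm - pl_db + shadowing_db
```

It is exactly this random term that makes deterministic carrier-sense decisions unreliable and motivates scheduling concurrent transmissions statistically rather than by fixed range.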
Plasma Enhanced Growth of Carbon Nanotubes For Ultrasensitive Biosensors
NASA Technical Reports Server (NTRS)
Cassell, Alan M.; Li, J.; Ye, Q.; Koehne, J.; Chen, H.; Meyyappan, M.
2004-01-01
The multitude of considerations facing nanostructure growth and integration lends itself to combinatorial optimization approaches. Rapid optimization becomes even more important with wafer-scale growth and integration processes. Here we discuss methodology for developing plasma enhanced CVD growth techniques for achieving individual, vertically aligned carbon nanostructures that show excellent properties as ultrasensitive electrodes for nucleic acid detection. We utilize high throughput strategies for optimizing the upstream and downstream processing and integration of carbon nanotube electrodes as functional elements in various device types. An overview of ultrasensitive carbon nanotube based sensor arrays for electrochemical biosensing applications and the high throughput methodology utilized to combine novel electrode technology with conventional MEMS processing will be presented.
Total Quality Training: The Quality Culture and Quality Trainer.
ERIC Educational Resources Information Center
Thomas, Brian
This book examines the application of total quality management (TQM) principles to training and development. It contains 10 chapters on the following topics: the quality revolution (the nature of and rationale for quality); major barriers to achieving quality (supplier-led approaches, problems with customers, throughput orientation, variable…
NASA Astrophysics Data System (ADS)
Loisel, G.; Lake, P.; Gard, P.; Dunham, G.; Nielsen-Weber, L.; Wu, M.; Norris, E.
2016-11-01
At Sandia National Laboratories, the x-ray generator Manson source model 5 was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a two-pinhole camera to measure the source size and an energy-dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and line intensity is increased for all three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Chisholm, Bret J.; Olson, Daniel R.; Brennan, Michael J.; Molaison, Chris A.
2002-02-01
Design, validation, and implementation of an optical spectroscopic system for high-throughput analysis of combinatorially developed protective organic coatings are reported. Our approach replaces labor-intensive coating evaluation steps with an automated system that rapidly analyzes 8x6 arrays of coating elements deposited on a plastic substrate. Each coating element of the library is 10 mm in diameter and 2 to 5 micrometers thick. Performance of coatings is evaluated with respect to their resistance to wear abrasion because this parameter is one of the primary considerations in end-use applications. Upon testing, the organic coatings undergo changes that are impossible to quantitatively predict using existing knowledge. Coatings are abraded using industry-accepted abrasion test methods at single- or multiple-abrasion conditions, followed by high-throughput analysis of abrasion-induced light scatter. The developed automated system is optimized for the analysis of diffusively scattered light that corresponds to 0 to 30% haze. System precision of 0.1 to 2.5% relative standard deviation provides capability for the reliable ranking of coating performance. While the system was implemented for high-throughput screening of combinatorially developed organic protective coatings for automotive applications, it can be applied to a variety of other applications where materials ranking can be achieved using optical spectroscopic tools.
Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter
2015-01-01
Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
Machine learning in computational biology to accelerate high-throughput protein expression.
Sastry, Anand; Monk, Jonathan; Tegel, Hanna; Uhlen, Mathias; Palsson, Bernhard O; Rockberg, Johan; Brunk, Elizabeth
2017-08-15
The Human Protein Atlas (HPA) enables the simultaneous characterization of thousands of proteins across various tissues to pinpoint their spatial location in the human body. This has been achieved through transcriptomics and high-throughput immunohistochemistry-based approaches, where over 40 000 unique human protein fragments have been expressed in E. coli. These datasets enable quantitative tracking of entire cellular proteomes and present new avenues for understanding molecular-level properties influencing expression and solubility. Combining computational biology and machine learning identifies protein properties that hinder the HPA high-throughput antibody production pipeline. We predict protein expression and solubility with accuracies of 70% and 80%, respectively, based on a subset of key properties (aromaticity, hydropathy and isoelectric point). We guide the selection of protein fragments based on these characteristics to optimize high-throughput experimentation. We present the machine learning workflow as a series of IPython notebooks hosted on GitHub (https://github.com/SBRG/Protein_ML). The workflow can be used as a template for analysis of further expression and solubility datasets. ebrunk@ucsd.edu or johanr@biotech.kth.se. Supplementary data are available at Bioinformatics online.
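Two of the sequence features named above, aromaticity and hydropathy, can be computed directly from a protein sequence. A minimal sketch using the standard Kyte-Doolittle scale; the example sequence is arbitrary and this feeds no actual classifier:

```python
# Kyte-Doolittle hydropathy scale (standard published values).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def aromaticity(seq):
    """Fraction of aromatic residues (Phe, Trp, Tyr) in the sequence."""
    return sum(seq.count(a) for a in 'FWY') / len(seq)

def gravy(seq):
    """Grand average of hydropathy: mean Kyte-Doolittle score per residue."""
    return sum(KD[a] for a in seq) / len(seq)

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example fragment
features = {'aromaticity': aromaticity(seq), 'gravy': gravy(seq)}
```

In a workflow like the one described, feature vectors of this kind (plus isoelectric point) would be fed to a trained model that scores each candidate fragment for expected expression and solubility before it enters the pipeline.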
NASA Astrophysics Data System (ADS)
Ahmad, Afandi; Roslan, Muhammad Faris; Amira, Abbes
2017-09-01
In the high jump, approach speed and force during the take-off are the two main parameters for achieving a maximum jump. To measure both parameters, a wireless sensor network (WSN) containing a microcontroller and sensors is needed to report speed and force for jumpers. Most microcontrollers exhibit transmission issues in terms of throughput, latency and cost. This study therefore compares wireless microcontrollers in terms of throughput, latency and cost; the microcontroller with the best performance and price is then implemented in a wearable high jump device. The experiments integrated three parts: input, process and output. A force sensor (on the ankle) and a global positioning system (GPS) sensor (on the body waist) act as inputs for data transmission. These data were then processed by two microcontrollers, the ESP8266 and the Arduino Yun Mini, which transmitted the sensor data to a server (host PC) via the message queuing telemetry transport (MQTT) protocol. The server acts as receiver, and the results were calculated from the MQTT log files. In the end, the ESP8266 microcontroller was chosen, since it achieved higher throughput and lower latency and is about 11 times cheaper than the Arduino Yun Mini microcontroller.
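Deriving throughput and latency from timestamped log entries, as described, reduces to simple arithmetic. A sketch under assumed inputs: the (send, receive, size) tuple format is a hypothetical stand-in for parsed MQTT log records, not the study's actual log format.

```python
def throughput_and_latency(records):
    """records: list of (send_time_s, recv_time_s, payload_bytes) tuples,
    e.g. parsed from broker log files. Returns (bytes/s, mean latency in s)."""
    if not records:
        return 0.0, 0.0
    total_bytes = sum(size for _, _, size in records)
    span = max(r for _, r, _ in records) - min(s for s, _, _ in records)
    throughput = total_bytes / span if span > 0 else float('inf')
    latency = sum(r - s for s, r, _ in records) / len(records)
    return throughput, latency

# Three hypothetical 128-byte sensor messages.
records = [(0.00, 0.05, 128), (0.10, 0.16, 128), (0.20, 0.27, 128)]
tput, lat = throughput_and_latency(records)
```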
Optimizing Crawler4j using MapReduce Programming Model
NASA Astrophysics Data System (ADS)
Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.
2017-06-01
The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where the web pages are indexed to form a corpus of information, allowing users to query the web pages. Secondly, they are used for web archiving, where the web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where the web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be improved by using the capabilities of modern parallel processing technologies. In order to solve the problem of parallelism and the throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput. Hence the proposed approach intends to carve out a new methodology towards optimizing web crawling by achieving significant performance gains.
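The MapReduce decomposition of crawl post-processing can be shown in miniature. This pure-Python map and reduce mirrors what a Hadoop job would do at scale; the page data and the per-domain link count task are invented for illustration and are not from the paper.

```python
from collections import defaultdict
from urllib.parse import urlparse

def map_phase(pages):
    """Map: emit a (domain, 1) pair for every outgoing link on every page."""
    for page_url, out_links in pages:
        for link in out_links:
            yield urlparse(link).netloc, 1

def reduce_phase(pairs):
    """Reduce: sum the counts per key, as Hadoop does after shuffle/sort."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# Hypothetical crawler output: (page URL, list of extracted links).
pages = [
    ("http://a.example/1", ["http://b.example/x", "http://c.example/y"]),
    ("http://a.example/2", ["http://b.example/z"]),
]
link_counts = reduce_phase(map_phase(pages))
```

The parallelism comes from partitioning: many mappers each process a shard of crawled pages independently, and the framework routes all pairs with the same key to one reducer.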
Novel organosilicone materials and patterning techniques for nanoimprint lithography
NASA Astrophysics Data System (ADS)
Pina, Carlos Alberto
Nanoimprint Lithography (NIL) is a high-throughput patterning technique that allows the fabrication of nanostructures with great precision. It has been listed on the International Technology Roadmap for Semiconductors (ITRS) as a candidate technology for future-generation Si chip manufacturing. In nanoimprint lithography a resist material, e.g. a thermoplastic polymer, is placed in contact with a mold and then mechanically deformed under an applied load to transfer the nano-features on the mold surface into the resist. The success of NIL relies heavily on the capability of fabricating nanostructures on different types of materials. Thus, a key factor for NIL implementation in industrial settings is the development of advanced materials suitable as the nanoimprint resist. This dissertation focuses on the engineering of new polymer materials suitable as NIL resists. A variety of silicone-based polymer precursors were synthesized and formulated for NIL applications. High-throughput and high-yield nanopatterning was successfully achieved. Furthermore, additional capabilities of the developed materials were explored for a range of NIL applications, such as their use as flexible, UV-transparent stamps and silicon-compatible etching layers. Finally, new strategies were investigated to expand the potential of NIL. High-throughput, non-residual-layer imprinting was achieved with the newly developed resist materials. In addition, several strategies were designed for the precise control of the size of patterned nanoscale structures with multifunctional resist systems by post-imprinting modification of the pattern size. These developments provide NIL with a new set of tools for a variety of additional important applications.
Wu, Liang; Chen, Pu; Dong, Yingsong; Feng, Xiaojun; Liu, Bi-Feng
2013-06-01
Encapsulation of single cells is a challenging task in droplet microfluidics due to the random compartmentalization of cells dictated by Poisson statistics. In this paper, a microfluidic device was developed to improve the single-cell encapsulation rate by integrating droplet generation with fluorescence-activated droplet sorting. After cells were loaded into aqueous droplets by hydrodynamic focusing, an in-flight fluorescence-activated sorting process was conducted to isolate droplets containing one cell. Encapsulation of fluorescent polystyrene beads was investigated to evaluate the developed method. A single-bead encapsulation rate of more than 98 % was achieved under the optimized conditions. Application to encapsulate single HeLa cells was further demonstrated with a single-cell encapsulation rate of 94.1 %, which is about 200 % higher than those obtained by random compartmentalization. We expect this new method to provide a useful platform for encapsulating single cells, facilitating the development of high-throughput cell-based assays.
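The Poisson-statistics ceiling that the authors work around is easy to verify numerically: under purely random loading with mean lam cells per droplet, the fraction of droplets holding exactly one cell can never exceed e^-1, about 36.8%, which is why a 94-98% single-cell rate requires active sorting rather than dilution alone.

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k cells in a droplet) when loading is Poisson with mean lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# The single-cell probability lam * e^-lam is maximized at lam = 1,
# where it equals e^-1 ~= 0.368: the best achievable rate without sorting.
best_random = poisson_pmf(1, 1.0)
```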
Modulated Raman Spectroscopy for Enhanced Cancer Diagnosis at the Cellular Level
De Luca, Anna Chiara; Dholakia, Kishan; Mazilu, Michael
2015-01-01
Raman spectroscopy is emerging as a promising and novel biophotonics tool for non-invasive, real-time diagnosis of tissue and cell abnormalities. However, the presence of a strong fluorescence background is a key issue that can detract from the use of Raman spectroscopy in routine clinical care. The review summarizes the state-of-the-art methods to remove the fluorescence background and explores recent achievements to address this issue obtained with modulated Raman spectroscopy. This innovative approach can be used to extract the Raman spectral component from the fluorescence background and improve the quality of the Raman signal. We describe the potential of modulated Raman spectroscopy as a rapid, inexpensive and accurate clinical tool to detect the presence of bladder cancer cells. Finally, in a broader context, we show how this approach can greatly enhance the sensitivity of integrated Raman spectroscopy and microfluidic systems, opening new prospects for portable higher throughput Raman cell sorting. PMID:26110401
Increased fracture depth range in controlled spalling of (100)-oriented germanium via electroplating
Crouse, Dustin; Simon, John; Schulte, Kevin L.; ...
2018-01-31
Controlled spalling in (100)-oriented germanium using a nickel stressor layer shows promise for semiconductor device exfoliation and kerfless wafering. Demonstrated spall depths of 7-60 μm using DC sputtering to deposit the stressor layer are appropriate for the latter application, but spall depths < 5 μm may be required to minimize waste for device applications. This work investigates the effect of tuning both electroplating current density and electrolyte chemistry on the residual stress in the nickel and on the achievable spall depth range for the Ni/Ge system as a lower-cost, higher-throughput alternative to sputtering. By tuning current density and electrolyte phosphorus concentration, it is shown that electroplating can successfully span the same range of spalled thicknesses as has previously been demonstrated by sputtering and can reach sufficiently high stresses to enter a regime of thickness (<7 μm) appropriate to minimize substrate consumption for device applications.
Method and apparatus for digitally based high speed x-ray spectrometer
Warburton, W.K.; Hubbard, B.
1997-11-04
A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer. 19 figs.
Method and apparatus for digitally based high speed x-ray spectrometer
Warburton, William K.; Hubbard, Bradley
1997-01-01
A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer.
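The shape-and-bin pipeline this spectrometer describes can be sketched with a classic trapezoidal shaper followed by a histogram. The filter lengths and bin ranges below are illustrative, and a real implementation would use recursive running sums in combinatorial logic rather than Python loops; this is a behavioral sketch, not the patented method.

```python
def trapezoidal_filter(samples, rise, gap):
    """Digital trapezoidal shaper: the difference of two moving sums
    separated by a flat-top gap. Peaks in the output estimate the
    amplitudes of step-like x-ray pulses in the digitized stream."""
    out = []
    for i in range(len(samples)):
        a = sum(samples[max(0, i - rise + 1): i + 1])        # leading sum
        j = i - rise - gap
        b = sum(samples[max(0, j - rise + 1): max(0, j + 1)])  # trailing sum
        out.append(a - b)
    return out

def bin_spectrum(amplitudes, n_bins, lo, hi):
    """Histogram the filtered amplitude estimates into an energy spectrum."""
    spectrum = [0] * n_bins
    width = (hi - lo) / n_bins
    for amp in amplitudes:
        if lo <= amp < hi:
            spectrum[int((amp - lo) / width)] += 1
    return spectrum
```

A unit step of height 1 produces a trapezoid whose flat top equals the rise length times the step height, so dividing the peak by `rise` recovers the pulse amplitude.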
Particle migration and sorting in microbubble streaming flows
Thameem, Raqeeb; Hilgenfeldt, Sascha
2016-01-01
Ultrasonic driving of semicylindrical microbubbles generates strong streaming flows that are robust over a wide range of driving frequencies. We show that in microchannels, these streaming flow patterns can be combined with Poiseuille flows to achieve two distinctive, highly tunable methods for size-sensitive sorting and trapping of particles much smaller than the bubble itself. This method allows higher throughput than typical passive sorting techniques, since it does not require the inclusion of device features on the order of the particle size. We propose a simple mechanism, based on channel and flow geometry, which reliably describes and predicts the sorting behavior observed in experiment. It is also shown that an asymptotic theory that incorporates the device geometry and superimposed channel flow accurately models key flow features such as peak speeds and particle trajectories, provided it is appropriately modified to account for 3D effects caused by the axial confinement of the bubble. PMID:26958103
Adaptive Precoded MIMO for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Nabilla, A. F.; Tiong, T. C.
2015-04-01
Long-Term Evolution (LTE) and Long-Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates in high spectrum bandwidth and high spectral efficiencies. Technically, precoding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-plus-noise ratio (SINR) across the multiple receiver terminals. However, fixed precoding is not reliable enough to fully utilize the information transfer rate as channel conditions and bandwidth vary. Thus, adaptive precoding is proposed. It applies precoding matrix indicator (PMI) channel-state feedback, making it possible to change the precoding codebook accordingly and thus achieve a higher data rate than fixed precoding.
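PMI selection can be sketched as a capacity search over a codebook. The 2x2 channel and three-entry codebook below are toy values for illustration, not the actual LTE codebook, and the capacity metric is the standard Shannon expression rather than anything specific to the paper.

```python
import numpy as np

def select_pmi(H, codebook, snr):
    """Return the index of the precoder W maximizing Shannon capacity
    log2 det(I + (snr/ns) H W W^H H^H) -- the PMI the receiver feeds back."""
    best_idx, best_cap = 0, float('-inf')
    for idx, W in enumerate(codebook):
        ns = W.shape[1]  # number of spatial streams carried by this precoder
        M = np.eye(H.shape[0]) + (snr / ns) * H @ W @ W.conj().T @ H.conj().T
        cap = float(np.log2(np.linalg.det(M).real))
        if cap > best_cap:
            best_idx, best_cap = idx, cap
    return best_idx, best_cap

# Toy 2x2 channel and a three-entry rank-1 codebook (illustrative values).
H = np.array([[1.0, 0.2], [0.1, 0.8]])
codebook = [np.array([[1.0], [0.0]]),
            np.array([[0.0], [1.0]]),
            np.array([[1.0], [1.0]]) / np.sqrt(2)]
pmi, cap = select_pmi(H, codebook, snr=10.0)
```

Re-running the search as `H` changes is exactly the adaptivity argued for above: the fed-back PMI tracks the channel instead of committing to one fixed precoder.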
Analog to digital workflow improvement: a quantitative study.
Wideman, Catherine; Gallet, Jacqueline
2006-01-01
This study tracked a radiology department's conversion from a Kodak Amber analog system to a Kodak DirectView DR 5100 digital system. Through the use of ProModel Optimization Suite, a workflow simulation software package, significant quantitative information was derived from workflow process data measured before and after the change to a digital system. Once the digital room was fully operational and the radiology staff comfortable with the new system, average patient examination time was reduced from 9.24 to 5.28 min, indicating that a higher patient throughput could be achieved. Compared to the analog system, chest examination time for modality-specific activities was reduced by 43%. The percentage of repeat examinations with the digital system also decreased, to 8% vs. the 9.5% experienced with the analog system. The study indicated that it is possible to quantitatively study clinical workflow and productivity by using commercially available software.
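The throughput gain implied by the reported examination times can be checked with simple arithmetic; only the 9.24 and 5.28 minute figures come from the study, the single-room idealization is an assumption.

```python
def patients_per_hour(exam_minutes):
    """Ideal single-room throughput given an average examination time."""
    return 60.0 / exam_minutes

analog = patients_per_hour(9.24)   # roughly 6.5 patients/h
digital = patients_per_hour(5.28)  # roughly 11.4 patients/h
gain = digital / analog - 1.0      # fractional throughput increase, ~0.75
```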
Performance Optimization of Priority Assisted CSMA/CA Mechanism of 802.15.6 under Saturation Regime
Shakir, Mustafa; Rehman, Obaid Ur; Rahim, Mudassir; Alrajeh, Nabil; Khan, Zahoor Ali; Khan, Mahmood Ashraf; Niaz, Iftikhar Azim; Javaid, Nadeem
2016-01-01
Due to recent developments in the field of Wireless Sensor Networks (WSNs), Wireless Body Area Networks (WBANs) have become a major area of interest for developers and researchers. The human body exhibits postural mobility, due to which distance variation occurs and the status of connections amongst sensors changes from time to time. One of the major requirements of WBAN is to prolong the network lifetime without compromising on other performance measures, i.e., delay, throughput and bandwidth efficiency. Node prioritization is one of the possible solutions to obtain optimum performance in WBAN. The IEEE 802.15.6 CSMA/CA standard splits the nodes into different user priorities based on Contention Window (CW) size. A smaller CW size is assigned to higher priority nodes. This standard helps to reduce delay; however, it is not energy efficient. In this paper, we propose a hybrid node prioritization scheme based on IEEE 802.15.6 CSMA/CA to reduce energy consumption and maximize network lifetime. In this scheme, optimum performance is achieved by node prioritization based on CW size as well as power in the respective user priority. Our proposed scheme reduces the average backoff time for channel access due to CW-based prioritization. Additionally, power-based prioritization for a respective user priority helps to minimize the required number of retransmissions. Furthermore, we also compare our scheme with the IEEE 802.15.6 CSMA/CA standard (CW-assisted node prioritization) and power-assisted node prioritization under postural mobility in WBAN. Mathematical expressions are derived to determine an accurate analytical model for throughput, delay, bandwidth efficiency, energy consumption and lifetime for each node prioritization scheme. For validation of the analytical model, we have performed simulations in the OMNeT++/MiXiM framework.
Analytical and simulation results show that our proposed hybrid node prioritization scheme outperforms other node prioritization schemes in terms of average network delay, average throughput, average bandwidth efficiency and network lifetime. PMID:27598167
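The effect of CW-based prioritization can be shown in miniature: nodes draw a random backoff counter from their contention window, so a smaller window yields earlier channel access on average. The CW bounds and trial count below are illustrative values, not those of the 802.15.6 standard.

```python
import random

def mean_backoff_slots(cw_max, trials=200000, rng=None):
    """Average random backoff when counters are drawn uniformly from
    [1, cw_max]; a smaller contention window means earlier channel access."""
    rng = rng or random.Random(0)
    return sum(rng.randint(1, cw_max) for _ in range(trials)) / trials

high_priority = mean_backoff_slots(8)   # e.g. emergency medical traffic
low_priority = mean_backoff_slots(64)   # e.g. background telemetry
```

The expected backoff is (1 + cw_max) / 2 slots, so the high-priority class here waits about 4.5 slots on average against 32.5 for the low-priority class, which is the delay advantage the standard trades against energy efficiency.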
NASA Astrophysics Data System (ADS)
Kruse, Daniele Francesco
2014-06-01
Physics data stored on CERN tapes is quickly reaching the 100 PB milestone. Tape is an ever-changing technology that is still following Moore's law in terms of capacity. This means we can store more and more data every year in the same number of tapes. However, this doesn't come for free: the first obvious cost is the new higher-capacity media. The second, less known cost is related to moving the data from the old tapes to the new ones. This activity is what we call repack. Repack is vital for any large tape user: without it, one would have to buy more tape libraries and more floor space and, eventually, data on old unsupported tapes would become unreadable and be lost forever. In this paper we describe the challenge of repacking 115 PB before LHC data taking starts in the beginning of 2015. This process will have to run concurrently with the existing experiment tape activities, and therefore needs to be as transparent as possible for users. Making sure that this works out seamlessly implies careful planning of the resources and the various policies for sharing them fairly and conveniently. To tackle this problem we need to fully exploit the speed and throughput of our modern tape drives. This involves proper dimensioning and configuration of the disk arrays and all the links between them and the tape servers, i.e., the machines responsible for managing the tape drives. It is also equally important to provide tools to improve the efficiency with which we use our tape libraries. The new repack setup we deployed has on average increased tape drive throughput by 80%, allowing them to perform closer to their design specifications. This improvement in turn means a 48% decrease in the number of drives needed to achieve the required throughput to complete the full repack on time.
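The resource planning described reduces to simple capacity arithmetic. Only the 115 PB figure comes from the text; the drive speed, efficiency factor, and time window below are illustrative assumptions.

```python
import math

def drives_needed(total_bytes, drive_mb_per_s, efficiency, days):
    """Tape drives required to move a given volume within a time window;
    efficiency < 1 accounts for mounts, seeks and sharing with user traffic."""
    per_drive_bytes = drive_mb_per_s * 1e6 * efficiency * days * 86400
    return math.ceil(total_bytes / per_drive_bytes)

# 115 PB in roughly nine months with 250 MB/s drives at 80% efficiency
# (the speed and efficiency are assumed values, not figures from the paper).
n = drives_needed(115e15, 250, 0.8, 270)  # -> 25 drives
```

The sensitivity to the efficiency factor is the point of the paper's 80% throughput improvement: raising effective drive throughput directly shrinks the drive count, which matches the reported 48% reduction.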
Jones, Jaime R; Neff, Linda J; Ely, Elizabeth K; Parker, Andrew M
2012-12-01
The Cities Readiness Initiative is a federally funded program designed to assist 72 metropolitan statistical areas (MSAs) in preparing to dispense life-saving medical countermeasures within 48 hours of a public health emergency. Beginning in 2008, the 72 MSAs were required to conduct 3 drills related to the distribution and dispensing of emergency medical countermeasures. The report describes the results of the first year of pilot data for medical countermeasure drills conducted by the MSAs. The MSAs were provided templates with key metrics for 5 functional elements critical for a successful dispensing campaign: personnel call down, site activation, facility setup, pick-list generation, and dispensing throughput. Drill submissions were compiled into single data sets for each of the 5 drills. Analyses were conducted to determine whether the measures were comparable across business and non-business hours. Descriptive statistics were computed for each of the key metrics identified in the 5 drills. Most drills were conducted on Mondays and Wednesdays during business hours (8:00 am-5:00 pm). The median completion time for the personnel call-down drill was 1 hour during business hours (n = 287) and 55 minutes during non-business hours (n = 136). Site-activation drills were completed in a median of 30 minutes during business hours and 5 minutes during non-business hours. Facility setup drills were completed more rapidly during business hours (75 minutes) compared with non-business hours (96 minutes). During business hours, pick lists were generated in a median of 3 minutes compared with 5 minutes during non-business hours. Aggregate results from the dispensing throughput drills demonstrated that the median observed throughput during business hours (60 people/h) was higher than that during non-business hours (43 people/h). 
The results of the analyses from this pilot sample of drill submissions provide a baseline for the determination of a national standard in operational capabilities for local jurisdictions to achieve in their planning efforts for a mass dispensing campaign during an emergency.
Towards practical time-of-flight secondary ion mass spectrometry lignocellulolytic enzyme assays
2013-01-01
Background Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) is a surface sensitive mass spectrometry technique with potential strengths as a method for detecting enzymatic activity on solid materials. In particular, ToF-SIMS has been applied to detect the enzymatic degradation of woody lignocellulose. Proof-of-principle experiments previously demonstrated the detection of both lignin-degrading and cellulose-degrading enzymes on solvent-extracted hardwood and softwood. However, these preliminary experiments suffered from low sample throughput and were restricted to samples which had been solvent-extracted in order to minimize the potential for mass interferences between low molecular weight extractive compounds and polymeric lignocellulose components. Results The present work introduces a new, higher-throughput method for processing powdered wood samples for ToF-SIMS, meanwhile exploring likely sources of sample contamination. Multivariate analysis (MVA) including Principal Component Analysis (PCA) and Multivariate Curve Resolution (MCR) was regularly used to check for sample contamination as well as to detect extractives and enzyme activity. New data also demonstrates successful ToF-SIMS analysis of unextracted samples, placing an emphasis on identifying the low-mass secondary ion peaks related to extractives, revealing how extractives change previously established peak ratios used to describe enzyme activity, and elucidating peak intensity patterns for better detection of cellulase activity in the presence of extractives. The sensitivity of ToF-SIMS to a range of cellulase doses is also shown, along with preliminary experiments augmenting the cellulase cocktail with other proteins. Conclusions These new procedures increase the throughput of sample preparation for ToF-SIMS analysis of lignocellulose and expand the applications of the method to include unextracted lignocellulose. 
These are important steps towards the practical use of ToF-SIMS as a tool to screen for changes in plant composition, whether the transformation of the lignocellulose is achieved through enzyme application, plant mutagenesis, or other treatments. PMID:24034438
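A minimal sketch of the multivariate step described above, using plain NumPy PCA on a peak-intensity matrix; the peak intensities, group means, and sample counts are invented for illustration and are not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# rows = samples, columns = normalized secondary-ion peak intensities
# (hypothetical peaks; "treated" samples shift intensity from peak 1 to peak 2)
control = rng.normal(loc=[0.30, 0.20, 0.50], scale=0.02, size=(5, 3))
treated = rng.normal(loc=[0.20, 0.30, 0.50], scale=0.02, size=(5, 3))
X = np.vstack([control, treated])

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T  # sample coordinates on the principal components

# the first principal component separates the two groups
pc1 = scores[:, 0]
print(pc1[:5].mean() * pc1[5:].mean() < 0)  # opposite signs for the groups
```

In practice the matrix would hold many more peaks and samples, but the separation logic (center, decompose, inspect scores) is the same.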
Paul, Albert Jesuran; Bickel, Fabian; Röhm, Martina; Hospach, Lisa; Halder, Bettina; Rettich, Nina; Handrick, René; Herold, Eva Maria; Kiefer, Hans; Hesse, Friedemann
2017-07-01
Aggregation of therapeutic proteins is a major concern as aggregates lower the yield and can impact the efficacy of the drug as well as the patient's safety. It can occur in all production stages; thus, it is essential to perform a detailed analysis for protein aggregates. Several methods such as size exclusion high-performance liquid chromatography (SE-HPLC), light scattering, turbidity, light obscuration, and microscopy-based approaches are used to analyze aggregates. None of these methods allows determination of all types of higher molecular weight (HMW) species due to a limited size range. Furthermore, quantification and specification of different HMW species are often not possible. Moreover, automation is a prospective challenge as automated robotic laboratory systems come into use. Hence, there is a need for a fast, high-throughput-compatible method that can detect a broad size range and enable quantification and classification. We describe a novel approach for the detection of aggregates in the size range 1 to 1000 μm combining fluorescent dyes for protein aggregate labelling and automated fluorescence microscope imaging (aFMI). After appropriate selection of the dye and method optimization, our method enabled us to detect various types of HMW species of monoclonal antibodies (mAbs). Using 10 μmol L-1 4,4'-dianilino-1,1'-binaphthyl-5,5'-disulfonate (Bis-ANS) in combination with aFMI allowed the analysis of mAb aggregates induced by different stresses occurring during downstream processing, storage, and administration. Validation of our results was performed by SE-HPLC, UV-Vis spectroscopy, and dynamic light scattering. With this new approach, we could not only reliably detect different HMW species but also quantify and classify them in an automated approach. Our method achieves high-throughput requirements and the selection of various fluorescent dyes enables a broad range of applications.
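As a toy illustration of the quantification-and-classification idea (not the authors' aFMI software), the following sketch bins detected aggregate diameters into size classes spanning the reported 1-1000 μm range; the diameters are invented.

```python
import numpy as np

# invented aggregate diameters from automated image analysis, in micrometers
diameters_um = np.array([1.2, 3.8, 7.5, 12.0, 30.0, 150.0, 480.0, 950.0])

# size classes spanning the reported 1-1000 um detection range
edges = [1.0, 10.0, 100.0, 1000.0]
labels = ["1-10 um", "10-100 um", "100-1000 um"]

counts = {lab: 0 for lab in labels}
for d in diameters_um:
    idx = int(np.searchsorted(edges, d, side="right")) - 1
    idx = min(idx, len(labels) - 1)  # keep d == 1000 um in the top class
    counts[labels[idx]] += 1

print(counts)  # {'1-10 um': 3, '10-100 um': 2, '100-1000 um': 3}
```

A real pipeline would derive the diameters from segmented fluorescence images, but the per-class counting that yields the size distribution is this simple.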
Monolithic amorphous silicon modules on continuous polymer substrate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimmer, D.P.
This report examines manufacturing monolithic amorphous silicon modules on a continuous polymer substrate. Module production costs can be reduced by increasing module performance, expanding production, and improving and modifying production processes. Material costs can be reduced by developing processes that use a 1-mil polyimide substrate and multilayers of low-cost material for the front encapsulant. Research to speed up a-Si and ZnO deposition rates is needed to improve throughputs. To keep throughput rates compatible with depositions, multibeam fiber optic delivery systems for laser scribing can be used. However, mechanical scribing systems promise even higher throughputs. Tandem cells and production experience can increase device efficiency and stability. Two alternative manufacturing processes are described: (1) wet etching and sheet handling and (2) wet etching and roll-to-roll fabrication.
Xia, Juan; Zhou, Junyu; Zhang, Ronggui; Jiang, Dechen; Jiang, Depeng
2018-06-04
In this communication, a gold-coated polydimethylsiloxane (PDMS) chip with cell-sized microwells was prepared through a stamping and spraying process and applied directly for high-throughput electrochemiluminescence (ECL) analysis of intracellular glucose at single cells. As compared with the previous multiple-step fabrication of photoresist-based microwells on the electrode, the preparation process is simple and offers a fresh electrode surface for higher luminescence intensity. Higher luminescence intensity was recorded from cell-retained microwells than from the planar regions among the microwells, and this intensity was correlated with the content of intracellular glucose. The successful monitoring of intracellular glucose at single cells using this PDMS chip will provide an alternative strategy for high-throughput single-cell analysis.
NASA Astrophysics Data System (ADS)
Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo
2016-03-01
In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (Aluminium, Copper, Stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find out the optimal architecture for ultrafast and precision laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at up to 20 MHz pulse repetition frequencies. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate is as high as 27.8 mm3/min for Aluminium, 21.4 mm3/min for Copper, 15.3 mm3/min for Stainless steel and 129.1 mm3/min for Al2O3 when the full available laser power is irradiated at the optimum pulse repetition frequency.
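The removal-rate figure of merit quoted above is simply ablated volume per machining time; a minimal sketch with assumed numbers (the cavity volume and time below are invented to reproduce the Aluminium-like figure, not measurements from the paper):

```python
def removal_rate_mm3_per_min(cavity_volume_mm3: float, machining_time_s: float) -> float:
    """Volume of material removed per minute of laser machining."""
    return cavity_volume_mm3 / (machining_time_s / 60.0)

# e.g. a 4.63 mm^3 cavity ablated in 10 s gives ~27.8 mm^3/min
rate = removal_rate_mm3_per_min(4.63, 10.0)
print(round(rate, 1))  # 27.8
```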
Normal-incidence EXtreme-Ultraviolet imaging Spectrometer - NEXUS
NASA Astrophysics Data System (ADS)
Dere, K. P.
2003-05-01
NEXUS is the result of a breakthrough optical design that incorporates new technologies to achieve high optical throughput at high spatial (1 arcsec) and spectral (1-2 km s-1) resolution over a wide field of view in an optimal extreme-ultraviolet spectral band. This achievement was made possible primarily by two technical developments. First, a coating of boron-carbide deposited onto a layer of iridium provided a greatly enhanced reflectivity at EUV wavelengths that would enable NEXUS to observe the Sun over a wide temperature range at high cadence. The reflectivity of these coatings has been measured and demonstrated in the laboratory. The second key development was the use of a variable-line-spaced toroidal grating spectrometer. The spectrometer design allowed the Sun to be imaged at high spatial and spectral resolution along a 1 solar-radius-long slit and over a wavelength range from 450 to 800 Å, nearly an entire spectral order. Because the spectrograph provided a magnification of about a factor of 6, only 2 optical elements are required to achieve the desired imaging performance. Throughput was enhanced by the use of only 2 reflections. These could all be accommodated within a total instrument length of 1.5 m. We would like to acknowledge support from ONR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.; DePoy, D. L.; Marshall, J. L.
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%.
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
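A hedged sketch of the underlying idea: a synthetic broadband magnitude is an SED-weighted integral over the system throughput, so a change in throughput shape shifts red and blue sources by different amounts, which is the systematic chromatic error. The power-law SEDs and linear throughput tilt below are toy inputs, not DES bandpasses.

```python
import numpy as np

wl = np.linspace(400.0, 500.0, 1001)  # wavelength grid (nm)

def trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(((y[1:] + y[:-1]) * (x[1:] - x[:-1])).sum() / 2.0)

def synth_mag(sed, throughput):
    """Photon-counting synthetic magnitude, arbitrary zeropoint."""
    return -2.5 * np.log10(trapz(sed * throughput * wl, wl))

blue_sed = (wl / 450.0) ** -2.0  # toy blue power-law source
red_sed = (wl / 450.0) ** 2.0    # toy red power-law source

flat = np.ones_like(wl)
tilted = 1.0 + 0.2 * (wl - 450.0) / 50.0  # throughput tilted by +/-20 %

# the magnitude shift from the throughput change depends on source color
d_blue = synth_mag(blue_sed, tilted) - synth_mag(blue_sed, flat)
d_red = synth_mag(red_sed, tilted) - synth_mag(red_sed, flat)
print(round(d_red - d_blue, 3))  # nonzero: a systematic chromatic error
```

A flat-field or zeropoint correction removes the common part of the shift; the color-dependent residual is what the auxiliary transmission measurements are needed for.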
Lehotay, Steven J; Han, Lijun; Sapozhnikova, Yelena
2016-01-01
This study demonstrated the application of an automated high-throughput mini-cartridge solid-phase extraction (mini-SPE) cleanup for the rapid low-pressure gas chromatography-tandem mass spectrometry (LPGC-MS/MS) analysis of pesticides and environmental contaminants in QuEChERS extracts of foods. Cleanup efficiencies and breakthrough volumes using different mini-SPE sorbents were compared using avocado, salmon, pork loin, and kale as representative matrices. Optimum extract load volume was 300 µL for the 45 mg mini-cartridges containing 20/12/12/1 (w/w/w/w) anh. MgSO4/PSA (primary secondary amine)/C18/CarbonX sorbents used in the final method. In method validation to demonstrate high-throughput capabilities and performance results, 230 spiked extracts of 10 different foods (apple, kiwi, carrot, kale, orange, black olive, wheat grain, dried basil, pork, and salmon) underwent automated mini-SPE cleanup and analysis over the course of 5 days. In all, 325 analyses for 54 pesticides and 43 environmental contaminants (3 analyzed together) were conducted using the 10 min LPGC-MS/MS method without changing the liner or retuning the instrument. An injection of merely 1 mg sample equivalent achieved <5 ng g-1 limits of quantification. With the use of internal standards, method validation results showed that 91 of the 94 analytes, including pairs, achieved satisfactory results (70-120 % recovery and RSD ≤ 25 %) in the 10 tested food matrices (n = 160). Matrix effects were typically less than ±20 %, mainly due to the use of analyte protectants, and minimal human review of software data processing was needed due to the summation function integration of analyte peaks. This study demonstrated that the automated mini-SPE + LPGC-MS/MS method yielded accurate results in rugged, high-throughput operations with minimal labor and data review.
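The acceptance criteria used in the validation (70-120 % mean recovery, RSD ≤ 25 %) can be expressed as a small screening function; the recovery values below are invented examples, not data from the study.

```python
def passes_validation(recoveries_pct):
    """True if mean recovery is 70-120 % and RSD (sample sd / mean) <= 25 %."""
    n = len(recoveries_pct)
    mean = sum(recoveries_pct) / n
    var = sum((r - mean) ** 2 for r in recoveries_pct) / (n - 1)
    rsd = 100.0 * var ** 0.5 / mean
    return 70.0 <= mean <= 120.0 and rsd <= 25.0

print(passes_validation([95, 102, 88, 110]))  # within both criteria -> True
print(passes_validation([45, 60, 50, 55]))    # mean recovery too low -> False
```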
Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...
2016-06-01
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%.
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
Ko, Jina; Yelleswarapu, Venkata; Singh, Anup; Shah, Nishal
2016-01-01
Microfluidic devices can sort immunomagnetically labeled cells with sensitivity and specificity much greater than that of conventional methods, primarily because the size of microfluidic channels and micro-scale magnets can be matched to that of individual cells. However, these small feature sizes come at the expense of limited throughput (ϕ < 5 mL h−1) and susceptibility to clogging, which have hindered current microfluidic technology from processing relevant volumes of clinical samples, e.g. V > 10 mL whole blood. Here, we report a new approach to micromagnetic sorting that can achieve highly specific cell separation in unprocessed complex samples at a throughput (ϕ > 100 mL h−1) 100× greater than that of conventional microfluidics. To achieve this goal, we have devised a new approach to micromagnetic sorting, the magnetic nickel iron electroformed trap (MagNET), which enables high flow rates by having millions of micromagnetic traps operate in parallel. Our design rotates the conventional microfluidic approach by 90° to form magnetic traps at the edges of pores instead of in channels, enabling millions of the magnetic traps to be incorporated into a centimeter sized device. Unlike previous work, where magnetic structures were defined using conventional microfabrication, we take inspiration from soft lithography and create a master from which many replica electroformed magnetic micropore devices can be economically manufactured. These free-standing 12 µm thick permalloy (Ni80Fe20) films contain micropores of arbitrary shape and position, allowing the device to be tailored for maximal capture efficiency and throughput. We demonstrate MagNET's capabilities by fabricating devices with both circular and rectangular pores and use these devices to rapidly (ϕ = 180 mL h−1) and specifically sort rare tumor cells from white blood cells. PMID:27170379
Recent Advances in Bioprinting and Applications for Biosensing
Dias, Andrew D.; Kingsley, David M.; Corr, David T.
2014-01-01
Future biosensing applications will require high performance, including real-time monitoring of physiological events, incorporation of biosensors into feedback-based devices, detection of toxins, and advanced diagnostics. Such functionality will necessitate biosensors with increased sensitivity, specificity, and throughput, as well as the ability to simultaneously detect multiple analytes. While these demands have yet to be fully realized, recent advances in biofabrication may allow sensors to achieve the high spatial sensitivity required, and bring us closer to achieving devices with these capabilities. To this end, we review recent advances in biofabrication techniques that may enable cutting-edge biosensors. In particular, we focus on bioprinting techniques (e.g., microcontact printing, inkjet printing, and laser direct-write) that may prove pivotal to biosensor fabrication and scaling. Recent biosensors have employed these fabrication techniques with success, and further development may enable higher performance, including multiplexing multiple analytes or cell types within a single biosensor. We also review recent advances in 3D bioprinting, and explore their potential to create biosensors with live cells encapsulated in 3D microenvironments. Such advances in biofabrication will expand biosensor utility and availability, with impact realized in many interdisciplinary fields, as well as in the clinic. PMID:25587413
Listen to the Urgent Sound of Drums: Major Challenges in African Higher Education
ERIC Educational Resources Information Center
Visser, Herman
2008-01-01
African higher education is currently facing tremendous challenges. The pressure and demand for access is huge. This is understandable against the background of traditionally low participation, low success and throughput rates, declining financial contributions from governments and donors, and critical pressures for efficiency, modernization,…
Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco
2016-02-09
Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher throughput devices for small scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliably measuring large sets of samples within short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye based assay was established using a liquid handling robot to provide reproducible high throughput quantification of lipids with minimized hands-on-time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96 well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids improving precision from ±8 to ±2 % on average. Implementation into an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on-time to a third compared to manual operation. Moreover, it was shown that automation enhances accuracy and precision compared to manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes of the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure for biomass concentration so that errors from morphological changes can be excluded.
The newly established assay proved to be applicable for absolute quantification of algal lipids avoiding limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability, as well as experimental throughput simultaneously minimizing the needed hands-on-time to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher throughput phototrophic cultivation and thereby contributes to boosting the time efficiency for setting up algae lipid production processes.
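A minimal sketch of the gravimetric-calibration step: fit fluorescence readings against extractively determined lipid contents, then invert the fit for new wells. All numbers below are hypothetical, not the study's calibration data.

```python
import numpy as np

# calibration set: Nile red fluorescence (a.u.) vs lipid by extraction (g L^-1)
fluorescence = np.array([120.0, 260.0, 410.0, 545.0, 690.0])
lipid_g_per_l = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

# least-squares linear calibration
slope, intercept = np.polyfit(fluorescence, lipid_g_per_l, 1)

def lipid_from_fluorescence(f):
    """Convert a plate-reader fluorescence value to absolute lipid content."""
    return slope * f + intercept

# convert a new well's reading to an absolute lipid concentration
est = float(lipid_from_fluorescence(400.0))
print(round(est, 2))
```

In an automated workflow this conversion would run on every well of the 96-well plate after the robot completes the staining protocol.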
Inertial-ordering-assisted droplet microfluidics for high-throughput single-cell RNA-sequencing.
Moon, Hui-Sung; Je, Kwanghwi; Min, Jae-Woong; Park, Donghyun; Han, Kyung-Yeon; Shin, Seung-Ho; Park, Woong-Yang; Yoo, Chang Eun; Kim, Shin-Hyun
2018-02-27
Single-cell RNA-seq reveals the cellular heterogeneity inherent in the population of cells, which is very important in many clinical and research applications. Recent advances in droplet microfluidics have achieved the automatic isolation, lysis, and labeling of single cells in droplet compartments without complex instrumentation. However, two challenges remain for precise and efficient expression profiling of single cells: barcoding errors caused by multiple beads entering a droplet during cell encapsulation, and insufficient throughput caused by the low bead concentration required to avoid multiple beads in a droplet. In this study, we developed a new droplet-based microfluidic platform that significantly improved the throughput while reducing barcoding errors through deterministic encapsulation of inertially ordered beads. Highly concentrated beads containing oligonucleotide barcodes were spontaneously ordered in a spiral channel by an inertial effect and were in turn encapsulated in droplets one-by-one, while cells were simultaneously encapsulated in the droplets. The deterministic encapsulation of beads resulted in a high fraction of single-bead-in-a-droplet and rare multiple-beads-in-a-droplet even when the bead concentration was increased to 1000 μl-1, which diminished barcoding errors and enabled accurate high-throughput barcoding. We successfully validated our device with single-cell RNA-seq. In addition, we found that multiple-beads-in-a-droplet, generated using a normal Drop-Seq device with a high concentration of beads, underestimated transcript numbers and overestimated cell numbers. This accurate high-throughput platform can expand the capability and practicality of Drop-Seq in single-cell analysis.
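The throughput/error trade-off that motivates deterministic bead loading can be illustrated with a toy Poisson model of random encapsulation (a standard approximation, not the authors' device model): raising the mean bead occupancy raises throughput but also the multi-bead fraction that causes barcoding errors.

```python
import math

def droplet_fractions(mean_beads_per_droplet):
    """Poisson fractions of empty, single-bead, and multi-bead droplets."""
    lam = mean_beads_per_droplet
    p0 = math.exp(-lam)            # empty droplets
    p1 = lam * math.exp(-lam)      # exactly one bead
    return p0, p1, 1.0 - p0 - p1   # multi-bead remainder

low = droplet_fractions(0.1)   # dilute beads: few errors, low throughput
high = droplet_fractions(1.0)  # concentrated beads: many multi-bead droplets

# multi-bead fraction: ~0.5 % at lambda = 0.1 vs ~26 % at lambda = 1.0
print(round(low[2], 4), round(high[2], 4))
```

Inertial ordering breaks this Poisson limit by spacing beads deterministically, so high bead concentration no longer implies a high multi-bead fraction.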
Chatterjee, Anirban; Mirer, Paul L; Zaldivar Santamaria, Elvira; Klapperich, Catherine; Sharon, Andre; Sauer-Budge, Alexis F
2010-06-01
The life science and healthcare communities have been redefining the importance of ribonucleic acid (RNA) through the study of small molecule RNA (in RNAi/siRNA technologies), micro RNA (in cancer research and stem cell research), and mRNA (gene expression analysis for biologic drug targets). Research in this field increasingly requires efficient and high-throughput isolation techniques for RNA. Currently, several commercial kits are available for isolating RNA from cells. Although the quality and quantity of RNA yielded from these kits is sufficiently good for many purposes, limitations exist in terms of extraction efficiency from small cell populations and the ability to automate the extraction process. Traditionally, automating a process decreases the cost and personnel time while simultaneously increasing the throughput and reproducibility. As the RNA field matures, new methods for automating its extraction, especially from low cell numbers and in high throughput, are needed to achieve these improvements. The technology presented in this article is a step toward this goal. The method is based on a solid-phase extraction technology using a porous polymer monolith (PPM). A novel cell lysis approach and a larger binding surface throughout the PPM extraction column ensure a high yield from small starting samples, increasing sensitivity and reducing indirect costs in cell culture and sample storage. The method ensures a fast and simple procedure for RNA isolation from eukaryotic cells, with a high yield both in terms of quality and quantity. The technique is amenable to automation and streamlined workflow integration, with possible miniaturization of the sample handling process making it suitable for high-throughput applications.
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
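As a simplified, pure-Python illustration of the alignment kernel's job (not the CUDA implementation): finding exact matches of many short queries against one reference is independent per query, which is the parallelism MUMmerGPU exploits on the GPU. The sequences below are invented.

```python
reference = "ACGTACGTTAGCCGATACGT"
reads = ["ACGT", "TAGC", "GGGG", "GATA"]

def exact_hits(ref, query):
    """All 0-based start positions where query matches ref exactly."""
    hits, start = [], ref.find(query)
    while start != -1:
        hits.append(start)
        start = ref.find(query, start + 1)
    return hits

# each query is independent, so this loop is trivially parallelizable
results = {r: exact_hits(reference, r) for r in reads}
print(results)  # {'ACGT': [0, 4, 16], 'TAGC': [8], 'GGGG': [], 'GATA': [13]}
```

MUMmerGPU replaces the linear scan with suffix-tree traversal so each lookup is proportional to query length rather than reference length, but the per-query independence is the same.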
Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C
2016-01-01
Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative for biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved turnover of calorimetry experiments in a high-throughput format where 25 or more runs can be conducted per day. The cost and effort to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher throughput of data, which has been leveraged into grant support, has attracted new faculty hires, and has led to some exciting publications.
Ion channel drug discovery and research: the automated Nano-Patch-Clamp technology.
Brueggemann, A; George, M; Klau, M; Beckler, M; Steindl, J; Behrends, J C; Fertig, N
2004-01-01
Unlike the genomics revolution, which was largely enabled by a single technological advance (high-throughput sequencing), rapid advancement in proteomics will require a broader effort to increase the throughput of a number of key tools for functional analysis of different types of proteins. In the case of ion channels, a class of membrane proteins of great physiological importance and potential as drug targets, the lack of adequate assay technologies is felt particularly strongly. The available, indirect, high-throughput screening methods for ion channels clearly generate insufficient information. The best technology to study ion channel function and screen for compound interaction is the patch clamp technique, but patch clamping suffers from low throughput, which is not acceptable for drug screening. A first step towards a solution is presented here. The nano patch clamp technology, which is based on a planar, microstructured glass chip, enables automatic whole-cell patch clamp measurements. The Port-a-Patch is an automated electrophysiology workstation, which uses planar patch clamp chips. This approach enables high-quality and high-content ion channel and compound evaluation on a one-cell-at-a-time basis. The presented automation of the patch process and its scalability to an array format are the prerequisites for any higher throughput electrophysiology instruments.
USDA-ARS?s Scientific Manuscript database
We conducted genomic sequencing to identify viruses associated with mosaic disease of an apple tree using the high-throughput sequencing (HTS) Illumina RNA-seq platform. The objective was to examine if rapid identification and characterization of viruses could be effectively achieved by RNA-seq anal...
Overcoming HERG affinity in the discovery of the CCR5 antagonist maraviroc.
Price, David A; Armour, Duncan; de Groot, Marcel; Leishman, Derek; Napier, Carolyn; Perros, Manos; Stammen, Blanda L; Wood, Anthony
2006-09-01
The discovery of maraviroc 17 is described with particular reference to the generation of high selectivity over affinity for the HERG potassium channel. This was achieved through the use of a high throughput binding assay for the HERG channel that is known to show an excellent correlation with functional effects.
40 CFR Table 3 to Subpart Eeee of... - Operating Limits-High Throughput Transfer Racks
Code of Federal Regulations, 2011 CFR
2011-07-01
... compliance with the emission limit. 5. An adsorption system with adsorbent regeneration to comply with an.... Maintain the total regeneration stream mass flow during the adsorption bed regeneration cycle greater than..., achieve and maintain the temperature of the adsorption bed after regeneration less than or equal to the...
Holographic Compact Disk Read-Only Memories
NASA Technical Reports Server (NTRS)
Liu, Tsuen-Hsi
1996-01-01
Compact disk read-only memories (CD-ROMs) of the proposed type store digital data in volume holograms instead of in surface differentially reflective elements. A holographic CD-ROM consists largely of parts similar to those used in conventional CD-ROMs, but achieves 10 or more times the data-storage capacity and throughput through a wavelength-multiplexing/volume-hologram scheme.
Laissez-Faire : Fully Asymmetric Backscatter Communication
Hu, Pan; Zhang, Pengyu; Ganesan, Deepak
2016-01-01
Backscatter provides dual-benefits of energy harvesting and low-power communication, making it attractive to a broad class of wireless sensors. But the design of a protocol that enables extremely power-efficient radios for harvesting-based sensors as well as high-rate data transfer for data-rich sensors presents a conundrum. In this paper, we present a new fully asymmetric backscatter communication protocol where nodes blindly transmit data as and when they sense. This model enables fully flexible node designs, from extraordinarily power-efficient backscatter radios that consume barely a few micro-watts to high-throughput radios that can stream at hundreds of Kbps while consuming a paltry tens of micro-watts. The challenge, however, lies in decoding concurrent streams at the reader, which we achieve using a novel combination of time-domain separation of interleaved signal edges, and phase-domain separation of colliding transmissions. We provide an implementation of our protocol, LF-Backscatter, and show that it can achieve an order of magnitude or more improvement in throughput, latency and power over state-of-art alternatives. PMID:28286885
Qiu, Guanglei; Zhang, Sui; Srinivasa Raghavan, Divya Shankari; Das, Subhabrata; Ting, Yen-Peng
2016-11-01
This work uncovers an important feature of the forward osmosis membrane bioreactor (FOMBR) process: the decoupling of contaminants retention time (CRT) and hydraulic retention time (HRT). Based on this concept, the capability of the hybrid microfiltration-forward osmosis membrane bioreactor (MF-FOMBR) in achieving high-throughput treatment of municipal wastewater with enhanced phosphorus recovery was explored. High removal of TOC and NH4(+)-N (90% and 99%, respectively) was achieved with HRTs down to 47 min, with the treatment capacity increased by an order of magnitude. Reduced HRT did not affect phosphorus removal and recovery; as a result, the phosphorus recovery capacity was also increased by the same order. Reduced HRT resulted in increased system loading rates and thus elevated concentrations of mixed liquor suspended solids and increased membrane fouling. 454-pyrosequencing suggested the thriving of Bacteroidetes and Proteobacteria (especially Sphingobacteriales, Flavobacteriales and Thiothrix members), as well as the community succession and dynamics of ammonium oxidizing and nitrite oxidizing bacteria. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kalejs, J. P.
1994-06-01
This report describes the impact of the technical achievements made in the first 18 months of the three year PVMaT program at Mobil Solar on lowering the manufacturing costs of its photovoltaic polycrystalline silicon-based modules. Manufacturing cost decreases are being achieved through a reduction of silicon material utilization, increases in productivity and yield in crystal growth, and through improvements in the laser cutting process for EFG wafers. The yield, productivity, and throughput advances made possible by these technical achievements are shown to be able to enhance future market share growth for Mobil Solar products as a consequence of significant reductions in a number of direct manufacturing cost elements in EFG wafer and module production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D.; Edwards, T.
High-level waste (HLW) throughput (i.e., the amount of waste processed per unit of time) is primarily a function of two critical parameters: waste loading (WL) and melt rate. For the Defense Waste Processing Facility (DWPF), increasing HLW throughput would significantly reduce the overall mission life cycle costs for the Department of Energy (DOE). Significant increases in waste throughput have been achieved at DWPF since initial radioactive operations began in 1996. Key technical and operational initiatives that supported increased waste throughput included improvements in facility attainment, the Chemical Processing Cell (CPC) flowsheet, process control models and frit formulations. As a result of these key initiatives, DWPF increased WLs from a nominal 28% for Sludge Batch 2 (SB2) to approximately 34 to 38% for SB3 through SB6 while maintaining or slightly improving canister fill times. Although considerable improvements in waste throughput have been obtained, future contractual waste loading targets are nominally 40%, while canister production rates are also expected to increase (to a rate of 325 to 400 canisters per year). Although the implementation of bubblers has made a positive impact on increasing melt rate for recent sludge batches targeting WLs in the mid-30s, higher WLs will ultimately make the feeds to DWPF more challenging to process. Savannah River Remediation (SRR) recently requested that the Savannah River National Laboratory (SRNL) perform a paper study assessment using future sludge projections to evaluate whether the current Process Composition Control System (PCCS) algorithms would provide projected operating windows that allow future contractual WL targets to be met. More specifically, the objective of this study was to evaluate future sludge batch projections (based on Revision 16 of the HLW Systems Plan) with respect to projected operating windows using current PCCS models and associated constraints.
Based on the assessments, the waste loading interval over which a glass system (i.e., a projected sludge composition with a candidate frit) is predicted to be acceptable can be defined (i.e., the projected operating window), which provides insight into the ability to meet future contractual WL obligations. In this study, future contractual WL obligations are assumed to be 40%, which is the goal after all flowsheet enhancements have been implemented to support DWPF operations. For a system to be considered acceptable, candidate frits must be identified that provide access to at least 40% WL while accounting for potential variation in the sludge resulting from differences in batch-to-batch transfers into the Sludge Receipt and Adjustment Tank (SRAT) and/or analytical uncertainties. In more general terms, this study will assess whether or not the current glass formulation strategy (based on the use of the Nominal and Variation Stage assessments) and current PCCS models will allow access to the compositional regions required to target higher WLs for future operations. Some of the key questions to be considered in this study include: (1) If higher WLs are attainable with current process control models, are the models valid in these compositional regions? If the higher WL glass regions are outside current model development or validation ranges, is there existing data that could be used to demonstrate model applicability (or lack thereof)? If not, experimental data may be required to revise current models or serve as validation data for the existing models. (2) Are there compositional trends in frit space that are required by the PCCS models to obtain access to these higher WLs? If so, are there potential issues with the compositions of the associated frits (e.g., limitations on the B2O3 and/or Li2O concentrations) as they are compared to model development/validation ranges or to what is conventionally termed a 'borosilicate' glass?
If limitations on the frit compositional range are realized, what is the impact of these restrictions on other glass properties, such as the ability to suppress nepheline formation or influence melt rate? The model-based assessments being performed assume that the process control models are applicable over the glass compositional regions being evaluated. Although the glass compositional region of interest is ultimately defined by the specific frit, sludge, and WL interval used, there is no prescreening of these compositional regions with respect to the model development or validation ranges, which is consistent with current DWPF operations.
An evaluation of MPI message rate on hybrid-core processors
Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...
2014-11-01
Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
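The matching step the authors benchmark can be pictured with a toy model: MPI's two-sided semantics require an incoming message to be matched against posted receives in FIFO order, honouring wildcard source/tag values. The sketch below is our own illustration (the class and method names are invented, not the paper's code or any real MPI implementation); it shows why match processing is serial list traversal, exactly the kind of branchy work for which simple throughput cores and complex cores perform differently.

```python
from collections import deque

ANY = -1  # stands in for MPI_ANY_SOURCE / MPI_ANY_TAG

class MatchQueue:
    """Toy model of MPI two-sided matching (illustrative, not real MPI)."""

    def __init__(self):
        self.posted = deque()      # posted receives, searched in FIFO order
        self.unexpected = deque()  # arrived messages with no matching receive

    def post_recv(self, source, tag):
        # A new receive first searches the unexpected-message queue.
        for i, (src, tg, payload) in enumerate(self.unexpected):
            if source in (ANY, src) and tag in (ANY, tg):
                del self.unexpected[i]
                return payload                 # matched immediately
        self.posted.append((source, tag))
        return None                            # receive left pending

    def arrive(self, src, tg, payload):
        # An incoming message traverses the posted-receive list serially;
        # this traversal is the match-processing cost studied in the paper.
        for i, (source, tag) in enumerate(self.posted):
            if source in (ANY, src) and tag in (ANY, tg):
                del self.posted[i]
                return payload                 # delivered to that receive
        self.unexpected.append((src, tg, payload))
        return None
```

When either queue grows long, this traversal dominates message rate, which is why per-core single-thread performance matters so much for MPI matching.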
High-throughput ultraviolet photoacoustic microscopy with multifocal excitation
NASA Astrophysics Data System (ADS)
Imai, Toru; Shi, Junhui; Wong, Terence T. W.; Li, Lei; Zhu, Liren; Wang, Lihong V.
2018-03-01
Ultraviolet photoacoustic microscopy (UV-PAM) is a promising intraoperative tool for surgical margin assessment (SMA), one that can provide label-free histology-like images with high resolution. In this study, using a microlens array and a one-dimensional (1-D) array ultrasonic transducer, we developed a high-throughput multifocal UV-PAM (MF-UV-PAM). Our new system achieved a 1.6 ± 0.2 μm lateral resolution and produced images 40 times faster than the previously developed point-by-point scanning UV-PAM. MF-UV-PAM provided a readily comprehensible photoacoustic image of a mouse brain slice with specific absorption contrast in ~16 min, highlighting cell nuclei. Individual cell nuclei could be clearly resolved, showing its practical potential for intraoperative SMA.
A bioinformatics roadmap for the human vaccines project.
Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C
2017-06-01
Biomedical research has become a data intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high throughput, high complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.
A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications
NASA Astrophysics Data System (ADS)
Entezari-Maleki, Reza; Movaghar, Ali
Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. To maximize throughput, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment within the grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its ability to achieve schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA and Sufferage.
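A minimal genetic assignment of independent tasks to resources can be sketched as follows. The encoding (one gene per task naming its resource), truncation selection, one-point crossover and all parameter values are illustrative assumptions on our part, not the operators or settings of the paper's algorithm:

```python
import random

def makespan(assign, task_len, speed):
    """Finish time of the slowest resource under a given assignment."""
    load = [0.0] * len(speed)
    for t, r in enumerate(assign):
        load[r] += task_len[t] / speed[r]
    return max(load)

def ga_schedule(task_len, speed, pop=40, gens=200, pm=0.1, seed=0):
    """Toy GA: chromosome = resource index per task; fitness = makespan."""
    rng = random.Random(seed)
    n, m = len(task_len), len(speed)
    popu = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda a: makespan(a, task_len, speed))
        elite = popu[: pop // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < pm:                 # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        popu = elite + children
    best = min(popu, key=lambda a: makespan(a, task_len, speed))
    return best, makespan(best, task_len, speed)
```

For two equal-speed resources, the best possible makespan is the total work divided by two when the task lengths split evenly, which gives a quick sanity check on the search.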
2013-01-01
Following recent trends in environmental microbiology, food microbiology has benefited from the advances in molecular biology and adopted novel strategies to detect, identify, and monitor microbes in food. An in-depth study of the microbial diversity in food can now be achieved by using high-throughput sequencing (HTS) approaches after direct nucleic acid extraction from the sample to be studied. In this review, the workflow of applying culture-independent HTS to food matrices is described. The current scenario and future perspectives of HTS uses to study food microbiota are presented, and the decision-making process leading to the best choice of working conditions to fulfill the specific needs of food research is described. PMID:23475615
Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor
2016-02-24
In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with less PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable, with a lower PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput but a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.
Temesi, David G; Martin, Scott; Smith, Robin; Jones, Christopher; Middleton, Brian
2010-06-30
Screening assays capable of performing quantitative analysis on hundreds of compounds per week are used to measure metabolic stability during early drug discovery. Modern orthogonal acceleration time-of-flight (OATOF) mass spectrometers equipped with analogue-to-digital signal capture (ADC) now offer performance levels suitable for many applications normally supported by triple quadrupole instruments operated in multiple reaction monitoring (MRM) mode. Herein the merits of MRM and OATOF with ADC detection are compared for more than 1000 compounds screened in rat and/or cryopreserved human hepatocytes over a period of 3 months. Statistical comparison of a structurally diverse subset indicated good agreement for the two detection methods. The overall success rate was higher using OATOF detection and data acquisition time was reduced by around 20%. Targeted metabolites of diazepam were detected in samples from a CLint determination performed at 1 microM. Data acquisition by positive and negative ion mode switching can be achieved on high-performance liquid chromatography (HPLC) peak widths as narrow as 0.2 min (at base), thus enabling a more comprehensive first-pass analysis with fast HPLC gradients. Unfortunately, most existing OATOF instruments lack the software tools necessary to rapidly convert the huge amounts of raw data into quantified results. Software with functionality similar to open-access triple quadrupole systems is needed for OATOF to truly compete in a high-throughput screening environment. Copyright 2010 John Wiley & Sons, Ltd.
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense; RMP refutes this. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently, which has allowed the verification to maintain high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches, and the protocol as a whole has matured more smoothly through the inclusion of several different perspectives into the product development.
Optical multicast system for data center networks.
Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren
2015-08-24
We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
The high throughput virtual slit enables compact, inexpensive Raman spectral imagers
NASA Astrophysics Data System (ADS)
Gooding, Edward; Deutsch, Erik R.; Huehnerhoff, Joseph; Hajian, Arsen R.
2018-02-01
Raman spectral imaging is increasingly becoming the tool of choice for field-based applications such as threat, narcotics and hazmat detection; air, soil and water quality monitoring; and material ID. Conventional fiber-coupled point-source Raman spectrometers effectively interrogate a small sample area and identify bulk samples via spectral library matching. However, these devices are very slow at mapping over macroscopic areas. In addition, the spatial averaging performed by instruments that collect binned spectra, particularly when used in combination with orbital raster scanning, tends to dilute the spectra of trace particles in a mixture. Our design, employing free-space line illumination combined with area imaging, reveals both the spectral and spatial content of heterogeneous mixtures. This approach is well suited to applications such as detecting trace particles of explosives and narcotics in fingerprints. The patented High Throughput Virtual Slit (HTVS) is an innovative optical design that enables compact, inexpensive handheld Raman spectral imagers. HTVS-based instruments achieve significantly higher spectral resolution than can be obtained with conventional designs of the same size. Alternatively, they can be used to build instruments with resolution comparable to large spectrometers, but substantially smaller size, weight and unit cost, all while maintaining high sensitivity. When used in combination with laser line imaging, this design eliminates sample photobleaching and unwanted photochemistry while greatly enhancing mapping speed, all with high selectivity and sensitivity. We will present spectral image data and discuss applications that are made possible by low-cost HTVS-enabled instruments.
Opportunistic data locality for end user data analysis
NASA Astrophysics Data System (ADS)
Fischer, M.; Heidecker, C.; Kuehn, E.; Quast, G.; Giffels, M.; Schnepf, M.; Heiss, A.; Petzold, A.
2017-10-01
With the increasing data volume of LHC Run 2, user analyses are evolving towards higher data throughput. This evolution translates into higher requirements for the efficiency and scalability of the underlying analysis infrastructure. We approach this issue with a new middleware to optimise data access: a layer of coordinated caches transparently provides data locality for high-throughput analyses. We demonstrated the feasibility of this approach with a prototype used for analyses of the CMS working groups at KIT. In this paper, we present our experience both with the approach in general and with our prototype in particular.
Spitzer, James D; Hupert, Nathaniel; Duckart, Jonathan; Xiong, Wei
2007-01-01
Community-based mass prophylaxis is a core public health operational competency, but staffing needs may overwhelm the local trained health workforce. Just-in-time (JIT) training of emergency staff and computer modeling of workforce requirements represent two complementary approaches to this logistical problem. Multnomah County, Oregon, conducted a high-throughput point-of-dispensing (POD) exercise to test JIT training and to validate POD staffing estimates with computer modeling. Non-health-care workers made up 84% of the POD staff, and the POD processed 500 patients per hour. Post-exercise modeling replicated observed staff utilization levels and queue formation, including the development and amelioration of a large medical-evaluation queue caused by lengthy processing times and understaffing in the first half-hour of the exercise. The exercise confirmed the feasibility of using JIT training for high-throughput antibiotic dispensing clinics staffed largely by nonmedical professionals. Patient processing times varied over the course of the exercise, with important implications for both staff reallocation and future POD modeling efforts. Overall underutilization of staff revealed the opportunity for greater efficiencies and even higher future throughputs.
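The staffing arithmetic behind such exercises can be illustrated with a first-cut Little's-law estimate. The 2-minute service time and 85% utilisation target below are hypothetical values of ours, not figures from the Multnomah County exercise, and a real POD model (like the one used post-exercise) captures queue build-up that this back-of-the-envelope rule ignores:

```python
import math

def stations_needed(arrivals_per_hour, service_min, target_util=0.85):
    """First-cut POD sizing: offered load in Erlangs (Little's law,
    L = lambda * W) divided by a target utilisation, rounded up.
    Illustrative only; ignores the queueing transients seen in the
    first half-hour of the exercise."""
    offered = arrivals_per_hour * (service_min / 60.0)  # busy stations on average
    return math.ceil(offered / target_util)

# e.g. 500 patients/hour with a hypothetical 2-minute dispensing step at
# 85% target utilisation -> stations_needed(500, 2) == 20 stations
```

The headroom term (dividing by 0.85 rather than 1.0) is what absorbs arrival bursts; sizing at 100% utilisation is exactly what produces the kind of medical-evaluation queue the exercise observed.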
Ramanathan, Ragu; Ghosal, Anima; Ramanathan, Lakshmi; Comstock, Kate; Shen, Helen; Ramanathan, Dil
2018-05-01
We evaluated HPLC-high-resolution mass spectrometry (HPLC-HRMS) full-scan acquisition with polarity switching for increasing the throughput of a human in vitro cocktail drug-drug interaction assay. Microsomal incubates were analyzed using a high-resolution, high-mass-accuracy Q-Exactive mass spectrometer to collect integrated qualitative and quantitative (qual/quant) data. Within the assay, the positive-to-negative polarity-switching HPLC-HRMS method allowed quantification of eight and two probe compounds in the positive and negative ionization modes, respectively, while monitoring for LOR and its metabolites. LOR inhibited CYP2C19 and showed higher activity toward CYP2D6, CYP2E1 and CYP3A4. Overall, LC-HRMS-based nontargeted full-scan quantitation improved the throughput of the in vitro cocktail drug-drug interaction assay.
NASA Astrophysics Data System (ADS)
Watanabe, A.; Furukawa, H.
2018-04-01
The resolution of multichannel Fourier transform (McFT) spectroscopy is insufficient for many applications despite its extreme advantage of high throughput. We propose an improved configuration that realises both high resolution and high throughput using a two-dimensional area sensor. For spectral resolution, we obtained the interferogram at a larger optical path difference by shifting the area sensor without altering any optical components. The non-linear phase error of the interferometer was successfully corrected using a phase-compensation calculation. Warping compensation was also applied to realise a higher throughput by accumulating the signal between vertical pixels. Our approach significantly improved the resolution and signal-to-noise ratio by factors of 1.7 and 34, respectively. This high-resolution and high-sensitivity McFT spectrometer will be useful for detecting weak light signals such as those in non-invasive diagnosis.
Collision Resolution Scheme with Offset for Improved Performance of Heterogeneous WLAN
NASA Astrophysics Data System (ADS)
Upadhyay, Raksha; Vyavahare, Prakash D.; Tokekar, Sanjiv
2016-03-01
CSMA/CA-based DCF of the 802.11 MAC layer employs a best-effort delivery model in which all stations compete for channel access with the same priority. Heterogeneous conditions result in unfairness among stations and degraded throughput; providing different priorities to different applications to meet quality-of-service requirements in heterogeneous networks is therefore a challenging task. This paper proposes a collision resolution scheme with a novel concept, the introduction of an offset, which is suitable for heterogeneous networks. When a station selects its random contention value with an offset, the probability of collision is reduced. An expression for the optimum value of the offset is also derived. Results show that the proposed scheme, when applied to heterogeneous networks, achieves higher throughput and fairness than the conventional scheme, and that it also exhibits higher throughput and fairness with reduced delay in homogeneous networks.
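The intuition for the offset can be checked with a toy two-station model: if one station draws its backoff uniformly from {0, …, W−1} and the other from {offset, …, offset+W−1}, the slots both can pick shrink as the offset grows, so the collision probability falls from 1/W toward zero. This is our simplified illustration of the idea, not the paper's derivation of the optimum offset:

```python
from fractions import Fraction

def collision_prob(W, offset):
    """Two contending stations: one draws its backoff uniformly from
    {0..W-1}, the other from {offset..offset+W-1}; they collide iff the
    draws coincide.  Toy model (independent uniform draws, one round)."""
    overlap = max(0, W - offset)     # number of slots both stations can choose
    return Fraction(overlap, W * W)  # exact probability as a fraction
```

Setting the offset to zero recovers the homogeneous case (probability 1/W), while an offset of W or more makes the ranges disjoint and the collision probability zero, at the price of extra access delay for the offset station.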
Average Throughput Performance of Myopic Policy in Energy Harvesting Wireless Sensor Networks.
Gul, Omer Melih; Demirekler, Mubeccel
2017-09-26
This paper considers a single-hop wireless sensor network where a fusion center collects data from M energy harvesting wireless sensors. The harvested energy is stored losslessly in an infinite-capacity battery at each sensor. In each time slot, the fusion center schedules K sensors for data transmission over K orthogonal channels. The fusion center does not have direct knowledge of the battery states of the sensors or the statistics of their energy harvesting processes; it only has information on the outcomes of previous transmission attempts. It is assumed that the sensors are data-backlogged, there is no battery leakage, and the communication is error-free. An energy harvesting sensor scheduled by the fusion center can transmit data only if it has enough energy for the transmission. We investigate the average throughput of a Round-Robin type myopic policy both analytically and numerically under an average reward (throughput) criterion. We show that the Round-Robin type myopic policy achieves optimality for some classes of energy harvesting processes, although it is suboptimal for a broad class of energy harvesting processes.
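A plain round-robin scheduler of the kind analysed can be simulated in a few lines. This sketch is our own illustration: it ignores the transmission-outcome feedback the myopic policy exploits, and the Bernoulli energy arrivals and unit energy cost per transmission are assumptions, not the paper's model in full:

```python
import random

def simulate(M, K, T, harvest_prob, seed=0):
    """Schedule K of M sensors per slot in round-robin order; a scheduled
    sensor transmits iff its (infinite, lossless) battery holds >= 1 unit.
    Returns the average number of successful transmissions per slot."""
    rng = random.Random(seed)
    battery = [0] * M
    delivered = 0
    nxt = 0                                    # next sensor in round-robin order
    for _ in range(T):
        for m in range(M):                     # Bernoulli energy arrivals
            if rng.random() < harvest_prob[m]:
                battery[m] += 1
        for _ in range(K):                     # schedule the next K sensors
            m = nxt % M
            nxt += 1
            if battery[m] >= 1:
                battery[m] -= 1
                delivered += 1
    return delivered / T
```

As a sanity check, when every sensor harvests one energy unit per slot the policy delivers exactly K packets per slot, the best any policy can do over K channels.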
Experimental Study of an Advanced Concept of Moderate-resolution Holographic Spectrographs
NASA Astrophysics Data System (ADS)
Muslimov, Eduard; Valyavin, Gennady; Fabrika, Sergei; Musaev, Faig; Galazutdinov, Gazinur; Pavlycheva, Nadezhda; Emelianov, Eduard
2018-07-01
We present the results of an experimental study of an advanced moderate-resolution spectrograph based on a cascade of narrow-band holographic gratings. The main goal of the project is to achieve a moderately high spectral resolution, with R up to 5000, simultaneously over the 4300–6800 Å visible spectral range on a single standard CCD, together with an increased throughput. The experimental study consisted of (1) resolution and image quality tests performed using the solar spectrum, and (2) a total throughput test performed for a number of wavelengths using a calibrated lab monochromator. The measured spectral resolving power reaches values of R > 4000, while the experimental throughput is as high as 55%, which agrees well with the modeling results. Comparing the obtained characteristics of the spectrograph under consideration with the best existing spectrographs, we conclude that the concept used can be considered a very competitive and inexpensive alternative to existing spectrographs of this class. We propose several astrophysical applications for the instrument and discuss the prospect of creating its full-scale version.
Embedded Hyperchaotic Generators: A Comparative Analysis
NASA Astrophysics Data System (ADS)
Sadoudi, Said; Tanougast, Camel; Azzaz, Mohamad Salah; Dandache, Abbas
In this paper, we present a comparative analysis of FPGA implementation performance, in terms of throughput and resource cost, of five well-known autonomous continuous hyperchaotic systems. The goal of this analysis is to identify the embedded hyperchaotic generator that leads to designs with small logic area cost, satisfactory throughput rates, low power consumption and the low latency required for embedded applications such as secure digital communications between embedded systems. To implement the four-dimensional (4D) chaotic systems, we use a new structural hardware architecture based on a direct VHDL description of the fourth-order Runge-Kutta method (RK-4). The comparative analysis shows that the hyperchaotic Lorenz generator provides attractive performance compared to the others. In fact, its hardware implementation requires only 2067 CLB slices, 36 multipliers and no block RAMs, and achieves a throughput rate of 101.6 Mbps at the output of the FPGA circuit, at a clock frequency of 25.315 MHz, with a low latency of 316 ns. Consequently, these implementation results make the embedded hyperchaotic Lorenz generator the best candidate for embedded communications applications.
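The RK-4 update at the core of such a hardware generator can be prototyped in software before committing it to VHDL. The sketch below is illustrative only: the 4D hyperchaotic Lorenz equations and the parameter values (a, b, c, r) follow one common form from the literature, not necessarily the exact system implemented by the authors.

```python
import numpy as np

def hyperlorenz(state, a=10.0, b=8.0 / 3.0, c=28.0, r=-1.0):
    """One common 4D hyperchaotic Lorenz extension (assumed form)."""
    x, y, z, w = state
    return np.array([
        a * (y - x) + w,    # dx/dt
        c * x - y - x * z,  # dy/dt
        x * y - b * z,      # dz/dt
        -y * z + r * w,     # dw/dt
    ])

def rk4_step(f, state, h):
    """Classical fourth-order Runge-Kutta step, mirroring an RK-4 datapath."""
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def generate(n, state=(1.0, 1.0, 1.0, 1.0), h=0.001):
    """Iterate the generator for n steps and return the trajectory."""
    s = np.array(state, dtype=float)
    out = np.empty((n, 4))
    for i in range(n):
        s = rk4_step(hyperlorenz, s, h)
        out[i] = s
    return out
```

In a pipelined hardware realization, each k_i evaluation becomes a stage of the datapath, which is what makes the throughput figures above achievable at modest clock frequencies.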
A compact imaging spectroscopic system for biomolecular detections on plasmonic chips.
Lo, Shu-Cheng; Lin, En-Hung; Wei, Pei-Kuen; Tsai, Wan-Shao
2016-10-17
In this study, we demonstrate a compact imaging spectroscopic system for high-throughput detection of biomolecular interactions on plasmonic chips, based on a curved grating as the key element for light diffraction and light focusing. Both the curved grating and the plasmonic chips are fabricated on flexible plastic substrates using a gas-assisted thermal-embossing method. A fiber-coupled broadband light source and a camera are included in the system. Spectral resolution within 1 nm is achieved in sensing environmental index solutions and protein bindings. The detected sensitivities of the plasmonic chip are comparable with those obtained using a commercial spectrometer. An extra one-dimensional scanning stage enables high-throughput detection of protein binding on a designed plasmonic chip consisting of several nanoslit arrays with different periods. The detected resonance wavelengths match well with the grating equation under an air environment. Wavelength shifts between 1 and 9 nm are detected for antigens of various concentrations binding with antibodies. A simple, mass-producible and cost-effective method has been demonstrated on the imaging spectroscopic system for real-time, label-free, highly sensitive and high-throughput screening of biomolecular interactions.
Efficient mouse genome engineering by CRISPR-EZ technology.
Modzelewski, Andrew J; Chen, Sean; Willis, Brandon J; Lloyd, K C Kent; Wood, Joshua A; He, Lin
2018-06-01
CRISPR/Cas9 technology has transformed mouse genome editing with unprecedented precision, efficiency, and ease; however, the current practice of microinjecting CRISPR reagents into pronuclear-stage embryos remains rate-limiting. We thus developed CRISPR ribonucleoprotein (RNP) electroporation of zygotes (CRISPR-EZ), an electroporation-based technology that outperforms pronuclear and cytoplasmic microinjection in efficiency, simplicity, cost, and throughput. In C57BL/6J and C57BL/6N mouse strains, CRISPR-EZ achieves 100% delivery of Cas9/single-guide RNA (sgRNA) RNPs, facilitating indel mutations (insertions or deletions), exon deletions, point mutations, and small insertions. In a side-by-side comparison in the high-throughput KnockOut Mouse Project (KOMP) pipeline, CRISPR-EZ consistently outperformed microinjection. Here, we provide an optimized protocol covering sgRNA synthesis, embryo collection, RNP electroporation, mouse generation, and genotyping strategies. Using CRISPR-EZ, a graduate-level researcher with basic embryo-manipulation skills can obtain genetically modified mice in 6 weeks. Altogether, CRISPR-EZ is a simple, economical, efficient, and high-throughput technology that is potentially applicable to other mammalian species.
Yun, Kyungwon; Lee, Hyunjae; Bang, Hyunwoo; Jeon, Noo Li
2016-02-21
This study proposes a novel way to achieve high-throughput image acquisition based on a computer-recognizable micro-pattern implemented on a microfluidic device. We integrated the QR code, a two-dimensional barcode system, onto the microfluidic device to simplify imaging of multiple ROIs (regions of interest). A standard QR code pattern was modified to arrays of cylindrical structures of polydimethylsiloxane (PDMS). Utilizing the recognition of the micro-pattern, the proposed system enables: (1) device identification, which allows referencing additional information about the device, such as device imaging sequences or the ROIs, and (2) composing a coordinate system for an arbitrarily located microfluidic device with respect to the stage. Based on these functionalities, the proposed method performs one-step high-throughput imaging for data acquisition in microfluidic devices without further manual exploration and locating of the desired ROIs. In our experience, the proposed method significantly reduced acquisition preparation time. We expect the method to substantially improve data acquisition and analysis for prototype devices.
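Composing a coordinate system for an arbitrarily placed device amounts to estimating the affine map from device coordinates to stage coordinates using the recognized fiducial points of the micro-pattern. A minimal numpy sketch (the function names and the matched-fiducial setup are illustrative, not from the paper):

```python
import numpy as np

def fit_affine(device_pts, stage_pts):
    """Least-squares 2D affine map: stage = A @ device + t.
    device_pts, stage_pts: (N, 2) arrays of matched fiducial coordinates."""
    device_pts = np.asarray(device_pts, dtype=float)
    stage_pts = np.asarray(stage_pts, dtype=float)
    # Design matrix [x, y, 1] per fiducial; solve for a 3x2 parameter matrix.
    G = np.hstack([device_pts, np.ones((len(device_pts), 1))])
    M, *_ = np.linalg.lstsq(G, stage_pts, rcond=None)
    return M

def device_to_stage(M, pts):
    """Map device-frame ROI coordinates into stage coordinates."""
    pts = np.asarray(pts, dtype=float)
    G = np.hstack([pts, np.ones((len(pts), 1))])
    return G @ M
```

Once M is fitted, every ROI stored under the decoded device ID can be converted to stage coordinates and visited automatically, which is what removes the manual locating step.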
Amsden, Jason J; Herr, Philip J; Landry, David M W; Kim, William; Vyas, Raul; Parker, Charles B; Kirley, Matthew P; Keil, Adam D; Gilchrist, Kristin H; Radauscher, Erich J; Hall, Stephen D; Carlson, James B; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T; Russell, Zachary E; Grego, Sonia; Edwards, Steven J; Sperline, Roger P; Denton, M Bonner; Stoner, Brian R; Gehm, Michael E; Glass, Jeffrey T
2018-02-01
Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.
Budavari, Tamas; Langmead, Ben; Wheelan, Sarah J.; Salzberg, Steven L.; Szalay, Alexander S.
2015-01-01
When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc’s reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license. PMID:25780763
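The prioritization idea — pool candidate genome locations from seed hits, then sort and reduce so that locations supported by the most seeds are examined first — can be sketched with numpy primitives standing in for the GPU sort and segmented reduction. This mirrors the strategy only, not Arioc's actual implementation:

```python
import numpy as np

def prioritize_locations(seed_hits, top_k):
    """seed_hits: 1D integer array of candidate genome positions, one entry
    per seed hit. A sort groups equal positions together (the GPU sort
    analogue); the reduction counts hits per position (the GPU reduction
    analogue); positions with the most seed support are returned first."""
    positions, counts = np.unique(seed_hits, return_counts=True)
    order = np.argsort(counts)[::-1]  # highest-support positions first
    return positions[order][:top_k], counts[order][:top_k]
```

Dynamic-programming alignment would then be attempted only at the returned top-k positions, which is where the throughput win over exhaustive scoring comes from.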
Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback
NASA Astrophysics Data System (ADS)
Mehlführer, Christian; Caban, Sebastian; Rupp, Markus
2007-12-01
We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard compliant AMC utilizes a 3-bit feedback for SISO and Alamouti coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly for large SNR values. Our measurements show that at small SNR values, a single antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by an overly simple convolutional code.
A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard
NASA Astrophysics Data System (ADS)
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
2005-07-01
The Discrete Wavelet Transform (DWT) is increasingly adopted in image and video compression standards, most notably JPEG2000. The lifting scheme algorithm is an alternative DWT implementation with lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements for each channel. The architecture was implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirements make this architecture a proper choice for real-time applications such as Digital Cinema.
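The reversible 5/3 filter bank named above reduces to two integer lifting steps per sample pair: a predict step on the odd samples and an update step on the even samples. A single-level, 1-D software sketch (even-length signals, whole-sample symmetric edge extension) can serve as a reference model for such a hardware datapath:

```python
import numpy as np

def dwt53_forward(x):
    """One level of the JPEG2000 reversible 5/3 DWT (1-D, integer lifting)."""
    x = np.asarray(x, dtype=np.int64)
    assert len(x) % 2 == 0, "this sketch handles even-length signals only"
    even, odd = x[0::2], x[1::2]
    # Predict: d[n] = odd[n] - floor((even[n] + even[n+1]) / 2),
    # with symmetric extension of the last even sample at the right edge.
    even_right = np.append(even[1:], even[-1])
    d = odd - (even + even_right) // 2
    # Update: s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4),
    # with symmetric extension of the first detail at the left edge.
    d_prev = np.concatenate(([d[0]], d[:-1]))
    s = even + (d_prev + d + 2) // 4
    return s, d

def dwt53_inverse(s, d):
    """Exactly undo the lifting steps: perfect integer reconstruction."""
    d_prev = np.concatenate(([d[0]], d[:-1]))
    even = s - (d_prev + d + 2) // 4
    even_right = np.append(even[1:], even[-1])
    odd = d + (even + even_right) // 2
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is individually invertible, reconstruction is exact in integer arithmetic; in the hardware architecture the predict and update steps map naturally onto pipeline stages shared by both input channels.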
Average Throughput Performance of Myopic Policy in Energy Harvesting Wireless Sensor Networks
Demirekler, Mubeccel
2017-01-01
This paper considers a single-hop wireless sensor network where a fusion center collects data from M energy harvesting wireless sensors. The harvested energy is stored losslessly in an infinite-capacity battery at each sensor. In each time slot, the fusion center schedules K sensors for data transmission over K orthogonal channels. The fusion center has no direct knowledge of the battery states of the sensors, or of the statistics of their energy harvesting processes; it only has information about the outcomes of previous transmission attempts. It is assumed that the sensors are data backlogged, there is no battery leakage, and the communication is error-free. An energy harvesting sensor, when scheduled, can transmit data to the fusion center only if it has enough energy for data transmission. We investigate the average throughput of a Round-Robin-type myopic policy both analytically and numerically under an average reward (throughput) criterion. We show that the Round-Robin-type myopic policy is optimal for some classes of energy harvesting processes, although it is suboptimal for a broad class of energy harvesting processes. PMID:28954420
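The scheduling setting lends itself to a small Monte Carlo check. The sketch below uses a Bernoulli energy-arrival model and a unit-energy-per-transmission assumption; both are illustrative simplifications, not the paper's full model.

```python
import random

def simulate_round_robin(M, K, arrival_p, slots, seed=0):
    """Round-Robin myopic schedule: in each slot the next K sensors
    (cyclically) are polled; a polled sensor transmits iff its battery holds
    at least one energy unit. Energy arrives at each sensor as an independent
    Bernoulli(arrival_p) process into an infinite, lossless battery.
    Returns average throughput (successful transmissions per slot)."""
    rng = random.Random(seed)
    battery = [0] * M
    head, successes = 0, 0
    for _ in range(slots):
        # Energy harvesting phase.
        for i in range(M):
            if rng.random() < arrival_p:
                battery[i] += 1
        # Round-Robin scheduling of K sensors.
        for j in range(K):
            i = (head + j) % M
            if battery[i] >= 1:
                battery[i] -= 1
                successes += 1
        head = (head + K) % M
    return successes / slots
```

Sweeping arrival_p between the extremes shows the intuition behind the optimality result: when energy is plentiful the policy saturates at K successes per slot regardless of battery information, while in energy-scarce regimes the unobserved battery states are what leave room for suboptimality.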
Performance Evaluation of IEEE 802.11ah Networks With High-Throughput Bidirectional Traffic.
Šljivo, Amina; Kerkhove, Dwight; Tian, Le; Famaey, Jeroen; Munteanu, Adrian; Moerman, Ingrid; Hoebeke, Jeroen; De Poorter, Eli
2018-01-23
So far, existing sub-GHz wireless communication technologies focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics are fine for many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of physical- and link-layer configurable features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them. Such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP-cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP-cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets. 
The presented results enable the fine tuning of RAW and TIM parameters for throughput-demanding reliable applications (i.e., video streaming, firmware updates) on one hand, and very dense low-throughput reliable networks with bidirectional traffic on the other hand.
Demonstration of electronic design automation flow for massively parallel e-beam lithography
NASA Astrophysics Data System (ADS)
Brandt, Pieter; Belledent, Jérôme; Tranquillin, Céline; Figueiro, Thiago; Meunier, Stéfanie; Bayle, Sébastien; Fay, Aurélien; Milléquant, Matthieu; Icard, Beatrice; Wieland, Marco
2014-07-01
For proximity effect correction in 5 keV e-beam lithography, three elementary building blocks exist: dose modulation, geometry (size) modulation, and background dose addition. Combinations of these three methods are quantitatively compared in terms of throughput impact and process window (PW). In addition, overexposure in combination with negative bias results in PW enhancement at the cost of throughput. In proximity effect correction by overexposure (PEC-OE), the entire layout is set to a fixed dose and geometry sizes are adjusted. In PEC-dose-to-size (PEC-DTS), both dose and geometry sizes are locally optimized. In PEC-background (PEC-BG), a background is added to correct the long-range part of the point spread function. In single e-beam tools (Gaussian or shaped-beam), throughput heavily depends on the number of shots. In raster scan tools such as MAPPER Lithography's FLX 1200 (MATRIX platform) this is not the case: instead of pattern density, the maximum local dose on the wafer limits throughput. The smallest considered half-pitch is 28 nm, which may be considered the 14-nm node for Metal-1 and the 10-nm node for the Via-1 layer, achieved in a single exposure with e-beam lithography. For typical 28-nm-hp Metal-1 layouts, it was shown that dose latitudes (size of process window) of around 10% are realizable with available PEC methods. For 28-nm-hp Via-1 layouts this is even higher, at 14% and up. When the layouts do not reach the highest densities (up to 10:1 in this study), PEC-BG and PEC-OE provide the capability to trade throughput for dose latitude. At the highest densities, PEC-DTS is required for proximity correction, as this method adjusts both geometry edges and doses and will reduce the dose at the densest areas. For 28-nm-hp line critical dimension (CD), hole-and-dot CD, and line-end edge placement error, the data path errors are typically 0.9, 1.0 and 0.7 nm (3σ) or below, respectively.
There is no clear data-path performance difference between the investigated PEC methods. After the simulations, the methods were successfully validated in exposures on a MAPPER pre-alpha tool. 28-nm half-pitch Metal-1 and Via-1 layouts show good performance in resist, coinciding with the simulation results. Exposures of soft-edge stitched layouts show that beam-to-beam position errors up to ±7 nm, as specified for the FLX 1200, have no noticeable impact on CD. The research leading to these results has been performed in the frame of the industrial collaborative consortium IMAGINE.
Subnuclear foci quantification using high-throughput 3D image cytometry
NASA Astrophysics Data System (ADS)
Wadduwage, Dushan N.; Parrish, Marcus; Choi, Heejin; Engelward, Bevin P.; Matsudaira, Paul; So, Peter T. C.
2015-07-01
Ionising radiation causes various types of DNA damage, including double strand breaks (DSBs). DSBs are often recognized by the DNA repair kinase ATM, which phosphorylates histone H2AX to form gamma-H2AX foci at the sites of the DSBs; these foci can be visualized using immunohistochemistry. However, most such experiments have low throughput in terms of imaging and image analysis techniques. Most studies still use manual counting or classification, and hence are limited to counting a low number of foci per cell (about 5 foci per nucleus), as the quantification process is extremely labour intensive. Therefore we have developed a high-throughput instrumentation and computational pipeline specialized for gamma-H2AX foci quantification. A population of cells with highly clustered foci inside nuclei was imaged, in 3D with submicron resolution, using an in-house developed high-throughput image cytometer. Imaging speeds as high as 800 cells/second in 3D were achieved by using HiLo wide-field depth-resolved imaging and a remote z-scanning technique. The number of foci per cell nucleus was then quantified using a 3D extended maxima transform based algorithm. Our results suggest that while most other 2D imaging and manual quantification studies can count only up to about 5 foci per nucleus, our method is capable of counting more than 100. Moreover, we show that 3D analysis is significantly superior to the 2D techniques.
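The counting step can be prototyped with an extended-maxima transform built from grayscale morphological reconstruction, where each extended maximum is labeled as one focus. The numpy/scipy sketch below is illustrative (the h value, neighborhood size, and integer-volume assumption are choices made here, not the paper's exact parameters):

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask):
    """Grayscale morphological reconstruction: grow marker under mask."""
    marker = np.minimum(marker, mask)
    while True:
        grown = np.minimum(ndimage.grey_dilation(marker, size=3), mask)
        if np.array_equal(grown, marker):
            return marker
        marker = grown

def count_foci(volume, h):
    """Extended-maxima foci count for an integer 3D intensity volume.
    h suppresses maxima whose prominence is below h (noise peaks)."""
    volume = volume.astype(np.int64)
    # h-maxima transform: reconstruct (volume - h) under volume.
    hmax = reconstruct_by_dilation(volume - h, volume)
    # Regional maxima of hmax: voxels not reachable from (hmax - 1).
    regmax = (hmax - reconstruct_by_dilation(hmax - 1, hmax)) > 0
    _, n = ndimage.label(regmax)
    return n
```

Raising h merges or discards shallow peaks, which is how clustered foci are separated from noise before the connected-component count.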
Real-time distributed scheduling algorithm for supporting QoS over WDM networks
NASA Astrophysics Data System (ADS)
Kam, Anthony C.; Siu, Kai-Yeung
1998-10-01
Most existing or proposed WDM networks employ circuit switching, typically with one session having exclusive use of one entire wavelength. Consequently they are not suitable for data applications involving bursty traffic patterns. The MIT AON Consortium has developed an all-optical LAN/MAN testbed which provides time-slotted WDM service and employs fast-tunable transceivers in each optical terminal. In this paper, we explore extensions of this service to achieve fine-grained statistical multiplexing with different virtual circuits time-sharing the wavelengths in a fair manner. In particular, we develop a real-time distributed protocol for best-effort traffic over this time-slotted WDM service with near-optimal fairness and throughput characteristics. As an additional design feature, our protocol supports the allocation of guaranteed bandwidths to selected connections. This feature acts as a first step towards supporting integrated services and quality-of-service guarantees over WDM networks. To achieve high throughput, our approach is based on scheduling transmissions, as opposed to collision-based schemes. Our distributed protocol involves one MAN scheduler and several LAN schedulers (one per LAN) in a master-slave arrangement. Because of propagation delays and limits on control channel capacities, all schedulers are designed to work with partial, delayed traffic information. Our distributed protocol is of the 'greedy' type to ensure fast execution in real time in response to dynamic traffic changes. It employs a hybrid form of rate and credit control for resource allocation. We have performed extensive simulations, which show that our protocol allocates resources (transmitters, receivers, wavelengths) fairly with high throughput, and supports bandwidth guarantees.
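A greedy slot scheduler of the kind described — match queued virtual circuits to free transmitters, receivers and wavelengths, spending credits to enforce bandwidth guarantees — can be sketched as follows. The data structures and the credit rule are illustrative stand-ins, not the AON protocol itself:

```python
def schedule_slot(requests, credits, n_wavelengths):
    """Greedily assign (src, dst) requests to one time slot.
    requests: list of (src, dst) pairs with queued traffic.
    credits: dict mapping (src, dst) -> remaining guaranteed-slot credits;
    credited circuits are served first, emulating bandwidth guarantees.
    Each node has one tunable transmitter and one receiver; at most
    n_wavelengths transmissions fit in the slot."""
    # Serve credited (guaranteed) circuits before best-effort ones.
    ordered = sorted(requests, key=lambda r: -credits.get(r, 0))
    busy_tx, busy_rx, grants = set(), set(), []
    for src, dst in ordered:
        if len(grants) == n_wavelengths:
            break                        # all wavelengths taken this slot
        if src in busy_tx or dst in busy_rx:
            continue                     # transmitter or receiver already used
        busy_tx.add(src)
        busy_rx.add(dst)
        grants.append((src, dst))
        if credits.get((src, dst), 0) > 0:
            credits[(src, dst)] -= 1     # consume one guaranteed-slot credit
    return grants
```

The greedy pass is linear in the number of requests, which is what makes per-slot execution feasible under the real-time constraint; refilling the credit dict at a fixed rate per circuit is one simple way to realize the hybrid rate/credit control.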
Adapting the γ-H2AX assay for automated processing in human lymphocytes. 1. Technological aspects.
Turner, Helen C; Brenner, David J; Chen, Youhua; Bertucci, Antonella; Zhang, Jian; Wang, Hongliang; Lyulko, Oleksandra V; Xu, Yanping; Shuryak, Igor; Schaefer, Julia; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y Lawrence; Amundson, Sally A; Garty, Guy
2011-03-01
The immunofluorescence-based detection of γ-H2AX is a reliable and sensitive method for quantitatively measuring DNA double-strand breaks (DSBs) in irradiated samples. Since H2AX phosphorylation is highly linear with radiation dose, this well-established biomarker is in current use in radiation biodosimetry. At the Center for High-Throughput Minimally Invasive Radiation Biodosimetry, we have developed a fully automated high-throughput system, the RABIT (Rapid Automated Biodosimetry Tool), that can be used to measure γ-H2AX yields from fingerstick-derived samples of blood. The RABIT workstation has been designed to fully automate the γ-H2AX immunocytochemical protocol, from the isolation of human blood lymphocytes in heparin-coated PVC capillaries to the immunolabeling of γ-H2AX protein and image acquisition to determine fluorescence yield. High throughput is achieved through the use of purpose-built robotics, lymphocyte handling in 96-well filter-bottomed plates, and high-speed imaging. The goal of the present study was to optimize and validate the performance of the RABIT system for the reproducible and quantitative detection of γ-H2AX total fluorescence in lymphocytes in a multiwell format. Validation of our biodosimetry platform was achieved by the linear detection of a dose-dependent increase in γ-H2AX fluorescence in peripheral blood samples irradiated ex vivo with γ rays over the range 0 to 8 Gy. This study demonstrates for the first time the optimization and use of our robotically based biodosimetry workstation to successfully quantify γ-H2AX total fluorescence in irradiated peripheral lymphocytes.
Large area silicon sheet by EFG
NASA Technical Reports Server (NTRS)
1981-01-01
A multiple growth run with three 10 cm cartridges was carried out, with the best throughput rates and the highest percentage of time with simultaneous three-ribbon growth achieved to date in this system. Growth speeds were between 3.2 and 3.6 cm/minute on all three cartridges, and simultaneous full-width growth of three ribbons was achieved 47 percent of the time over the eight hour duration of the experiment. Improvements in instrumentation and in the main zone temperature uniformity were two factors that have led to more reproducible growth conditions in the multiple ribbon furnace.
Liu, Shaorong; Gao, Lin; Pu, Qiaosheng; Lu, Joann J; Wang, Xingjia
2006-02-01
We have recently developed a new process to create cross-linked polyacrylamide (CPA) coatings on capillary walls to suppress protein-wall interactions. Here, we demonstrate CPA-coated capillaries for high-efficiency (>2 × 10^6 plates per meter) protein separations by capillary zone electrophoresis (CZE). Because CPA virtually eliminates electroosmotic flow, positive and negative proteins cannot be analyzed in a single run. A "one-sample-two-separation" approach is developed to achieve a comprehensive protein analysis. High throughput is achieved through a multiplexed CZE system.
Code of Federal Regulations, 2013 CFR
2013-07-01
… An adsorption system with adsorbent regeneration to comply with an emission limit in table 2 to this … Maintain the total regeneration stream mass flow during the adsorption bed regeneration cycle greater than …, achieve and maintain the temperature of the adsorption bed after regeneration less than or equal to the …
Code of Federal Regulations, 2014 CFR
2014-07-01
… An adsorption system with adsorbent regeneration to comply with an emission limit in table 2 to this … Maintain the total regeneration stream mass flow during the adsorption bed regeneration cycle greater than …, achieve and maintain the temperature of the adsorption bed after regeneration less than or equal to the …
Code of Federal Regulations, 2012 CFR
2012-07-01
… An adsorption system with adsorbent regeneration to comply with an emission limit in table 2 to this … Maintain the total regeneration stream mass flow during the adsorption bed regeneration cycle greater than …, achieve and maintain the temperature of the adsorption bed after regeneration less than or equal to the …
High sensitive and throughput screening of Aflatoxin using MALDI-TOF-TOF-PSD-MS/MS
USDA-ARS?s Scientific Manuscript database
We have achieved sensitive and efficient detection of aflatoxin B1 (AFB1) through matrix-assisted laser desorption/ionization time-of-flight-time-of-flight mass spectrometry (MALDI-TOF-TOF) and post-source decay (PSD) tandem mass spectrometry (MS/MS) using an acetic acid – α-cyano-4-hydroxycinnamic a...
Optical Filter Assembly for Interplanetary Optical Communications
NASA Technical Reports Server (NTRS)
Chen, Yijiang; Hemmati, Hamid
2013-01-01
Ground-based, narrow-band, high-throughput optical filters are required for optical links from deep space. We report on the development of a tunable filter assembly that operates at the 1550-nm telecommunication window. A low insertion loss of 0.5 decibels and a bandwidth of 90 picometers over the 2000-nanometer operational range of the detectors have been achieved.
Microfluidics for effective concentration and sorting of waterborne protozoan pathogens.
Jimenez, M; Bridle, H
2016-07-01
We report on an inertial-focusing-based microfluidic technology for concentrating waterborne protozoa, achieving a 96% recovery rate for Cryptosporidium parvum and 86% for Giardia lamblia at a throughput (mL/min) capable of replacing centrifugation. The approach can easily be extended to other parasites and also bacteria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loisel, G., E-mail: gploise@sandia.gov; Lake, P.; Gard, P.
2016-11-15
At Sandia National Laboratories, the x-ray generator Manson source model 5 was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and line intensity is increased for all three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.
Prioritized retransmission in slotted all-optical packet-switched networks
NASA Astrophysics Data System (ADS)
Ghaffar Pour Rahbar, Akbar; Yang, Oliver
2006-12-01
We consider an all-optical slotted packet-switched network interconnected by a number of bufferless all-optical switches with contention-based operation. One approach to reducing the cost of the expensive contention-resolution hardware is retransmission, in which each ingress switch keeps a copy of the transmitted traffic in an electronic buffer and retransmits whenever required. The conventional retransmission technique may need a large number of retransmissions before traffic passes through the network. This in turn may trigger retransmissions at a higher layer and reduce the network throughput. In this paper, we propose and analyze a simple but effective prioritized retransmission technique in which dropped traffic is prioritized when retransmitted from ingress switches, so that the core switches can process it with a higher priority. We present an analysis of both techniques in a multifiber network architecture and verify it via simulation, demonstrating that our proposed algorithm limits the number of retransmissions significantly and improves TCP throughput more than the conventional retransmission technique.
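The prioritization idea can be sketched with a toy slot-based model. All names, the slot capacity, and the "every third packet is dropped once" contention rule below are illustrative assumptions, not the paper's model; the point is only that a dropped packet re-enters the ingress queue with top priority and is served before fresh traffic.

```python
import heapq

def serve(queue_events, capacity_per_slot, max_slots):
    """queue_events: list of (arrival_slot, packet_id). Returns service order.

    Toy contention rule (an assumption for illustration): the first
    transmission attempt of every third packet id is dropped once; the
    retransmission is enqueued with top priority (0) ahead of fresh
    traffic (priority 1).
    """
    heap = []                 # (priority, seq, packet_id); lower served first
    seq = 0
    served = []
    pending = sorted(queue_events)
    for slot in range(max_slots):
        while pending and pending[0][0] <= slot:
            _, pid = pending.pop(0)
            heapq.heappush(heap, (1, seq, pid))   # fresh traffic: priority 1
            seq += 1
        capacity = capacity_per_slot
        while heap and capacity:
            prio, s, pid = heapq.heappop(heap)
            capacity -= 1                          # the slot is consumed either way
            if prio == 1 and pid % 3 == 0:
                heapq.heappush(heap, (0, s, pid))  # retransmit with top priority
            else:
                served.append(pid)
    return served
```

With a plain FIFO retransmission queue, a dropped packet would line up behind all fresh traffic; here the priority-0 entry always sorts ahead of priority-1 entries in the heap.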
NASA Technical Reports Server (NTRS)
Prater, Tracie
2016-01-01
Selective Laser Melting (SLM) is a powder bed fusion additive manufacturing process used increasingly in the aerospace industry to reduce the cost, weight, and fabrication time for complex propulsion components. SLM stands poised to revolutionize propulsion manufacturing, but there are a number of technical questions that must be addressed in order to achieve rapid, efficient fabrication and ensure adequate performance of parts manufactured using this process in safety-critical flight applications. Previous optimization studies for SLM using the Concept Laser M1 and M2 machines at NASA Marshall Space Flight Center have centered on machine default parameters. The objective of this work is to characterize the impact of higher throughput parameters (a previously unexplored region of the manufacturing operating envelope for this application) on material consolidation. In phase I of this work, density blocks were analyzed to explore the relationship between build parameters (laser power, scan speed, hatch spacing, and layer thickness) and material consolidation (assessed in terms of as-built density and porosity). Phase II additionally considers the impact of post-processing, specifically hot isostatic pressing and heat treatment, as well as deposition pattern on material consolidation in the same higher energy parameter regime considered in the phase I work. Density and microstructure represent the "first-gate" metrics for determining the adequacy of the SLM process in this parameter range and, as a critical initial indicator of material quality, will factor into a follow-on DOE that assesses the impact of these parameters on mechanical properties. 
This work will contribute to creating a knowledge base (understanding material behavior in all ranges of the AM equipment operating envelope) that is critical to transitioning AM from the custom low rate production sphere it currently occupies to the world of mass high rate production, where parts are fabricated at a rapid rate with confidence that they will meet or exceed all stringent functional requirements for spaceflight hardware. These studies will also provide important data on the sensitivity of material consolidation to process parameters that will inform the design and development of future flight articles using SLM.
Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.
Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong
2008-04-01
The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
High-Throughput Quantitative Lipidomics Analysis of Nonesterified Fatty Acids in Plasma by LC-MS.
Christinat, Nicolas; Morin-Rivron, Delphine; Masoodi, Mojgan
2017-01-01
Nonesterified fatty acids are important biological molecules which have multiple functions such as energy storage, gene regulation, or cell signaling. Comprehensive profiling of nonesterified fatty acids in biofluids can facilitate studying and understanding their roles in biological systems. For these reasons, we have developed and validated a high-throughput, nontargeted lipidomics method coupling liquid chromatography to high-resolution mass spectrometry for quantitative analysis of nonesterified fatty acids. Sufficient chromatographic separation is achieved to separate positional isomers such as polyunsaturated and branched-chain species and quantify a wide range of nonesterified fatty acids in human plasma samples. However, this method is not limited only to these fatty acid species and offers the possibility to perform untargeted screening of additional nonesterified fatty acid species.
Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun
2015-01-14
We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) with the goal of improving throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it allows us to identify a necessary condition for SIC on the distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region for the base station into small pieces and evaluate one candidate point within each piece; the point with the largest throughput is chosen as the solution. The complexity of this algorithm is polynomial. Simulation results show that the algorithm achieves about 25% improvement over placing the base station at the center of the network coverage area when using SIC.
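The search strategy reads as a discretize-and-evaluate loop, which can be sketched as follows. The throughput objective below is an illustrative stand-in (it merely rewards candidate points whose node distances are well separated, echoing the distance condition SIC needs), not the paper's actual SIC throughput model.

```python
import math

def throughput_proxy(nodes, p):
    # Stand-in objective (assumption): count node pairs whose distances to
    # the candidate point are separated enough for successive decoding.
    d = sorted(math.hypot(nx - p[0], ny - p[1]) for nx, ny in nodes)
    return sum(1 for a, b in zip(d, d[1:]) if b >= 1.5 * a)

def place_base_station(nodes, region, step):
    """Discretize the feasible region into a grid and keep the best point."""
    (x0, y0), (x1, y1) = region
    candidates = [(x0 + i * step, y0 + j * step)
                  for i in range(int((x1 - x0) / step) + 1)
                  for j in range(int((y1 - y0) / step) + 1)]
    return max(candidates, key=lambda p: throughput_proxy(nodes, p))
```

The grid evaluation is polynomial in the number of cells and nodes, matching the complexity claim; only the objective would change when substituting the real SIC model.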
Investigation Of In-Line Monitoring Options At H Canyon/HB Line For Plutonium Oxide Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sexton, L.
2015-10-14
H Canyon and HB Line have a production goal of 1 MT per year of plutonium oxide feedstock for the MOX facility by FY17 (AFS-2 mission). In order to meet this goal, steps will need to be taken to improve processing efficiency. One concept for achieving this goal is to implement in-line process monitoring at key measurement points within the facilities. In-line monitoring during operations has the potential to increase throughput and efficiency while reducing costs associated with laboratory sample analysis. In the work reported here, we mapped the plutonium oxide process, identified key measurement points, investigated alternate technologies that could be used for in-line analysis, and initiated a throughput benefit analysis.
Hegab, Hanaa M.; ElMekawy, Ahmed; Stakenborg, Tim
2013-01-01
Microbial fermentation process development pursues high production yields. This requires high-throughput screening and optimization of microbial strains, which is nowadays commonly achieved by slow and labor-intensive submerged cultivation in shake flasks or microtiter plates. These methods are also limited to end-point measurements, offer low analytical data output, and give little control over the fermentation process. These drawbacks can be overcome by scaled-down microfluidic microbioreactors (μBR) that allow online control of cultivation data and automation, hence reducing cost and time. This review goes beyond previous work by providing a detailed update on current μBR fabrication techniques and by comparing the operation and control of μBRs with those of large-scale fermentation reactors. PMID:24404006
Wang, H; Wu, Y; Zhao, Y; Sun, W; Ding, L; Guo, B; Chen, B
2012-08-01
Desorption corona beam ionisation (DCBI), a relatively novel ambient mass spectrometry (MS) technique, was utilised to screen for illicit additives in weight-loss food. The five most commonly abused chemicals (fenfluramine, N-di-desmethyl sibutramine, N-mono-desmethyl sibutramine, sibutramine and phenolphthalein) were detected with the proposed DCBI-MS method. Fast single-sample and high-throughput analysis was demonstrated. Semi-quantification was accomplished based on peak areas in the ion chromatograms. Four illicit additives were identified and semi-quantified in commercial samples. Because no tedious sample pre-treatment is needed, in contrast with conventional HPLC methods, high-throughput analysis was achieved with DCBI. The results prove that DCBI-MS is a powerful tool for the rapid screening of illicit additives in weight-loss dietary supplements.
Multispot single-molecule FRET: High-throughput analysis of freely diffusing molecules
Panzeri, Francesco
2017-01-01
We describe an 8-spot confocal setup for high-throughput smFRET assays and illustrate its performance with two characteristic experiments. First, measurements on a series of freely diffusing doubly-labeled dsDNA samples allow us to demonstrate that data acquired in multiple spots in parallel can be properly corrected and result in measured sample characteristics consistent with those obtained with a standard single-spot setup. We then take advantage of the higher throughput provided by parallel acquisition to address an outstanding question about the kinetics of the initial steps of bacterial RNA transcription. Our real-time kinetic analysis of promoter escape by bacterial RNA polymerase confirms results obtained by a more indirect route, shedding additional light on the initial steps of transcription. Finally, we discuss the advantages of our multispot setup, while pointing out the limitations of the current single-laser excitation design, as well as analysis challenges and their solutions. PMID:28419142
Information-based management mode based on value network analysis for livestock enterprises
NASA Astrophysics Data System (ADS)
Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng
2018-01-01
With the development of computer and information technologies, enterprise management has gradually become information-based. Moreover, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management, and low levels of efficiency result in increasing production costs. This paper adopts 'struts2' to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification system built on a dynamic grouping ALOHA algorithm for multiple-tag anti-collision. The algorithm extends the existing ALOHA algorithm with improved dynamic grouping of tags and is characterised by a high throughput rate: it can reach a throughput 42% higher than that of the general ALOHA algorithm, and the system throughput remains relatively stable as the number of tags changes.
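The throughput behaviour of framed slotted ALOHA tag reading, which dynamic grouping builds on, can be illustrated with a short simulation. This is a simplified stand-in for the paper's algorithm: the dynamic-grouping step is approximated by resizing the frame to the number of remaining tags each round (the classical optimum for framed ALOHA), and all numbers are illustrative.

```python
import random

def read_all_tags(n_tags, rng):
    """Simulate framed slotted ALOHA until every tag is read.

    Each round, unread tags pick a uniform random slot within the frame;
    slots chosen by exactly one tag are successful reads. Returns reads
    per slot (system throughput).
    """
    remaining, slots_used, successes = n_tags, 0, 0
    while remaining:
        frame = max(1, remaining)          # frame size ~ tag estimate
        choices = [rng.randrange(frame) for _ in range(remaining)]
        singles = sum(1 for s in set(choices) if choices.count(s) == 1)
        remaining -= singles
        slots_used += frame
        successes += singles
    return successes / slots_used

rng = random.Random(1)
tp = read_all_tags(50, rng)
```

With frame size matched to the tag count, per-round efficiency approaches (1 - 1/n)^(n-1) ≈ 1/e ≈ 0.37, which is the baseline a dynamic-grouping variant tries to stabilise and improve on.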
Shot-noise limited throughput of soft x-ray ptychography for nanometrology applications
NASA Astrophysics Data System (ADS)
Koek, Wouter; Florijn, Bastiaan; Bäumer, Stefan; Kruidhof, Rik; Sadeghian, Hamed
2018-03-01
Due to its potential for high resolution and three-dimensional imaging, soft x-ray ptychography has received interest for nanometrology applications. We have analyzed the measurement time per unit area when using soft x-ray ptychography for various nanometrology applications including mask inspection and wafer inspection, and are thus able to predict (order of magnitude) throughput figures. Here we show that for a typical measurement system, using a typical sampling strategy, and when aiming for 10-15 nm resolution, it is expected that a wafer-based topology (2.5D) measurement takes approximately 4 minutes per μm², and a full three-dimensional measurement takes roughly 6 hours per μm². Due to their much higher reflectivity, EUV masks can be measured considerably faster; a measurement speed of 0.1 seconds per μm² is expected. However, such speeds do not allow for full wafer or mask inspection at industrially relevant throughput.
WFC3 TV3 Testing: IR Channel Blue Leaks
NASA Astrophysics Data System (ADS)
Brown, Thomas R.
2008-03-01
A new IR detector (IR4; FPA165) is housed in WFC3 during the current campaign of thermal vacuum (TV) ground testing at GSFC. As part of these tests, we measured the IR channel throughput. Compared to the previous IR detectors, IR4 has much higher quantum efficiency at all wavelengths, particularly in the optical. The total throughput for the IR channel is still low in the optical, due to the opacity of the IR filters at these wavelengths, but there is a small wavelength region (~710-830 nm) where these filters do not offer as much blocking as needed to meet Contract End Item specifications. For this reason, the throughput measurements were extended into the blue to quantify the amount of blue leak in the narrow and medium IR bandpasses where a few percent of the measured flux could come from optical photons when observing hot sources. The results are tabulated here.
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
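The core idea of overlap-based merging can be sketched in a few lines. This is a naive illustration of the general technique, not BBMerge's actual algorithm (which uses quality-aware scoring and k-mer heuristics); the mismatch threshold and minimum overlap are assumed parameters.

```python
def revcomp(seq):
    """Reverse-complement a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def merge_pair(r1, r2, min_overlap=4, max_mism=0):
    """Merge a read pair by overlap, or return None if no confident overlap.

    Read 2 is reverse-complemented, then overlap lengths are tried from
    longest to shortest; the first overlap with at most max_mism
    mismatches is accepted.
    """
    r2rc = revcomp(r2)
    for olen in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
        mism = sum(a != b for a, b in zip(r1[-olen:], r2rc[:olen]))
        if mism <= max_mism:
            return r1 + r2rc[olen:]
    return None
```

Trying long overlaps first biases the merge toward the most parsimonious reconstruction; real tools additionally weight mismatches by base quality to avoid the spurious-overlap errors the abstract warns about.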
High speed true random number generator with a new structure of coarse-tuning PDL in FPGA
NASA Astrophysics Data System (ADS)
Fang, Hongzhen; Wang, Pengjun; Cheng, Xu; Zhou, Keji
2018-03-01
A metastability-based TRNG (true random number generator) is presented in this paper and implemented in an FPGA. The metastable state of a D flip-flop is tuned through a two-stage PDL (programmable delay line). With the proposed coarse-tuning PDL structure, the TRNG core does not require extra placement and routing to ensure its entropy. Furthermore, the core needs fewer stages of coarse-tuning PDL at higher operating frequencies, and thus saves more resources in the FPGA. The designed TRNG achieves a throughput of 25 Mbps at 100 MHz after proper post-processing, several times higher than that of previous FPGA-based TRNGs. Moreover, the robustness of the system is enhanced with the adoption of a feedback system. The quality of the designed TRNG is verified by NIST (National Institute of Standards and Technology) tests and is also accepted by class P1 of the AIS-20/31 test suite. Project supported by the S&T Plan of Zhejiang Provincial Science and Technology Department (No. 2016C31078), the National Natural Science Foundation of China (Nos. 61574041, 61474068, 61234002), and the K.C. Wong Magna Fund in Ningbo University, China.
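The abstract does not specify which post-processing is used; a classic choice for debiasing raw metastability bits, shown here purely as an illustrative assumption, is the von Neumann corrector. It trades throughput for bias removal, which is why post-processing reduces the raw bit rate.

```python
def von_neumann(bits):
    """Von Neumann corrector: consume non-overlapping bit pairs.

    01 -> emit 0, 10 -> emit 1, 00/11 -> discard. If input bits are
    independent (even if biased), the output is unbiased, at the cost
    of emitting at most one output bit per two input bits.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

For a bias p, the expected yield is p(1-p) output bits per input pair, so heavily biased sources pay a large throughput penalty; hardware TRNGs often use XOR trees or cryptographic conditioning instead when rate matters.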
Exceptionally fast water desalination at complete salt rejection by pristine graphyne monolayers.
Xue, Minmin; Qiu, Hu; Guo, Wanlin
2013-12-20
Desalination that produces clean freshwater from seawater holds the promise of solving the global water shortage for drinking, agriculture and industry. However, conventional desalination technologies such as reverse osmosis and thermal distillation involve large amounts of energy consumption, and the semipermeable membranes widely used in reverse osmosis face the challenge of providing high throughput at high salt rejection. Here we find, by comprehensive molecular dynamics simulations and first-principles modeling, that pristine graphyne, one of the graphene-like one-atom-thick carbon allotropes, can achieve 100% rejection of nearly all ions in seawater, including Na⁺, Cl⁻, Mg²⁺, K⁺ and Ca²⁺, at an exceptionally high water permeability about two orders of magnitude higher than that of commercial state-of-the-art reverse osmosis membranes at a salt rejection of ~98.5%. This complete ion rejection by graphyne, independent of the salt concentration and the operating pressure, is revealed to originate from the significantly higher energy barriers for ions than for water. This intrinsic property of graphyne offers a new possibility for efforts to alleviate the global shortage of freshwater and other environmental problems.
Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments
NASA Astrophysics Data System (ADS)
Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete
2002-04-01
New applications for ultraviolet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 × 1024, 18 μm pixel, split-frame-transfer device running at >150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies >55% at 193 nm, rising to 65% at 300 nm and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high-frame-rate CCDs and cameras will be described, and results will be presented demonstrating high UV sensitivity down to 150 nm.
ac electroosmotic pumping induced by noncontact external electrodes.
Wang, Shau-Chun; Chen, Hsiao-Ping; Chang, Hsueh-Chia
2007-09-21
Electroosmotic (EO) pumps based on dc electroosmosis are plagued by bubble generation and other electrochemical reactions at the electrodes at voltages beyond 1 V for electrolytes. These disadvantages limit their throughput and offset their portability advantage over mechanical syringe or pneumatic pumps. AC electroosmotic pumps operating at high frequency (>100 kHz) circumvent the bubble problem by inducing polarization and slip velocity on embedded electrodes, but they require complex electrode designs to produce a net flow. We report a new high-throughput ac EO pump design based on induced polarization over the entire channel surface instead of just the electrodes. Like dc EO pumps, our pump electrodes sit outside the load section and form a cm-long pump unit consisting of three circular reservoirs (3 mm in diameter) connected by a 1 × 1 mm channel. The field-induced polarization can produce an effective zeta potential exceeding 1 V and an ac slip velocity estimated at 1 mm/s or higher, both one order of magnitude higher than in earlier dc and ac pumps, giving rise to a maximum throughput of 1 μL/s. Polarization over the entire channel surface, quadratic scaling with respect to the field, and high voltage at high frequency without electrode bubble generation are the reasons why the current pump is superior to earlier dc and ac EO pumps.
Kittelmann, Jörg; Ottens, Marcel; Hubbuch, Jürgen
2015-04-15
High-throughput batch screening technologies have become an important tool in downstream process development. Although continued miniaturization saves time and sample consumption, no screening process has yet been described in the 384-well microplate format. Several processes are established in the 96-well dimension to investigate protein-adsorbent interactions, utilizing between 6.8 and 50 μL of resin per well. However, as sample consumption scales with resin volume and throughput with the number of experiments per microplate, these processes are limited in the costs and time they can save. In this work, a new method for in-well resin quantification by optical means is introduced, applicable in the 384-well format and to resin volumes as small as 0.1 μL. A HTS batch isotherm process is described that utilizes this new method in combination with optical sample volume quantification for screening of isotherm parameters in 384-well microplates. Results are qualified by confidence bounds determined by bootstrap analysis and a comprehensive Monte Carlo study of error propagation. This new approach opens the door to a variety of screening processes in the 384-well format on HTS stations, higher-quality screening data and an increase in throughput. Copyright © 2015 Elsevier B.V. All rights reserved.
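The isotherm-fitting-with-bootstrap workflow mentioned above can be sketched in miniature. A Langmuir isotherm q = qmax·c/(K + c) is a common choice for such screens, but the model, the coarse grid-search fitter, and all parameter ranges here are illustrative assumptions, not the paper's procedure.

```python
import random

def fit_langmuir(c, q):
    """Fit q = qmax*c/(K + c) by coarse grid search (illustrative only)."""
    best = None
    for qmax in [x * 0.5 for x in range(1, 41)]:     # qmax grid: 0.5 .. 20
        for K in [x * 0.1 for x in range(1, 51)]:    # K grid: 0.1 .. 5
            sse = sum((qi - qmax * ci / (K + ci)) ** 2 for ci, qi in zip(c, q))
            if best is None or sse < best[0]:
                best = (sse, qmax, K)
    return best[1], best[2]

def bootstrap_ci(c, q, n_boot, rng):
    """Residual bootstrap: refit on resampled residuals, return ~95% CI on qmax."""
    qmax, K = fit_langmuir(c, q)
    resid = [qi - qmax * ci / (K + ci) for ci, qi in zip(c, q)]
    qmaxs = []
    for _ in range(n_boot):
        q_star = [qmax * ci / (K + ci) + rng.choice(resid) for ci in c]
        qmaxs.append(fit_langmuir(c, q_star)[0])
    qmaxs.sort()
    return qmaxs[int(0.025 * n_boot)], qmaxs[int(0.975 * n_boot)]
```

The bootstrap loop is what turns a point fit into the confidence bounds the abstract refers to; in practice a proper nonlinear least-squares solver would replace the grid search.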
NASA Astrophysics Data System (ADS)
Yamada, Yusuke; Hiraki, Masahiko; Sasajima, Kumiko; Matsugaki, Naohiro; Igarashi, Noriyuki; Amano, Yasushi; Warizaya, Masaichi; Sakashita, Hitoshi; Kikuchi, Takashi; Mori, Takeharu; Toyoshima, Akio; Kishimoto, Shunji; Wakatsuki, Soichi
2010-06-01
Recent advances in high-throughput techniques for macromolecular crystallography have highlighted the importance of structure-based drug design (SBDD), and the demand for synchrotron use by pharmaceutical researchers has increased. Thus, in collaboration with Astellas Pharma Inc., we have constructed a new high-throughput macromolecular crystallography beamline, AR-NE3A, which is dedicated to SBDD. At AR-NE3A, a photon flux up to three times higher than that at the existing high-throughput beamlines at the Photon Factory, AR-NW12A and BL-5A, can be realized at the same sample positions. Installed in the experimental hutch are a high-precision diffractometer, a fast-readout, high-gain CCD detector, and a sample-exchange robot capable of handling more than two hundred cryo-cooled samples stored in a Dewar. To facilitate the high-throughput data collection required for pharmaceutical research, fully automated data collection and processing systems have been developed: sample exchange, centering, data collection and data processing are carried out automatically according to the user's pre-defined schedule. Although Astellas Pharma Inc. has priority access to AR-NE3A, the remaining beam time is allocated to general academic and other industrial users.
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates the success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, than CARQ in a harsh underwater environment. PMID:27420061
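The incremental-relaying mechanism described above (direct attempt first, relays retransmit one by one only on failure) can be illustrated with a small Monte Carlo sketch. The link success probabilities and trial counts are assumed values, and this replaces the paper's closed-form outage analysis with brute-force simulation.

```python
import random

def delivery_rate(p_direct, p_relay, n_relays, trials, rng):
    """Fraction of packets delivered under incremental relaying.

    Each packet: one direct attempt; on failure, up to n_relays relays
    retransmit one by one until the destination decodes the packet.
    """
    delivered = 0
    for _ in range(trials):
        if rng.random() < p_direct:
            delivered += 1
            continue
        for _ in range(n_relays):          # incremental retransmissions
            if rng.random() < p_relay:
                delivered += 1
                break
    return delivered / trials

rng = random.Random(7)
coop = delivery_rate(0.6, 0.7, 2, 20000, rng)     # two cooperative relays
direct = delivery_rate(0.6, 0.7, 0, 20000, rng)   # no relaying (plain ARQ try)
```

Analytically the cooperative delivery probability here is 0.6 + 0.4·(0.7 + 0.3·0.7) = 0.964 versus 0.6 for the direct link, which is the throughput gain the feedback-driven scheme exploits.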
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koshelev, Irina; Huang, Rong; Graber, Timothy
2009-09-02
The IMCA-CAT bending-magnet beamline was upgraded with a collimating mirror in order to achieve the energy resolution required to conduct high-quality multi- and single-wavelength anomalous diffraction (MAD/SAD) experiments without sacrificing beamline flux throughput. Following the upgrade, the bending-magnet beamline achieves a flux of 8 × 10¹¹ photons s⁻¹ at 1 Å wavelength, at a beamline aperture of 1.5 mrad (horizontal) × 86 μrad (vertical), with an energy resolution (limited mostly by the intrinsic resolution of the monochromator optics) of ΔE/E = 1.5 × 10⁻⁴ (at 10 keV). The beamline operates in a dynamic range of 7.5-17.5 keV and delivers to the sample a focused beam of size (FWHM) 240 μm (horizontal) × 160 μm (vertical). The performance of the 17-BM beamline optics and its deviation from ideally shaped optics is evaluated in the context of the requirements imposed by protein crystallography experiments. An assessment of flux losses is given in relation to the (geometric) properties of major beamline components.
NASA Astrophysics Data System (ADS)
Song, Z.; Wang, Y.; Kuang, J.
2018-05-01
Field Programmable Gate Arrays (FPGAs) made with 28 nm and more advanced process technologies have great potential for implementing high-precision time-to-digital converters (TDCs), because the delay cells in the tapped delay line (TDL) used for time interpolation keep getting smaller. However, the bubble problems in the TDL status word are becoming more complicated, which makes it difficult to achieve a high time precision for TDCs on these chips. In this paper, we propose a novel decomposition encoding scheme, which not only solves the bubble problem easily but also has a high encoding efficiency, allowing the potential of these chips for TDC implementation to be fully realized. In a Xilinx Kintex-7 FPGA chip, we implemented a TDC system with 256 TDC channels, double the number our previous technique could achieve. The performance of all the TDC channels was evaluated: the average RMS time precision is 10.23 ps over the time-interval measurement range of 0-10 ns, and the measurement throughput reaches 277 million measurements per second.
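To see why bubbles break naive TDL encoding, consider a snapshot of the delay line: ideally it is a thermometer code (1…10…0) whose 1→0 edge marks the fine time, but metastability can transpose bits near the edge. The paper's decomposition scheme is not spelled out in the abstract; the stand-in shown here is a well-known bubble-tolerant alternative, counting the ones instead of locating the edge.

```python
def first_zero_position(status_bits):
    """Naive encoder: index of the first 0 (the thermometer edge).

    Correct only for an ideal bubble-free snapshot.
    """
    return status_bits.index(0) if 0 in status_bits else len(status_bits)

def ones_count_position(status_bits):
    """Bubble-tolerant encoder: count the ones.

    When a bubble is a transposition of a 1/0 pair near the edge (the
    common metastability case), the ones-count is unchanged and still
    yields the correct fine-time code.
    """
    return sum(status_bits)
```

On an ideal snapshot both encoders agree; on a bubbled one only the ones-count survives, which is why population-count style encoders are popular in FPGA TDCs.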
On Maximizing the Throughput of Packet Transmission under Energy Constraints.
Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng
2018-06-23
More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services in recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy depleted and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission design rate-adaptive transmission policies to maximize the number of transmitted data bits under the energy constraints of the devices. Such solutions, however, cannot apply to scenarios where data packets have individual deadlines and only integrally transmitted data packets count. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the available energy and maximize the weighted throughput. More challenging but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an optimal algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible since the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimality properties derived for the offline solution. Simulation results validate the efficiency of the proposed algorithm.
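The combinatorial core of the offline problem, choosing which packets to transmit integrally under an energy budget, is knapsack-like, which is consistent with a pseudo-polynomial optimal algorithm for an NP-hard problem. The sketch below shows only that core; deadlines, rate adaptation, and fading from the actual formulation are deliberately omitted.

```python
def max_weighted_throughput(packets, energy_budget):
    """0/1 knapsack over packets: list of (energy_cost, value), integer costs.

    best[e] holds the maximum total value achievable with energy e;
    iterating costs downward ensures each packet is used at most once.
    Runtime is O(n * energy_budget): pseudo-polynomial in the budget.
    """
    best = [0] * (energy_budget + 1)
    for cost, value in packets:
        for e in range(energy_budget, cost - 1, -1):
            best[e] = max(best[e], best[e - cost] + value)
    return best[energy_budget]
```

The dependence on the numeric value of the budget (rather than its bit-length) is exactly what "pseudo-polynomial" means in the abstract's complexity claim.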
Hardcastle, Thomas J
2016-01-15
High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. Such methods are, in general, developed on an ad hoc basis, requiring repeated development cycles and leading to a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor: http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. Contact: tjh48@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebert, Jon Llyod
This Small Business Innovative Research (SBIR) Phase I project will demonstrate the feasibility of an innovative temperature control technology for the Metal-Organic Chemical Vapor Deposition (MOCVD) process used in the fabrication of Multi-Quantum Well (MQW) LEDs. The proposed control technology has strong potential to improve both throughput and performance quality of the manufactured LED. The color of the light emitted by an LED is a strong function of the substrate temperature during the deposition process. Hence, accurate temperature control of the MOCVD process is essential for ensuring that the LED performance matches the design specification. The Gallium Nitride (GaN) epitaxy process involves depositing multiple layers at different temperatures. Much of the recipe time is spent ramping from one process temperature to another, adding significant overhead to the production time. To increase throughput, the process temperature must transition over a range of several hundred degrees Centigrade many times with as little overshoot and undershoot as possible, in the face of several sources of process disturbance such as changing emissivities. Any throughput increase achieved by faster ramping must also satisfy the constraint of strict temperature uniformity across the carrier so that yield is not affected. SC Solutions is a leading supplier of embedded real-time temperature control technology for MOCVD systems used in LED manufacturing. SC's Multiple Input Multiple Output (MIMO) temperature controllers use physics-based models to achieve the performance demanded by our customers. However, to meet DOE's ambitious goals of cost reduction for LED products, a new generation of temperature controllers has to be developed.
SC believes that the proposed control technology will be made feasible by the confluence of mathematical formulation as a convex optimization problem, new efficient and scalable algorithms, and the increase in computational power available for real-time control.
The opportunity and challenge of spin coat based nanoimprint lithography
NASA Astrophysics Data System (ADS)
Jung, Wooyung; Cho, Jungbin; Choi, Eunhyuk; Lim, Yonghyun; Bok, Cheolkyu; Tsuji, Masatoshi; Kobayashi, Kei; Kono, Takuya; Nakasugi, Tetsuro
2017-03-01
Since multi-patterning with spacers was introduced in NAND flash memory [1], it has been a promising solution for overcoming the resolution limit. However, the increasing process cost of spacer-based multi-patterning becomes a serious burden to device manufacturers as the half pitch of patterns shrinks [2, 3]. Even though Nano Imprint Lithography (NIL) has been considered one of the strongest candidates for avoiding the cost of multi-patterning with spacers, there are still negative viewpoints: template damage induced by particles between template and wafer, overlay degradation induced by shear force between template and wafer, and throughput loss induced by dispensing and spreading resist droplets. Jet and Flash Imprint Lithography (J-FIL) [4-6] has contributed to throughput improvement, but still has these problems. J-FIL consists of five steps: dispensing resist droplets on the wafer, imprinting the template on the wafer, filling the gap between template and wafer with resist, UV curing, and separating the template from the wafer. If dispensing resist droplets by inkjet is replaced with coating resist on a spin coater, additional progress in NIL can be achieved: template damage from particles can be suppressed by the thick spin-coated resist, which covers most particles on the wafer; shear force between template and wafer can be minimized by the thick resist; and further throughput enhancement can be achieved by skipping the droplet-dispense step. On the other hand, spin-coat-based NIL has side effects such as pattern collapse, which stems from the high separation energy of the resist. It is expected that pattern collapse can be improved by developing resists with low separation energy.
3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2017-01-01
High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm². The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings. PMID:28819645
Schaufele, Fred
2013-01-01
Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biological sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns, since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening also depends upon maximizing the number of cells imaged, which is best achieved by low-magnification high-throughput microscopy. But low magnification introduces flat-field correction issues that degrade the accuracy of background correction and cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a major source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Megan; Nordmeyer, Robert A.; Cornell, Earl
2009-10-02
To facilitate a direct interface between protein separation by PAGE and protein identification by mass spectrometry, we developed a multichannel system that continuously collects fractions as protein bands migrate off the bottom of gel electrophoresis columns. The device was constructed using several short linear gel columns, each of a different percent acrylamide, to achieve a separation power similar to that of a long gradient gel. A Counter Free-Flow elution technique then allows continuous and simultaneous fraction collection from multiple channels at low cost. We demonstrate that rapid, high-resolution separation of a complex protein mixture can be achieved on this system using SDS-PAGE. In a 2.5 h electrophoresis run, for example, each sample was separated and eluted into 48-96 fractions over a mass range of 10-150 kDa; sample recovery rates were 50% or higher; each channel was loaded with up to 0.3 mg of protein in 0.4 mL; and a purified band was eluted in two to three fractions (200 µL/fraction). Similar results were obtained when running native gel electrophoresis, but protein aggregation limited the loading capacity to about 50 µg per channel and reduced resolution.
Large-scale parallel genome assembler over cloud computing environment.
Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong
2017-06-01
The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
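The de Bruijn graph at the heart of GiGA represents each read as overlapping k-mers, with (k-1)-mers as vertices. GiGA distributes this graph across Giraph workers; as an in-memory illustration of the construction step only (not GiGA's actual API), it can be sketched as:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph from reads: each k-mer contributes a
    directed edge from its (k-1)-length prefix to its (k-1)-length
    suffix. Assemblers then search this graph for paths (contigs)."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return dict(graph)
```

A single read "ACGT" with k = 3 yields the two edges AC→CG and CG→GT, whose unique traversal spells the read back out.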
High pressure inactivation of Brettanomyces bruxellensis in red wine.
van Wyk, Sanelle; Silva, Filipa V M
2017-05-01
Brettanomyces bruxellensis ("Brett") is a major spoilage concern for the wine industry worldwide, leading to undesirable sensory properties. Sulphur dioxide is currently the preferred method for wine preservation. However, due to its negative effects on consumers, the use of new alternative non-thermal technologies is increasingly being investigated. The aim of this study was to determine and model the effect of high pressure processing (HPP) conditions and yeast strain on the inactivation of "Brett" in Cabernet Sauvignon wine. Processing at 200 MPa for 3 min resulted in 5.8 log reductions; however, higher pressures are recommended to achieve high throughput in the wine industry: for example, >6.0 log reductions were achieved after 400 MPa for 5 s. The inactivation of B. bruxellensis is pressure and time dependent, with increased treatment time and pressure leading to increased yeast inactivation. It was also found that yeast strain had a significant effect on HPP inactivation, with AWRI 1499 being the most resistant strain. The Weibull model successfully described the HPP "Brett" inactivation. HPP is a viable alternative for the inactivation of B. bruxellensis in wine, with the potential to reduce the industry's reliance on sulphur dioxide. Copyright © 2016 Elsevier Ltd. All rights reserved.
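The Weibull survival model used above expresses the log reduction as a power law of treatment time. A minimal sketch, with illustrative parameter values rather than the paper's fitted ones (delta is the time for the first log reduction; p shapes the curve's concavity):

```python
def weibull_log_reduction(t, delta, p):
    """Weibull inactivation model: log10(N0/N) = (t / delta) ** p.
    p < 1 gives tailing (resistant subpopulation), p > 1 gives
    shoulders, and p = 1 reduces to classical log-linear kinetics."""
    return (t / delta) ** p
```

With p = 1 the model is first-order: 3 minutes at a condition with delta = 1 min gives a 3-log reduction, while p = 2 at t = 2 min gives 4 logs, reflecting accelerating inactivation.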
Elsworth, Brendan; Charnaud, Sarah C.; Sanders, Paul R.; Crabb, Brendan S.; Gilson, Paul R.
2014-01-01
Drug discovery is a key part of malaria control and eradication strategies, and could benefit from sensitive and affordable assays to quantify parasite growth and to help identify the targets of potential anti-malarial compounds. Bioluminescence, achieved through expression of exogenous luciferases, is a powerful tool that has been applied in studies of several aspects of parasite biology and high throughput growth assays. We have expressed the new reporter NanoLuc (Nluc) luciferase in Plasmodium falciparum and showed it is at least 100 times brighter than the commonly used firefly luciferase. Nluc brightness was explored as a means to achieve a growth assay with higher sensitivity and lower cost. In addition we attempted to develop other screening assays that may help interrogate libraries of inhibitory compounds for their mechanism of action. To this end parasites were engineered to express Nluc in the cytoplasm, the parasitophorous vacuole that surrounds the intraerythrocytic parasite or exported to the red blood cell cytosol. As proof-of-concept, these parasites were used to develop functional screening assays for quantifying the effects of Brefeldin A, an inhibitor of protein secretion, and Furosemide, an inhibitor of new permeation pathways used by parasites to acquire plasma nutrients. PMID:25392998
Starshade orbital maneuver study for WFIRST
NASA Astrophysics Data System (ADS)
Soto, Gabriel; Sinha, Amlan; Savransky, Dmitry; Delacroix, Christian; Garrett, Daniel
2017-09-01
The Wide Field Infrared Survey Telescope (WFIRST) mission, scheduled for launch in the mid-2020s, will perform exoplanet science via both direct imaging and a microlensing survey. An internal coronagraph is planned to perform starlight suppression for exoplanet imaging, but an external starshade could be used to achieve the required high contrasts with potentially higher throughput. This approach would require a separately launched occulter spacecraft to be positioned at exact distances from the telescope along the line of sight to a target star system. We present a detailed study to quantify the Δv requirements and feasibility of deploying this additional spacecraft as a means of exoplanet imaging. The primary focus of this study is the fuel use of the occulter while repositioning between targets. Based on its design, the occulter is given an offset distance from the nominal WFIRST halo orbit. Target star systems and look vectors are generated using the Exoplanet Open-Source Imaging Simulator (EXOSIMS); a boundary value problem is then solved between successive targets. On average, 50 observations are achievable with randomly selected targets given a 30-day transfer time. Individual trajectories can be optimized for transfer time as well as fuel usage to be used in mission scheduling. Minimizing transfer time reduces the total mission time by up to 4.5 times in some simulations before expending the entire fuel budget. Minimizing Δv can generate starshade missions that achieve over 100 unique observations within the designated mission lifetime of WFIRST.
Zhao, Meng-Meng; Du, Shan-Shan; Li, Qiu-Hong; Chen, Tao; Qiu, Hui; Wu, Qin; Chen, Shan-Shan; Zhou, Ying; Zhang, Yuan; Hu, Yang; Su, Yi-Liang; Shen, Li; Zhang, Fen; Weng, Dong; Li, Hui-Ping
2017-02-01
This study aims to use high throughput 16S rRNA gene sequencing to examine the bacterial profile of lymph node biopsy samples of patients with sarcoidosis and to further verify the association between Propionibacterium acnes (P. acnes) and sarcoidosis. A total of 36 mediastinal lymph node biopsy specimens were collected from 17 cases of sarcoidosis, 8 of tuberculosis (TB group), and 11 of non-infectious lung diseases (control group). The V4 region of the bacterial 16S rRNA gene in the specimens was amplified and sequenced using the high throughput sequencing platform MiSeq, and a bacterial profile was established. The data analysis software QIIME and Metastats were used to compare bacterial relative abundance in the three patient groups. Overall, 545 genera were identified; 38 showed significantly lower and 29 had significantly higher relative abundance in the sarcoidosis group than in the TB and control groups (P < 0.01). P. acnes 16S rRNA was exclusively found in all 17 samples of the sarcoidosis group, but was not detected in the TB and control groups. The relative abundance of P. acnes in the sarcoidosis group (0.16% ± 0.11%) was significantly higher than that in the TB (Metastats analysis: P = 0.0010, q = 0.0044) and control groups (Metastats analysis: P = 0.0010, q = 0.0038). The relative abundance of P. granulosum was only 0.0022% ± 0.0044% in the sarcoidosis group, and P. granulosum 16S rRNA was not detected in the other two groups. High throughput 16S rRNA gene sequencing appears to be a useful tool to investigate the bacterial profile of sarcoidosis specimens. The results suggest that P. acnes may be involved in sarcoidosis development.
Centimeter-scale MEMS scanning mirrors for high power laser application
NASA Astrophysics Data System (ADS)
Senger, F.; Hofmann, U.; v. Wantoch, T.; Mallas, C.; Janes, J.; Benecke, W.; Herwig, Patrick; Gawlitza, P.; Ortega-Delgado, M.; Grune, C.; Hannweber, J.; Wetzig, A.
2015-02-01
A higher achievable scan speed and the capability to integrate two scan axes in a very compact device are fundamental advantages of MEMS scanning mirrors over conventional galvanometric scanners. There is a growing demand for biaxial high speed scanning systems complementing the rapid progress of high power lasers for enabling the development of new high throughput manufacturing processes. This paper presents the concept, design, fabrication and test of biaxial large aperture MEMS scanning mirrors (LAMM) with aperture sizes up to 20 mm for use in high-power laser applications. To keep static and dynamic deformation of the mirror acceptably low, all MEMS mirrors exhibit the full substrate thickness of 725 μm. The LAMM scanners are vacuum-packaged at wafer level using a stack of four wafers. Scanners with aperture sizes up to 12 mm are designed as a 4-DOF oscillator with amplitude magnification, applying electrostatic actuation to drive a motor frame. As an example, a 7-mm scanner is presented that achieves an optical scan angle of 32 degrees at 3.2 kHz. LAMM scanners with aperture sizes of 20 mm are designed as passive high-Q resonators to be externally excited by low-cost electromagnetic or piezoelectric drives. Multi-layer dielectric coatings with a reflectivity higher than 99.9% have made it possible to apply cw-laser power loads of more than 600 W without damaging the MEMS mirror. Finally, a new excitation concept for resonant scanners is presented, providing advantageous shaping of the intensity profiles of projected laser patterns without modulating the laser. This is of interest in lighting applications such as automotive laser headlights.
Sobhani, R; McVicker, R; Spangenberg, C; Rosso, D
2012-01-01
In regions characterized by water scarcity, such as coastal Southern California, groundwater containing chromophoric dissolved organic matter is a viable source of water supply. In the coastal aquifer of Orange County in California, seawater intrusion driven by coastal groundwater pumping increased the concentration of bromide in extracted groundwater from 0.4 mg l⁻¹ in 2000 to over 0.8 mg l⁻¹ in 2004. Bromide, a precursor to bromate formation, is regulated by USEPA and the California Department of Health as a potential carcinogen and therefore must be reduced to a level below 10 μg l⁻¹. This paper compares two processes for treatment of highly coloured groundwater: nanofiltration, and ozone injection coupled with biologically activated carbon. The requirement for bromate removal decreased the water production in the ozonation process to compensate for increased maintenance requirements, and required the adoption of catalytic carbon with an associated increase in capital and operating costs per unit volume. However, due to the absence of oxidant addition in nanofiltration processes, this process is not affected by bromide. We performed a process analysis and a comparative economic analysis of capital and operating costs for both technologies. Our results show that for the case studied in coastal Southern California, nanofiltration has higher throughput and lower specific capital and operating cost when compared to ozone injection with biologically activated carbon. Ozone injection with biologically activated carbon, compared to nanofiltration, has 14% higher capital cost and 12% higher operating costs per unit water produced while operating at the initial throughput. Due to the reduced ozone concentration required to accommodate bromate reduction, the ozonation process throughput is reduced and the actual cost increase (per unit water produced) is 68% higher for capital cost and 30% higher for operations. Copyright © 2011 Elsevier Ltd. All rights reserved.
Wu, Han; Chen, Xinlian; Gao, Xinghua; Zhang, Mengying; Wu, Jinbo; Wen, Weijia
2018-04-03
High-throughput measurements can be achieved using droplet-based assays. In this study, we exploited the principles of wetting behavior and capillarity to guide liquids sliding along a solid surface with hybrid wettability. Oil-covered droplet arrays with uniformly sized and regularly shaped picoliter droplets were successfully generated on hydrophilic-in-hydrophobic patterned substrates. More than ten thousand 31-pL droplets were generated in 5 s without any sophisticated instruments. Covering the droplet arrays with oil during generation not only isolated the droplets from each other but also effectively prevented droplet evaporation. The oil-covered droplet arrays could be stored for more than 2 days with less than 35% volume loss. Single microspheres, microbial cells, or mammalian cells were successfully captured in the droplets. We demonstrate that Escherichia coli could be encapsulated at a certain number (1-4) and cultured for 3 days in droplets. Cell population and morphology were dynamically tracked within individual droplets. Our droplet array generation method enables high-throughput processing and is facile, efficient, and low-cost; in addition, the prepared droplet arrays have enormous potential for applications in chemical and biological assays.
Mathews Griner, Lesley A.; Guha, Rajarshi; Shinn, Paul; Young, Ryan M.; Keller, Jonathan M.; Liu, Dongbo; Goldlust, Ian S.; Yasgar, Adam; McKnight, Crystal; Boxer, Matthew B.; Duveau, Damien Y.; Jiang, Jian-Kang; Michael, Sam; Mierzwa, Tim; Huang, Wenwei; Walsh, Martin J.; Mott, Bryan T.; Patel, Paresma; Leister, William; Maloney, David J.; Leclair, Christopher A.; Rai, Ganesha; Jadhav, Ajit; Peyser, Brian D.; Austin, Christopher P.; Martin, Scott E.; Simeonov, Anton; Ferrer, Marc; Staudt, Louis M.; Thomas, Craig J.
2014-01-01
The clinical development of drug combinations is typically achieved through trial-and-error or via insight gained through a detailed molecular understanding of dysregulated signaling pathways in a specific cancer type. Unbiased small-molecule combination (matrix) screening represents a high-throughput means to explore hundreds and even thousands of drug–drug pairs for potential investigation and translation. Here, we describe a high-throughput screening platform capable of testing compounds in pairwise matrix blocks for the rapid and systematic identification of synergistic, additive, and antagonistic drug combinations. We use this platform to define potential therapeutic combinations for the activated B-cell–like subtype (ABC) of diffuse large B-cell lymphoma (DLBCL). We identify drugs with synergy, additivity, and antagonism with the Bruton’s tyrosine kinase inhibitor ibrutinib, which targets the chronic active B-cell receptor signaling that characterizes ABC DLBCL. Ibrutinib interacted favorably with a wide range of compounds, including inhibitors of the PI3K-AKT-mammalian target of rapamycin signaling cascade, other B-cell receptor pathway inhibitors, Bcl-2 family inhibitors, and several components of chemotherapy that is the standard of care for DLBCL. PMID:24469833
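Matrix screens like the one above classify drug pairs as synergistic, additive, or antagonistic by comparing the observed combination effect against a null model of independence. One common choice (a sketch of the general idea, not necessarily the metric this platform uses) is the Bliss independence score:

```python
def bliss_excess(fa, fb, fab):
    """Bliss independence: if drugs A and B act independently, the
    expected combined fractional inhibition is fa + fb - fa * fb.
    A positive excess (observed minus expected) suggests synergy,
    near zero suggests additivity, and negative suggests antagonism."""
    expected = fa + fb - fa * fb
    return fab - expected
```

For instance, two drugs each inhibiting 50% of cells have an expected combined inhibition of 75%; an observed 90% inhibition gives a Bliss excess of +0.15, flagging the pair as a synergy candidate for follow-up.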
High-throughput measurement of polymer film thickness using optical dyes
NASA Astrophysics Data System (ADS)
Grunlan, Jaime C.; Mehrabi, Ali R.; Ly, Tien
2005-01-01
Optical dyes were added to polymer solutions in an effort to create a technique for high-throughput screening of dry polymer film thickness. Arrays of polystyrene films, cast from a toluene solution and containing methyl red or solvent green, were used to demonstrate the feasibility of this technique. Measurements of the peak visible absorbance of each film were converted to thickness using the Beer-Lambert relationship. These absorbance-based thickness calculations agreed within 10% of the thickness measured using a micrometer for polystyrene films of 10-50 µm. At these thicknesses, the absorbance-based values are believed to be the more accurate of the two. At least for this solvent-based system, thickness was shown to be accurately measured in a high-throughput manner that could potentially be applied to other equivalent systems. Similar water-based films made with poly(sodium 4-styrenesulfonate) dyed with malachite green oxalate or congo red did not show the same level of agreement with the micrometer measurements. Extensive phase separation between polymer and dye resulted in inflated absorbance values and calculated thicknesses often more than 25% greater than those measured with the micrometer. Only at thicknesses below 15 µm could reasonable accuracy be achieved for the water-based films.
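The absorbance-to-thickness conversion follows directly from the Beer-Lambert law A = εcl. A minimal sketch, with hypothetical parameter values for illustration (the paper's dye concentrations and extinction coefficients are not reproduced here):

```python
def film_thickness_um(absorbance, epsilon, dye_conc):
    """Invert Beer-Lambert, A = epsilon * c * l, for path length l.
    With epsilon in L mol^-1 cm^-1 and dye_conc in mol L^-1, the
    path length comes out in cm; convert to micrometres."""
    path_cm = absorbance / (epsilon * dye_conc)
    return path_cm * 1e4  # cm -> um
```

For example, a peak absorbance of 0.5 with an extinction coefficient of 10⁴ L mol⁻¹ cm⁻¹ at a dye concentration of 0.01 mol L⁻¹ implies a 50 µm film, squarely in the range where the authors report 10% agreement with micrometer measurements.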
Automatic poisson peak harvesting for high throughput protein identification.
Breen, E J; Hopwood, F G; Williams, K L; Wilkins, M R
2000-06-01
High throughput identification of proteins by peptide mass fingerprinting requires an efficient means of picking peaks from mass spectra. Here, we report the development of a peak harvester to automatically pick monoisotopic peaks from spectra generated on matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) mass spectrometers. The peak harvester uses advanced mathematical morphology and watershed algorithms to first process spectra to stick representations. Subsequently, Poisson modelling is applied to determine which peak in an isotopically resolved group represents the monoisotopic mass of a peptide. We illustrate the features of the peak harvester with mass spectra of standard peptides, digests of gel-separated bovine serum albumin, and with Escherichia coli proteins prepared by two-dimensional polyacrylamide gel electrophoresis. In all cases, the peak harvester proved effective in its ability to pick similar monoisotopic peaks as an experienced human operator, and also proved effective in the identification of monoisotopic masses in cases where isotopic distributions of peptides were overlapping. The peak harvester can be operated in an interactive mode, or can be completely automated and linked through to peptide mass fingerprinting protein identification tools to achieve high throughput automated protein identification.
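The Poisson modelling step rests on the fact that a peptide's isotopic envelope is approximately Poisson-distributed, with a mean that grows with mass. The sketch below illustrates the idea; the ~1/1800 Da rate constant and the least-squares matcher are illustrative assumptions, not the paper's fitted model.

```python
import math

def poisson_isotope_profile(mono_mass_da, n_peaks):
    """Expected relative isotope-peak intensities modelled as
    Poisson(lam), with lam growing linearly with peptide mass
    (roughly one expected heavy isotope per ~1800 Da for typical
    peptide composition). Peak 0 is the monoisotopic peak."""
    lam = mono_mass_da / 1800.0
    return [math.exp(-lam) * lam ** k / math.factorial(k)
            for k in range(n_peaks)]

def profile_mismatch(observed, mono_mass_da):
    """Least-squares mismatch between a normalised observed isotope
    group and the Poisson model; sliding the assumed group start and
    minimising this score is one way to decide which observed peak
    is the monoisotopic one."""
    model = poisson_isotope_profile(mono_mass_da, len(observed))
    total = sum(observed)
    return sum((o / total - m) ** 2 for o, m in zip(observed, model))
```

At 1800 Da the model predicts the monoisotopic and first heavy-isotope peaks to be equally intense; at larger masses the envelope shifts right, which is exactly why naive "tallest peak" picking fails and a model-based chooser is needed.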
TreeMAC: Localized TDMA MAC protocol for real-time high-data-rate sensor networks
Song, W.-Z.; Huang, R.; Shirazi, B.; Husent, R.L.
2009-01-01
Earlier sensor network MAC protocols focus on energy conservation in low-duty-cycle applications, while some recent applications involve real-time high-data-rate signals. This motivates us to design an innovative localized TDMA MAC protocol to achieve high throughput and low congestion in data-collection sensor networks, besides energy conservation. TreeMAC divides a time cycle into frames and each frame into slots. A parent determines its children's frame assignment based on their relative bandwidth demand, and each node calculates its own slot assignment based on its hop count to the sink. This 2-dimensional frame-slot assignment algorithm has the following nice theoretical properties. First, for any node at any time slot, there is at most one active sender in its neighborhood (including itself). Second, packet scheduling with TreeMAC is bufferless, which therefore minimizes the probability of network congestion. Third, the data throughput to the gateway is at least 1/3 of the optimum assuming reliable links. Our experiments on a 24-node testbed demonstrate that the TreeMAC protocol significantly improves network throughput and energy efficiency compared to TinyOS's default CSMA MAC protocol and a recent TDMA MAC protocol, Funneling-MAC [8]. © 2009 IEEE.
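The 2-dimensional frame-slot idea can be sketched as follows. This is a simplified illustration under our own assumptions — three slots per frame, with the slot index derived from the hop count modulo three — rather than the paper's exact assignment rule; the parent's demand-proportional frame allocation is taken as given.

```python
def treemac_schedule(frames_assigned, hop_count, slots_per_frame=3):
    """Sketch of a 2-D frame-slot schedule: a parent hands each child
    a set of frame indices sized by its bandwidth demand (computed
    elsewhere); within each assigned frame the node transmits in the
    slot given by its hop count modulo slots_per_frame, so nodes at
    different tree depths are separated in time."""
    slot = hop_count % slots_per_frame
    return [(frame, slot) for frame in sorted(frames_assigned)]
```

A node four hops from the sink that was assigned frames 0 and 2 would transmit in slot 1 of each: `[(0, 1), (2, 1)]`.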
Adaptive limited feedback for interference alignment in MIMO interference channels.
Zhang, Yang; Zhao, Chenglin; Meng, Juan; Li, Shibao; Li, Li
2016-01-01
It is very important that a radar sensor network has autonomous capabilities such as self-management. Quite often, MIMO interference channels are applied to radar sensor networks, and for self-management purposes, interference management in MIMO interference channels is critical. Interference alignment (IA) has the potential to dramatically improve system throughput by effectively mitigating interference in multi-user networks at high signal-to-noise ratio (SNR). However, the implementation of IA predominantly relies on perfect and global channel state information (CSI) at all transceivers. A large amount of CSI has to be fed back to all transmitters, resulting in a proliferation of feedback bits. Thus, IA with limited feedback has been introduced to reduce the total feedback overhead. In this paper, by exploiting the advantage of heterogeneous path loss, we first investigate the throughput of IA with limited feedback in interference channels while each user transmits multiple streams simultaneously, and derive an upper bound on the sum rate in terms of the transmit power and feedback bits. Moreover, we propose a dynamic feedback scheme via bit allocation to reduce the throughput loss due to limited feedback. Simulation results demonstrate that the dynamic feedback scheme achieves better performance in terms of sum rate.
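In limited-feedback IA analyses, the residual interference after CSI quantization typically scales like P · 2^(-B/(Nt-1)), which suggests giving more feedback bits to stronger links. The greedy allocator below is a sketch of that dynamic-allocation idea under this assumed loss model, not the paper's exact scheme.

```python
def allocate_feedback_bits(powers, total_bits, nt=2):
    """Greedily distribute a total feedback-bit budget across users.
    The residual-interference term for user i is modelled as
    P_i * 2 ** (-B_i / (nt - 1)); each bit goes to the user whose
    term is currently largest, so the worst quantization loss
    shrinks fastest."""
    bits = [0] * len(powers)
    for _ in range(total_bits):
        loss = [p * 2 ** (-b / (nt - 1)) for p, b in zip(powers, bits)]
        bits[loss.index(max(loss))] += 1
    return bits
```

With two transmit antennas, a user whose path gain is four times its neighbour's receives all of a small bit budget before the weaker link gets any, mirroring the heterogeneous-path-loss intuition in the abstract.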
Automatic Segmentation of High-Throughput RNAi Fluorescent Cellular Images
Yan, Pingkum; Zhou, Xiaobo; Shah, Mubarak; Wong, Stephen T. C.
2010-01-01
High-throughput genome-wide RNA interference (RNAi) screening is emerging as an essential tool to assist biologists in understanding complex cellular processes. The large number of images produced in each study makes manual analysis intractable; hence, automatic cellular image analysis becomes an urgent need, where segmentation is the first and one of the most important steps. In this paper, a fully automatic method for segmentation of cells from genome-wide RNAi screening images is proposed. Nuclei are first extracted from the DNA channel by using a modified watershed algorithm. Cells are then extracted by modeling the interaction between them as well as combining both gradient and region information in the Actin and Rac channels. A new energy functional is formulated based on a novel interaction model for segmenting tightly clustered cells with significant intensity variance and specific phenotypes. The energy functional is minimized by using a multiphase level set method, which leads to a highly effective cell segmentation method. Promising experimental results demonstrate that automatic segmentation of high-throughput genome-wide multichannel screening can be achieved by using the proposed method, which may also be extended to other multichannel image segmentation problems. PMID:18270043
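The nuclei-extraction step (threshold the DNA channel, take a distance transform, seed a watershed from its regional maxima) can be sketched with SciPy; this is an assumed, minimal marker-extraction pipeline, not the authors' modified watershed algorithm:

```python
import numpy as np
from scipy import ndimage as ndi

# Assumed minimal marker-extraction pipeline for watershed-based nuclei
# segmentation (not the paper's exact method): threshold the DNA channel,
# compute the distance transform, and label its regional maxima so that
# each nucleus contributes one watershed seed.

def nuclei_markers(dna, threshold=0.5, min_distance=2):
    mask = dna > threshold
    dist = ndi.distance_transform_edt(mask)
    size = 2 * min_distance + 1
    maxima = (dist == ndi.maximum_filter(dist, size=size)) & mask
    markers, n_nuclei = ndi.label(maxima)
    return markers, n_nuclei

# Two well-separated square "nuclei" yield two markers.
img = np.zeros((20, 20))
img[3:8, 3:8] = 1.0
img[12:18, 12:18] = 1.0
markers, count = nuclei_markers(img)
```

In a full pipeline the `markers` array would seed a watershed over the inverted distance map to split touching nuclei.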
High-Throughput Fabrication of Flexible and Transparent All-Carbon Nanotube Electronics.
Chen, Yong-Yang; Sun, Yun; Zhu, Qian-Bing; Wang, Bing-Wei; Yan, Xin; Qiu, Song; Li, Qing-Wen; Hou, Peng-Xiang; Liu, Chang; Sun, Dong-Ming; Cheng, Hui-Ming
2018-05-01
This study reports a simple and effective technique for the high-throughput fabrication of flexible all-carbon nanotube (CNT) electronics using a photosensitive dry film instead of traditional liquid photoresists. A 10-inch photosensitive dry film is laminated onto a flexible substrate by a roll-to-roll technology, and a 5 µm pattern resolution of the resulting CNT films is achieved for the construction of flexible and transparent all-CNT thin-film transistors (TFTs) and integrated circuits. The fabricated TFTs exhibit a desirable electrical performance including an on-off current ratio of more than 10⁵, a carrier mobility of 33 cm² V⁻¹ s⁻¹, and a small hysteresis. The standard deviations of on-current and mobility are, respectively, 5% and 2% of the average value, demonstrating the excellent reproducibility and uniformity of the devices, which allows constructing a large noise margin inverter circuit with a voltage gain of 30. This study indicates that a photosensitive dry film is very promising for the low-cost, fast, reliable, and scalable fabrication of flexible and transparent CNT-based integrated circuits, and opens up opportunities for future high-throughput CNT-based printed electronics.
A centrifuge CO2 pellet cleaning system
NASA Technical Reports Server (NTRS)
Foster, C. A.; Fisher, P. W.; Nelson, W. D.; Schechter, D. E.
1995-01-01
An advanced turbine/CO2 pellet accelerator is being evaluated as a depaint technology at Oak Ridge National Laboratory (ORNL). The program, sponsored by Warner Robins Air Logistics Center (ALC), Robins Air Force Base, Georgia, has developed a robot-compatible apparatus that efficiently accelerates pellets of dry ice with a high-speed rotating wheel. In comparison to the more conventional compressed air 'sandblast' pellet accelerators, the turbine system can achieve higher pellet speeds, has precise speed control, and is more than ten times as efficient. A preliminary study of the apparatus as a depaint technology has been undertaken. Depaint rates of military epoxy/urethane paint systems on 2024 and 7075 aluminum panels as a function of pellet speed and throughput have been measured. In addition, methods of enhancing the strip rate by combining infra-red heat lamps with pellet blasting and by combining the use of environmentally benign solvents with the pellet blasting have also been studied. The design and operation of the apparatus will be discussed along with data obtained from the depaint studies.
Electronic spectra from TDDFT and machine learning in chemical space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramakrishnan, Raghunathan; Hartmann, Mia; Tapavicza, Enrico
Due to its favorable computational efficiency, time-dependent (TD) density functional theory (DFT) enables the prediction of electronic spectra in a high-throughput manner across chemical space. Its predictions, however, can be quite inaccurate. We resolve this issue with machine learning models trained on deviations of reference second-order approximate coupled-cluster (CC2) singles and doubles spectra from TDDFT counterparts, or even from the DFT gap. We applied this approach to low-lying singlet-singlet vertical electronic spectra of over 20 000 synthetically feasible small organic molecules with up to eight CONF atoms. The prediction errors decay monotonically as a function of training set size. For a training set of 10 000 molecules, CC2 excitation energies can be reproduced to within ±0.1 eV for the remaining molecules. Analysis of our spectral database via chromophore counting suggests that even higher accuracies can be achieved. Based on the evidence collected, we discuss open challenges associated with data-driven modeling of high-lying spectra and transition intensities.
Wu, Fengfeng; Jin, Yamei; Li, Dandan; Zhou, Yuyi; Guo, Lunan; Zhang, Mengyue; Xu, Xueming; Yang, Na
2017-06-01
To improve the economic value of lignocellulosic biomasses, an innovative electrofluidic technology has been applied to the efficient hydrolysis of corncob. The system combines fluidic reactors and induced voltages via a magnetoelectric coupling effect. The excitation voltage had a positive impact on reducing sugar content (RSC), but increasing the voltage frequency over 400-700 Hz caused a slight decline in RSC. At higher temperatures (70-80 °C), the electrical effect on hydrolysis was limited. Energy efficiency increased when metallic ions were added and a series of in-phase induced voltages was applied to promote hydrolysis. In addition, the 4-series system with in-phase and reverse-phase induced voltages under synchronous magnetic flux exhibited a significant influence on the RSC, with a maximum increase of 56%. High throughput could be achieved by increasing the number of series in a compact system. Electrofluidic hydrolysis avoids electrochemical reaction, electrode corrosion, and sample contamination. Copyright © 2017 Elsevier Ltd. All rights reserved.
Interval Management: Development and Implementation of an Airborne Spacing Concept
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Penhallegon, William J.; Weitz, Lesley A.; Bone, Randall S.; Levitt, Ian; Flores Kriegsfeld, Julia A.; Arbuckle, Doug; Johnson, William C.
2016-01-01
Interval Management is a suite of ADS-B-enabled applications that allows the air traffic controller to instruct a flight crew to achieve and maintain a desired spacing relative to another aircraft. The flight crew, assisted by automation, manages the speed of their aircraft to deliver more precise inter-aircraft spacing than is otherwise possible, which increases traffic throughput at the same or higher levels of safety. Interval Management has evolved from a long history of research and is now seen as a core NextGen capability. With avionics standards recently published, completion of an Investment Analysis Readiness Decision by the FAA, and multiple flight tests planned, Interval Management will soon be part of everyday use in the National Airspace System. Second generation, Advanced Interval Management capabilities are being planned to provide a wider range of operations and improved performance and benefits. This paper briefly reviews the evolution of Interval Management and describes current development and deployment plans. It also reviews concepts under development as the next generation of applications.
Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun
2015-01-01
We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) to improve throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it enables us to identify a necessary condition for SIC on distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region of the base station into small pieces and choose a point within each piece for base station placement. The point with the largest throughput is identified as the solution. The complexity of this algorithm is polynomial. Simulation results show that this algorithm can achieve about 25% improvement compared with the case that the base station is placed at the center of the network coverage area when using SIC. PMID:25594600
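The placement search the abstract describes (divide the feasible region into pieces, score a candidate point in each, keep the best) can be sketched as follows; the path-loss model, noise level, node coordinates, and the strongest-first SIC decoding proxy are all illustrative assumptions, not the paper's exact model:

```python
import itertools
import math

# Toy version of the grid-based placement search: score every candidate
# base-station location with an SIC-style throughput proxy and keep the
# best. All numeric parameters here are assumed for illustration.

def throughput_with_sic(bs, nodes, alpha=2.0, noise=1e-3):
    dists = [max(math.dist(bs, n), 1e-3) for n in nodes]  # guard zero distance
    powers = sorted((d ** -alpha for d in dists), reverse=True)
    rate = 0.0
    for i, p in enumerate(powers):
        # with perfect SIC, a signal decoded earlier only sees interference
        # from the signals decoded after it
        rate += math.log2(1 + p / (noise + sum(powers[i + 1:])))
    return rate

def best_placement(nodes, grid_step=0.25, extent=2.0):
    ticks = [i * grid_step for i in range(int(extent / grid_step) + 1)]
    return max(itertools.product(ticks, ticks),
               key=lambda bs: throughput_with_sic(bs, nodes))

nodes = [(0.1, 0.1), (1.9, 0.3), (1.7, 1.9)]
bs = best_placement(nodes)
```

The grid search is polynomial in the number of candidate points, mirroring the polynomial complexity claimed for the paper's algorithm.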
Shape Memory Micro- and Nanowire Libraries for the High-Throughput Investigation of Scaling Effects.
Oellers, Tobias; König, Dennis; Kostka, Aleksander; Xie, Shenqie; Brugger, Jürgen; Ludwig, Alfred
2017-09-11
The scaling behavior of Ti-Ni-Cu shape memory thin-film micro- and nanowires of different geometry is investigated with respect to its influence on the martensitic transformation properties. Two processes for the high-throughput fabrication of Ti-Ni-Cu micro- to nanoscale thin-film wire libraries and the subsequent investigation of the transformation properties are reported. The libraries are fabricated with compositional and geometrical (wire width) variations to investigate the influence of these parameters on the transformation properties. Interesting behaviors were observed: phase transformation temperatures change in the range from 1 to 72 °C (austenite finish, A_f) and 13 to 66 °C (martensite start, M_s), and the thermal hysteresis from -3.5 to 20 K. It is shown that a vanishing hysteresis can be achieved for special combinations of sample geometry and composition.
Turetschek, Reinhard; Lyon, David; Desalegn, Getinet; Kaul, Hans-Peter; Wienkoop, Stefanie
2016-01-01
The proteomic study of non-model organisms, such as many crop plants, is challenging due to the lack of comprehensive genome information. Changing environmental conditions require the study and selection of adapted cultivars. Mutations, inherent to cultivars, hamper protein identification and thus considerably complicate the qualitative and quantitative comparison in large-scale systems biology approaches. With this workflow, cultivar-specific mutations are detected from high-throughput comparative MS analyses by extracting sequence polymorphisms with de novo sequencing. Stringent criteria are suggested to filter for high-confidence mutations. Subsequently, these polymorphisms complement the initially used database, which is ready to use with any preferred database search algorithm. In our example, we thereby identified 26 specific mutations in two cultivars of Pisum sativum and achieved a 17% increase in the number of peptide spectrum matches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.
In this paper, we generalize the well-known index coding problem to exploit the structure in the source-data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding) as opposed to the traditional index coding problem which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near optimal index codes for both subspace-aware and -unaware cases. In conclusion, our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
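The alternating minimization approach can be illustrated, in a heavily simplified subspace-recovery form, as alternating least squares for a low-rank factorization X ≈ UV; this is not the paper's index-coding algorithm, only the generic optimization pattern it builds on:

```python
import numpy as np

# Alternating least squares for a low-rank factorization: fix one factor,
# solve a least-squares problem for the other, and alternate. A generic
# sketch of the alternating-minimization pattern, not the paper's method.

def altmin_lowrank(X, rank, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, rank))
    for _ in range(iters):
        V = np.linalg.lstsq(U, X, rcond=None)[0]        # fix U, solve for V
        U = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T  # fix V, solve for U
    return U, V

# Exactly rank-2 data is recovered to near machine precision.
X = np.outer([1., 2., 3.], [1., 0., 1., 0.]) + np.outer([0., 1., 1.], [0., 1., 0., 1.])
U, V = altmin_lowrank(X, rank=2)
err = np.linalg.norm(X - U @ V)
```

With fully observed, exactly low-rank data each alternation is a projection, so convergence is immediate; the interesting (and harder) cases add constraints, as in index code design.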
The High-Throughput Analyses Era: Are We Ready for the Data Struggle?
D'Argenio, Valeria
2018-03-02
Recent and rapid technological advances in molecular sciences have dramatically increased the ability to carry out high-throughput studies characterized by big data production. This, in turn, led to the consequent negative effect of highlighting the presence of a gap between data yield and their analysis. Indeed, big data management is becoming an increasingly important aspect of many fields of molecular research including the study of human diseases. Now, the challenge is to identify, within the huge amount of data obtained, that which is of clinical relevance. In this context, issues related to data interpretation, sharing and storage need to be assessed and standardized. Once this is achieved, the integration of data from different -omic approaches will improve the diagnosis, monitoring and therapy of diseases by allowing the identification of novel, potentially actionable biomarkers in view of personalized medicine.
A high-throughput screen for single gene activities: isolation of apoptosis inducers.
Albayrak, Timur; Grimm, Stefan
2003-05-16
We describe a novel genetic screen that is performed by transfecting every individual clone of an expression library into a separate population of cells in a high-throughput mode. The screen allows one to achieve a hitherto unattained sensitivity in expression cloning which was exploited in a first read-out to clone apoptosis-inducing genes. This led to the isolation of several genes whose proteins induce distinct phenotypes of apoptosis in 293T cells. One of the isolated genes is the tumor suppressor cytochrome b(L) (cybL), a component of the respiratory chain complex II, that diminishes the activity of this complex for apoptosis induction. This gene is more efficient and specific for causing cell death than a drug with the same activity. These results suggest further applications, both of the isolated genes and the screen.
NASA Astrophysics Data System (ADS)
Mbanjwa, Mesuli B.; Chen, Hao; Fourie, Louis; Ngwenya, Sibusiso; Land, Kevin
2014-06-01
Multiplexed or parallelised droplet microfluidic systems allow for increased throughput in the production of emulsions and microparticles, while maintaining a small footprint and utilising minimal ancillary equipment. The current paper demonstrates the design and fabrication of a multiplexed microfluidic system for producing biocatalytic microspheres. The microfluidic system consists of an array of 10 parallel microfluidic circuits, operated simultaneously to demonstrate increased production throughput. The flow distribution was achieved using a principle of reservoirs supplying individual microfluidic circuits. The microfluidic devices were fabricated in poly(dimethylsiloxane) (PDMS) using soft lithography techniques. The consistency of the flow distribution was determined by measuring the size variations of the microspheres produced. The coefficient of variation of the particles was determined to be 9%, an indication of consistent particle formation and good flow distribution between the 10 microfluidic circuits.
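The reported consistency check reduces to a coefficient of variation (standard deviation over mean) of the measured microsphere sizes; the diameters below are made up for illustration, while the 9% figure in the abstract is the authors' measurement:

```python
# Coefficient of variation (CV = std / mean) over a set of measured
# microsphere diameters. The input values are hypothetical.

def coefficient_of_variation(diameters):
    n = len(diameters)
    mean = sum(diameters) / n
    var = sum((d - mean) ** 2 for d in diameters) / n  # population variance
    return (var ** 0.5) / mean

cv = coefficient_of_variation([98, 102, 100, 105, 95])  # µm, hypothetical
```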
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel with a sheet of light rather than the existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from microliters to nanoliters per minute. Moreover, this opens up in vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of fluorescently labeled mitochondrial networks in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, F; Titt, U; Patel, D
2015-06-15
Purpose: To design and validate experimental setups for the investigation of dose and LET effects on cell kill for protons, helium and carbon ions, in high-throughput and high-accuracy cell experiments. Methods: Using the Geant4 Monte Carlo toolkit, we designed 3 custom range compensators to simultaneously expose cancer cells to different doses and LETs from selected portions of pristine ion beams, from the entrance to points just beyond the Bragg peak. To minimize the spread of LET, we utilized mono-energetic uniformly scanned beams at the HIT facility with support from the DKFZ. Using different entrance doses and LETs, a matrix of cell survival data was acquired, leading to a specific RBE matrix. We utilized the standard clonogenic assay for H460 and H1437 lung-cancer cell lines grown in 96-well plates. Using these plates, the data could be acquired in a small number of exposures. The ion-specific compensators were located in a horizontal beam, designed to hold two 96-well plates (12 columns by 8 rows) at an angle of 30° with respect to the beam direction. Results: Using about 20 hours of beam time, a total of about 11,000 wells containing cancer cells could be irradiated. The H460 and H1437 cell lines exhibited a significant dependence on LET when they were exposed to comparable doses. The results were similar for each of the investigated ion species, and indicate the need to incorporate RBE into the ion therapy planning process. Conclusion: The experimental design developed is a viable approach to rapidly acquire large amounts of accurate in-vitro RBE data. We plan to further improve the design to achieve higher accuracy and throughput, thereby facilitating the irradiation of multiple cell types. The results are indicative of the possibility of developing a new degree of freedom (variable RBE) for future clinical ion therapy optimization. Work supported by the Sister Institute Network Fund (SINF), University of Texas MD Anderson Cancer Center.
Kang, Yoon-Tae; Kim, Young Jun; Bu, Jiyoon; Cho, Young-Ho; Han, Sae-Won; Moon, Byung-In
2017-09-21
We present a microfluidic device for the capture and release of circulating exosomes from human blood. The exosome-specific dual-patterned immunofiltration (ExoDIF) device is composed of two distinct immuno-patterned layers, and is capable of enhancing the chance of binding between the antibody and exosomes by generating mechanical whirling, thus achieving high-throughput exosome isolation with high specificity. Moreover, follow-up recovery after the immuno-affinity based isolation, via cleavage of a linker, enables further downstream analysis. We verified the performance of the present device using MCF-7 secreted exosomes and found that both the concentration and proportion of exosome-sized vesicles were higher than in the samples obtained from the conventional exosome isolation kit. We then isolated exosomes from the human blood samples with our device to compare the exosome level between cancer patients and healthy donors. Cancer patients show a significantly higher exosome level with higher selectivity when validating the exosome-sized vesicles using both electron microscopy and nanoparticle tracking analysis. The captured exosomes from cancer patients also express abundant cancer-associated antigens, the epithelial cell adhesion molecule (EpCAM) on their surface. Our simple and rapid exosome recovery technique has huge potential to elucidate the function of exosomes in cancer patients and can thus be applied for various exosome-based cancer research studies.
High Speed, Low Cost Fabrication of Gas Diffusion Electrodes for Membrane Electrode Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeCastro, Emory S.; Tsou, Yu-Min; Liu, Zhenyu
Fabrication of membrane electrode assemblies (MEAs) depends on creating inks or pastes of catalyst and binder, and applying this suspension to either the membrane (catalyst coated membrane) or gas diffusion media (gas diffusion electrode) and respectively laminating either gas diffusion media or gas diffusion electrodes (GDEs) to the membrane. One barrier to cost effective fabrication for either of these approaches is the development of stable and consistent suspensions. This program investigated the fundamental forces that destabilize the suspensions and developed innovative approaches to create new, highly stable formulations. These more concentrated formulations needed fewer application passes, could be coated over longer and wider substrates, and resulted in significantly lower coating defects. In March of 2012 BASF Fuel Cell released a new high temperature product based on these advances, whereby our customers received higher performing, more uniform MEAs resulting in higher stack build yields. Furthermore, these new materials resulted in an "instant" increase in capacity due to higher product yields and material throughput. Although not part of the original scope of this program, these new formulations have also led us to materials that demonstrate equivalent performance with 30% less precious metal in the anode. This program has achieved two key milestones in DOE's Manufacturing R&D program: demonstration of processes for direct coating of electrodes and continuous in-line measurement for component fabrication.
Chabbert, Christophe D; Adjalley, Sophie H; Steinmetz, Lars M; Pelechano, Vicent
2018-01-01
Chromatin immunoprecipitation followed by sequencing (ChIP-Seq) or microarray hybridization (ChIP-on-chip) are standard methods for the study of transcription factor binding sites and histone chemical modifications. However, these approaches only allow profiling of a single factor or protein modification at a time. In this chapter, we present Bar-ChIP, a higher-throughput version of ChIP-Seq that relies on the direct ligation of molecular barcodes to chromatin fragments. Bar-ChIP enables the concurrent profiling of multiple DNA-protein interactions and is therefore amenable to experimental scale-up, without the need for any robotic instrumentation.
MoRu/Be multilayers for extreme ultraviolet applications
Bajt, Sasa C.; Wall, Mark A.
2001-01-01
High reflectance, low intrinsic roughness and low stress multilayer systems for extreme ultraviolet (EUV) lithography comprise amorphous MoRu layers and crystalline Be layers. Reflectance greater than 70% has been demonstrated for MoRu/Be multilayers with 50 bilayer pairs. Optical throughput of MoRu/Be multilayers can be 30-40% higher than that of Mo/Be multilayer coatings. The throughput can be improved using a diffusion barrier to make sharper interfaces. A capping layer on the top surface of the multilayer improves the long-term reflectance and EUV radiation stability of the multilayer by forming a very thin native oxide that is water resistant.
ERIC Educational Resources Information Center
Pletcher, Mathew T.; Wiltshire, Tim; Tarantino, Lisa M.; Mayford, Mark; Reijmers, Leon G.; Coats, Jennifer K.
2006-01-01
Targeted mutagenesis in mice has shown that genes from a wide variety of gene families are involved in memory formation. The efficient identification of genes involved in learning and memory could be achieved by random mutagenesis combined with high-throughput phenotyping. Here, we provide the first report of a mutagenesis screen that has…
Weimar, M R; Cheung, J; Dey, D; McSweeney, C; Morrison, M; Kobayashi, Y; Whitman, W B; Carbone, V; Schofield, L R; Ronimus, R S; Cook, G M
2017-08-01
Hydrogenotrophic methanogens typically require strictly anaerobic culturing conditions in glass tubes with overpressures of H₂ and CO₂ that are both time-consuming and costly. To increase the throughput for screening chemical compound libraries, 96-well microtiter plate methods for the growth of a marine (environmental) methanogen, Methanococcus maripaludis strain S2, and the rumen methanogen Methanobrevibacter sp. AbM4 were developed. A number of key parameters (inoculum size, reducing agents for medium preparation, assay duration, inhibitor solvents, and culture volume) were optimized to achieve robust and reproducible growth in a high-throughput microtiter plate format. The method was validated using published methanogen inhibitors and statistically assessed for sensitivity and reproducibility. The Sigma-Aldrich LOPAC library containing 1,280 pharmacologically active compounds and an in-house natural product library (120 compounds) were screened against M. maripaludis as a proof of utility. This screen identified a number of bioactive compounds, and MIC values were confirmed for some of them against M. maripaludis and Methanobrevibacter sp. AbM4. The developed method provides a significant increase in throughput for screening compound libraries and can now be used to screen larger compound libraries to discover novel methanogen-specific inhibitors for the mitigation of ruminant methane emissions. IMPORTANCE: Methane emissions from ruminants are a significant contributor to global greenhouse gas emissions, and new technologies are required to control emissions in the agriculture technology (agritech) sector. The discovery of small-molecule inhibitors of methanogens using high-throughput phenotypic (growth) screening against compound libraries (synthetic and natural products) is an attractive avenue. However, phenotypic inhibitor screening is currently hindered by our inability to grow methanogens in a high-throughput format. 
We have developed, optimized, and validated a high-throughput 96-well microtiter plate assay for growing environmental and rumen methanogens. Using this platform, we identified several new inhibitors of methanogen growth, demonstrating the utility of this approach to fast track the development of methanogen-specific inhibitors for controlling ruminant methane emissions. Copyright © 2017 American Society for Microbiology.
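A typical read-out of such a microtiter growth screen is the MIC: the lowest compound concentration at which growth (measured as optical density) stays below a threshold. The dilution series, OD values, and threshold below are hypothetical:

```python
# Hypothetical MIC read-out from a 96-well growth screen: the MIC is the
# lowest concentration in a dilution series whose final optical density
# stays below a no-growth threshold. All values are made up.

def mic(concentrations, od_readings, threshold=0.1):
    inhibited = [c for c, od in zip(concentrations, od_readings) if od < threshold]
    return min(inhibited) if inhibited else None

# Growth is suppressed at 100, 50 and 25 µM but not at lower concentrations.
mic_um = mic([100, 50, 25, 12.5, 6.25], [0.02, 0.03, 0.05, 0.4, 0.6])
```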
Forreryd, Andy; Johansson, Henrik; Albrekt, Ann-Sofie; Lindstedt, Malin
2014-05-16
Allergic contact dermatitis (ACD) develops upon exposure to certain chemical compounds termed skin sensitizers. To reduce the occurrence of skin sensitizers, chemicals are regularly screened for their capacity to induce sensitization. The recently developed Genomic Allergen Rapid Detection (GARD) assay is an in vitro alternative to animal testing for identification of skin sensitizers, classifying chemicals by evaluating transcriptional levels of a genomic biomarker signature. During assay development and biomarker identification, genome-wide expression analysis was applied using microarrays covering approximately 30,000 transcripts. However, the microarray platform suffers from drawbacks in terms of low sample throughput, high cost per sample and time-consuming protocols, and is a limiting factor for adaptation of GARD into a routine assay for screening of potential sensitizers. With the aim of simplifying assay procedures, improving technical parameters and increasing sample throughput, we assessed the performance of three high-throughput gene expression platforms (nCounter®, BioMark HD™, and OpenArray®) and correlated their performance metrics against our previously generated microarray data. We measured the levels of 30 transcripts from the GARD biomarker signature across 48 samples. Detection sensitivity, reproducibility, correlations and overall structure of gene expression measurements were compared across platforms. Gene expression data from all of the evaluated platforms could be used to classify most of the sensitizers from non-sensitizers in the GARD assay. Results also showed high data quality and acceptable reproducibility for all platforms but only medium to poor correlations of expression measurements across platforms. In addition, the evaluated platforms were superior to the microarray platform in terms of cost efficiency, simplicity of protocols and sample throughput. 
We evaluated the performance of three non-array-based platforms using a limited set of transcripts from the GARD biomarker signature. We demonstrated that acceptable discriminatory power, in terms of separation between sensitizers and non-sensitizers in the GARD assay, can be achieved while reducing assay costs, simplifying assay procedures and increasing sample throughput by using an alternative platform, providing a first step toward preparing GARD for formal validation and adapting the assay for industrial screening of potential sensitizers.
High throughput nanoimprint lithography for semiconductor memory applications
NASA Astrophysics Data System (ADS)
Ye, Zhengmao; Zhang, Wei; Khusnatdinov, Niyaz; Stachowiak, Tim; Irving, J. W.; Longsine, Whitney; Traub, Matthew; Fletcher, Brian; Liu, Weijun
2017-03-01
Imprint lithography is a promising technology for replication of nano-scale features. For semiconductor device applications, Canon deposits a low viscosity resist on a field-by-field basis using jetting technology. A patterned mask is lowered into the resist fluid, which then quickly flows into the relief patterns in the mask by capillary action. Following this filling step, the resist is crosslinked under UV radiation, and then the mask is removed, leaving a patterned resist on the substrate. There are two critical components to meeting throughput requirements for imprint lithography. Using a similar approach to what is already done for many deposition and etch processes, imprint stations can be clustered to enhance throughput. The FPA-1200NZ2C is a four-station cluster system designed for high volume manufacturing. For a single station, throughput includes overhead, resist dispense, resist fill time (or spread time), exposure and separation. Resist exposure time and mask/wafer separation are well understood processing steps with typical durations on the order of 0.10 to 0.20 seconds. To achieve a total process throughput of 17 wafers per hour (wph) for a single station, it is necessary to complete the fluid fill step in 1.2 seconds. For a throughput of 20 wph, fill time must be reduced to only 1.1 seconds. There are several parameters that can impact resist filling. Key parameters include resist drop volume (smaller is better), system controls (which address drop spreading after jetting), Design for Imprint or DFI (to accelerate drop spreading) and material engineering (to promote wetting between the resist and underlying adhesion layer). In addition, it is mandatory to maintain fast filling, even for edge field imprinting. In this paper, we address the improvements made in all of these parameters to first enable a 1.20 second filling process for a device-like pattern and have demonstrated this capability for both full fields and edge fields. 
Non-fill defectivity is well under 1.0 defects/cm² for both field types. Next, by further reducing drop volume and optimizing drop patterns, a fill time of 1.1 seconds was demonstrated.
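The wafer-per-hour and fill-time figures above imply a per-field time budget. A minimal sketch of that arithmetic follows; the field count and the assumed non-fill step durations are illustrative assumptions, not values from the abstract (only the 17/20 wph targets and the 1.2 s / 1.1 s fill times appear in the text):

```python
# Hypothetical single-station time budget for UV nanoimprint lithography.
def per_field_budget(wafers_per_hour, fields_per_wafer):
    """Seconds available per imprint field at a given station throughput."""
    return 3600.0 / wafers_per_hour / fields_per_wafer

FIELDS = 84  # assumed field count for a 300 mm wafer (hypothetical)

budget_17 = per_field_budget(17, FIELDS)  # ~2.52 s per field at 17 wph
budget_20 = per_field_budget(20, FIELDS)  # ~2.14 s per field at 20 wph

# With exposure and separation each around 0.1-0.2 s (per the text), a
# 1.2 s fill at 17 wph leaves roughly 1.3 s per field for dispense and
# overhead; at 20 wph the whole budget tightens, motivating the 1.1 s fill.
fill_17, fill_20 = 1.2, 1.1
print(budget_17 - fill_17, budget_20 - fill_20)
```

The point of the sketch is only that the fill step dominates the per-field budget, so throughput gains must come primarily from faster filling.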
NASA Technical Reports Server (NTRS)
Shay, Rick; Swieringa, Kurt A.; Baxley, Brian T.
2012-01-01
Flight deck based Interval Management (FIM) applications using ADS-B are being developed to improve both the safety and capacity of the National Airspace System (NAS). FIM is expected to improve the safety and efficiency of the NAS by giving pilots the technology and procedures to precisely achieve an interval behind the preceding aircraft by a specific point. Concurrently but independently, Optimized Profile Descents (OPD) are being developed to help reduce fuel consumption and noise; however, the range of speeds available when flying an OPD results in a decrease in the delivery precision of aircraft to the runway. This requires the addition of a spacing buffer between aircraft, reducing system throughput. FIM addresses this problem by providing pilots with speed guidance to achieve a precise interval behind another aircraft, even while flying optimized descents. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) human-in-the-loop experiment employed 24 commercial pilots to explore the use of FIM equipment to conduct spacing operations behind two aircraft arriving to parallel runways, while flying an OPD during high-density operations. This paper describes the impact of variations in pilot operations; in particular, configuring the aircraft, compliance with FIM operating procedures, and response to changes of the FIM speed. An example of the displayed FIM speeds used incorrectly by a pilot is also discussed. Finally, this paper examines the relationship between achieving airline operational goals for individual aircraft and the need for ATC to deliver aircraft to the runway with greater precision. The results show that aircraft can fly an OPD and conduct FIM operations to dependent parallel runways, enabling operational goals to be achieved efficiently while maintaining system throughput.
Shimada, Tsutomu; Kelly, Joan; LaMarr, William A; van Vlies, Naomi; Yasuda, Eriko; Mason, Robert W.; Mackenzie, William; Kubaski, Francyne; Giugliani, Roberto; Chinen, Yasutsugu; Yamaguchi, Seiji; Suzuki, Yasuyuki; Orii, Kenji E.; Fukao, Toshiyuki; Orii, Tadao; Tomatsu, Shunji
2014-01-01
Mucopolysaccharidoses (MPS) are caused by deficiency of one of a group of specific lysosomal enzymes, resulting in excessive accumulation of glycosaminoglycans (GAGs). We previously developed GAG assay methods using liquid chromatography tandem mass spectrometry (LC-MS/MS); however, it takes 4–5 min per sample for analysis. For the large numbers of samples in a screening program, a more rapid process is desirable. The automated high-throughput mass spectrometry (HT-MS/MS) system (RapidFire) integrates a solid phase extraction robot to concentrate and desalt samples prior to direct injection into the MS/MS without chromatographic separation, thereby allowing each sample to be processed within ten seconds (enabling screening of more than one million samples per year). The aim of this study was to develop a higher throughput system to assay heparan sulfate (HS) using HT-MS/MS, and to compare its reproducibility, sensitivity and specificity with conventional LC-MS/MS. HS levels were measured in blood (plasma and serum) from control subjects and patients with MPS II, III, or IV and in dried blood spots (DBS) from newborn controls and patients with MPS I, II, or III. Results obtained from HT-MS/MS showed 1) that there was a strong correlation of levels of disaccharides derived from HS in blood, between those calculated using conventional LC-MS/MS and HT-MS/MS, 2) that levels of HS in blood were significantly elevated in patients with MPS II and III, but not in MPS IVA, 3) that the level of HS in patients with a severe form of MPS II was higher than that in an attenuated form, 4) that reduction of blood HS level was observed in MPS II patients treated with enzyme replacement therapy or hematopoietic stem cell transplantation, and 5) that levels of HS in newborn DBS were elevated in patients with MPS I, II or III, compared to control newborns.
In conclusion, HT-MS/MS provides much higher throughput than LC-MS/MS-based methods with similar sensitivity and specificity in an HS assay, indicating that HT-MS/MS may be feasible for diagnosis, monitoring, and newborn screening of MPS. PMID:25092413
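The ten-second cycle time translates directly into the quoted annual capacity; a quick back-of-the-envelope check (the uptime fraction is an assumed figure, not from the abstract):

```python
# Check: ~10 s per sample supports screening > 1 million samples per year.
SECONDS_PER_SAMPLE = 10

continuous = 365 * 24 * 3600 // SECONDS_PER_SAMPLE  # ideal 24/7 operation
practical = int(continuous * 0.35)                   # assumed ~35% uptime

# Even at modest instrument uptime, capacity exceeds one million per year.
print(continuous, practical)
```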
A Formal Messaging Notation for Alaskan Aviation Data
NASA Technical Reports Server (NTRS)
Rios, Joseph L.
2015-01-01
Data exchange is an increasingly important aspect of the National Airspace System. While many data communication channels have become more capable of sending and receiving data at higher throughput rates, there is still a need to use communication channels with limited throughput efficiently. The limitation can be based on technological issues, financial considerations, or both. This paper provides a complete description of several important aviation weather data types in Abstract Syntax Notation format. By doing so, data providers can take advantage of Abstract Syntax Notation's ability to encode data in a highly compressed format. When data such as pilot weather reports, surface weather observations, and various weather predictions are compressed in such a manner, it allows for the efficient use of throughput-limited communication channels. This paper provides details on the Abstract Syntax Notation One (ASN.1) implementation for Alaskan aviation data and demonstrates its use on real-world aviation weather data samples; Alaska has sparse terrestrial data infrastructure, and data are often sent via relatively costly satellite channels.
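Much of ASN.1's compactness comes from schema constraints known to both sender and receiver. A sketch in the spirit of unaligned PER's constrained-integer encoding follows; the temperature range and the text-encoding comparison are illustrative assumptions, not the paper's actual schema:

```python
import math

# An integer known from the schema to lie in [lo, hi] needs only
# ceil(log2(hi - lo + 1)) bits under a PER-style constrained encoding.
def constrained_int_bits(lo, hi):
    return math.ceil(math.log2(hi - lo + 1))

# Hypothetical surface temperature field constrained to [-60, 50] degrees:
bits = constrained_int_bits(-60, 50)  # 111 possible values -> 7 bits

# The same value sent as quoted text (e.g. in JSON) costs far more:
text_bits = len(b'"-23"') * 8

print(bits, text_bits)  # 7 bits vs 40 bits for one field
```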
Large-Scale Biomonitoring of Remote and Threatened Ecosystems via High-Throughput Sequencing
Gibson, Joel F.; Shokralla, Shadi; Curry, Colin; Baird, Donald J.; Monk, Wendy A.; King, Ian; Hajibabaei, Mehrdad
2015-01-01
Biodiversity metrics are critical for assessment and monitoring of ecosystems threatened by anthropogenic stressors. Existing sorting and identification methods are too expensive and labour-intensive to be scaled up to meet management needs. Alternately, a high-throughput DNA sequencing approach could be used to determine biodiversity metrics from bulk environmental samples collected as part of a large-scale biomonitoring program. Here we show that both morphological and DNA sequence-based analyses are suitable for recovery of individual taxonomic richness, estimation of proportional abundance, and calculation of biodiversity metrics using a set of 24 benthic samples collected in the Peace-Athabasca Delta region of Canada. The high-throughput sequencing approach was able to recover all metrics with a higher degree of taxonomic resolution than morphological analysis. The reduced cost and increased capacity of DNA sequence-based approaches will finally allow environmental monitoring programs to operate at the geographical and temporal scale required by industrial and regulatory end-users. PMID:26488407
Problems in processing Rheinische Braunkohle (soft coal) (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Hartmann, G.B.
At Wesseling, difficulties were encountered with the hydrogenation of Rhine brown coal. The hydrogenation reaction was proceeding too rapidly at 600 atm pressure under relatively low temperature and throughput conditions. This caused a build-up of "caviar" deposits containing ash and asphalts. This flocculation of asphalt seemed to arise because the rapid reaction produced a liquid medium unable to hold the heavy asphalt particles in suspension. A stronger paraffinic character of the oil was also a result. To obtain practical, problem-free yields, throughput had to be increased (from 0.4 kg/liter/hr to more than 0.5), and temperature had to be increased (from 24.0 MV to 24.8 MV). Further, a considerable increase in sludge recycling was recommended. The Wesseling plant was unable to increase the temperature and throughput. However, more sludge was recycled, producing a paste better able to hold higher-molecular-weight particles in suspension. If this were not to solve the "caviar" deposit problems, further recommendations were suggested, including the addition of more heavy oil.
ERIC Educational Resources Information Center
Lindberg, Matti
2014-01-01
This study illustrates the differences between Finnish and British graduates in the higher education-to-work transition and related market mechanisms in the year 2000. Specifically, the differences between the Finnish and British students' academic careers and ability to find employment after graduation were evaluated in relation to the Finnish HE…
Reliability and throughput issues for optical wireless and RF wireless systems
NASA Astrophysics Data System (ADS)
Yu, Meng
The fast development of wireless communication technologies has two main trends. On one hand, in point-to-point communications, the demand for higher throughput has called for the emergence of wireless broadband techniques, including optical wireless (OW). On the other hand, wireless networks are becoming pervasive. New applications of wireless networks call for more flexible system infrastructures beyond the point-to-point prototype to achieve better performance. This dissertation investigates two topics on the reliability and throughput issues of new wireless technologies. The first topic is the capacity of, and practical forward error control strategies for, OW systems. We investigate the performance of OW systems under weak atmospheric turbulence. We first investigate the capacity and power allocation for multi-laser and multi-detector systems. Our results show that uniform power allocation is a practically optimal solution for parallel channels. We also investigate the performance of Reed-Solomon (RS) codes and turbo codes for OW systems and present RS codes as good candidates for OW systems. The second topic targets user cooperation in wireless networks. We evaluate the relative merits of amplify-forward (AF) and decode-forward (DF) in practical scenarios. Both analysis and simulations show that the overall system performance is critically affected by the quality of the inter-user channel. Following this result, we investigate two schemes to improve the overall system performance. We first investigate the impact of the relay location on the overall system performance and determine the optimal location of the relay. A best-selective single-relay system is proposed and evaluated. Through the analysis of the average capacity and outage, we show that a small candidate pool of 3 to 5 relays suffices to reap most of the "geometric" gain available to a selective system.
Second, we propose a new user cooperation scheme to provide an effectively better inter-user channel. Most user cooperation protocols work in a time-sharing manner, where a node forwards others' messages and sends its own message in different sections within a provisioned time slot. In the proposed scheme, the two messages are encoded together into a single codeword using network coding and transmitted in the given time slot. We also propose a general multiple-user cooperation framework. Under this framework, we show that network coding can achieve better diversity and provide effectively better inter-user channels than time sharing. The last part of the dissertation focuses on multi-relay packet transmission. We propose an adaptive and distributed coding scheme that allows the relay nodes to adaptively cooperate and forward messages. The adaptive scheme shows performance gain over fixed schemes. We then shift our viewpoint and represent the network as consisting partly of encoders and partly of decoders.
ac electroosmotic pumping induced by noncontact external electrodes
Wang, Shau-Chun; Chen, Hsiao-Ping; Chang, Hsueh-Chia
2007-01-01
Electroosmotic (EO) pumps based on dc electroosmosis are plagued by bubble generation and other electrochemical reactions at the electrodes at voltages beyond 1 V for electrolytes. These disadvantages limit their throughput and offset their portability advantage over mechanical syringe or pneumatic pumps. ac electroosmotic pumps at high frequency (>100 kHz) circumvent the bubble problem by inducing polarization and slip velocity on embedded electrodes, but they require complex electrode designs to produce a net flow. We report a new high-throughput ac EO pump design based on induced polarization on the entire channel surface instead of just on the electrodes. Like dc EO pumps, our pump electrodes are outside of the load section and form a cm-long pump unit consisting of three circular reservoirs (3 mm in diameter) connected by a 1×1 mm channel. The field-induced polarization can produce an effective zeta potential exceeding 1 V and an ac slip velocity estimated at 1 mm/sec or higher, both one order of magnitude higher than earlier dc and ac pumps, giving rise to a maximum throughput of 1 μl/sec. Polarization over the entire channel surface, quadratic scaling with respect to the field, and high voltage at high frequency without electrode bubble generation are the reasons why the current pump is superior to earlier dc and ac EO pumps. PMID:19693362
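The quoted slip velocity, channel cross-section, and maximum throughput are mutually consistent, as a one-line volumetric check shows:

```python
# Volumetric throughput Q = slip velocity x channel cross-section.
v = 1.0            # slip velocity, mm/s (from the abstract)
area = 1.0 * 1.0   # 1 mm x 1 mm channel cross-section, mm^2

q_mm3_per_s = v * area    # 1 mm^3/s
q_ul_per_s = q_mm3_per_s  # 1 mm^3 is exactly 1 microliter

print(q_ul_per_s)  # matches the stated maximum of ~1 microliter/sec
```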
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Yusuke; Hiraki, Masahiko; Sasajima, Kumiko
2010-06-23
Recent advances in high-throughput techniques for macromolecular crystallography have highlighted the importance of structure-based drug design (SBDD), and the demand for synchrotron use by pharmaceutical researchers has increased. Thus, in collaboration with Astellas Pharma Inc., we have constructed a new high-throughput macromolecular crystallography beamline, AR-NE3A, which is dedicated to SBDD. At AR-NE3A, a photon flux up to three times higher than those at the existing high-throughput beamlines at the Photon Factory, AR-NW12A and BL-5A, can be realized at the same sample positions. Installed in the experimental hutch are a high-precision diffractometer, a fast-readout high-gain CCD detector, and a sample-exchange robot capable of handling more than two hundred cryo-cooled samples stored in a Dewar. To facilitate the high-throughput data collection required for pharmaceutical research, fully automated data collection and processing systems have been developed. Thus, sample exchange, centering, data collection, and data processing are automatically carried out based on the user's pre-defined schedule. Although Astellas Pharma Inc. has priority access to AR-NE3A, the remaining beam time is allocated to general academic and other industrial users.
Diffraction efficiency of radially-profiled off-plane reflection gratings
NASA Astrophysics Data System (ADS)
Miles, Drew M.; Tutt, James H.; DeRoo, Casey T.; Marlowe, Hannah; Peterson, Thomas J.; McEntaffer, Randall L.; Menz, Benedikt; Burwitz, Vadim; Hartner, Gisela; Laubis, Christian; Scholze, Frank
2015-09-01
Future X-ray missions will require gratings with high throughput and high spectral resolution. Blazed off-plane reflection gratings are capable of meeting these demands. A blazed grating profile optimizes grating efficiency, providing higher throughput to one side of zero-order on the arc of diffraction. This paper presents efficiency measurements made in the 0.3 - 1.5 keV energy band at the Physikalisch-Technische Bundesanstalt (PTB) BESSY II facility for three holographically-ruled gratings, two of which are blazed. Each blazed grating was tested in both the Littrow configuration and anti-Littrow configuration in order to test the alignment sensitivity of these gratings with regard to throughput. This paper outlines the procedure of the grating experiment performed at BESSY II and discusses the resulting efficiency measurements across various energies. Experimental results are generally consistent with theory and demonstrate that the blaze does increase throughput to one side of zero-order. However, the total efficiency of the non-blazed, sinusoidal grating is greater than that of the blazed gratings, which suggests that the method of manufacturing these blazed profiles fails to produce facets with the desired level of precision. Finally, evidence of a successful blaze implementation from first diffraction results of prototype blazed gratings produced via a new fabrication technique at the University of Iowa is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tojo, H.; Hatae, T.; Hamano, T.
2013-09-15
Collection optics for core measurements in a JT-60SA Thomson scattering system were designed. The collection optics will be installed in a limited space and have a wide field of view and wide wavelength range. Two types of the optics are therefore suggested: refraction and reflection types. The reflection system, with a large primary mirror, avoids large chromatic aberrations. Because the size limit of the primary mirror and vignetting due to the secondary mirror affect the total collection throughput, conditions that provide the high throughput are found through an optimization. A refraction system with four lenses forming an Ernostar system is also employed. The use of high-refractive-index glass materials enhances the freedom of the lens curvatures, resulting in suppression of the spherical and coma aberration. Moreover, sufficient throughput can be achieved, even with smaller lenses than that of a previous design given in [H. Tojo, T. Hatae, T. Sakuma, T. Hamano, K. Itami, Y. Aida, S. Suitoh, and D. Fujie, Rev. Sci. Instrum. 81, 10D539 (2010)]. The optical resolutions of the reflection and refraction systems are both sufficient for understanding the spatial structures in plasma. In particular, the spot sizes at the image of the optics are evaluated as ∼0.3 mm and ∼0.4 mm, respectively. The throughput for the two systems, including the pupil size and transmissivity, are also compared. The results show that good measurement accuracy (<10%) even at high electron temperatures (<30 keV) can be expected in the refraction system.
Sevenler, Derin; Daaboul, George G; Ekiz Kanik, Fulya; Ünlü, Neşe Lortlar; Ünlü, M Selim
2018-05-21
DNA and protein microarrays are a high-throughput technology that allow the simultaneous quantification of tens of thousands of different biomolecular species. The mediocre sensitivity and limited dynamic range of traditional fluorescence microarrays compared to other detection techniques have been the technology's Achilles' heel and prevented their adoption for many biomedical and clinical diagnostic applications. Previous work to enhance the sensitivity of microarray readout to the single-molecule ("digital") regime has either required signal-amplifying chemistry or sacrificed throughput, nixing the platform's primary advantages. Here, we report the development of a digital microarray which extends both the sensitivity and dynamic range of microarrays by about 3 orders of magnitude. This technique uses functionalized gold nanorods as single-molecule labels and an interferometric scanner which can rapidly enumerate individual nanorods by imaging them with a 10× objective lens. This approach does not require any chemical signal enhancement such as silver deposition and scans arrays with a throughput similar to commercial fluorescence scanners. By combining single-nanoparticle enumeration and ensemble measurements of spots when the particles are very dense, this system achieves a dynamic range of about 6 orders of magnitude directly from a single scan. As a proof-of-concept digital protein microarray assay, we demonstrated detection of hepatitis B virus surface antigen in buffer with a limit of detection of 3.2 pg/mL. More broadly, the technique's simplicity and high-throughput nature make digital microarrays a flexible platform technology with a wide range of potential applications in biomedical research and clinical diagnostics.
Tojo, H; Hatae, T; Hamano, T; Sakuma, T; Itami, K
2013-09-01
Collection optics for core measurements in a JT-60SA Thomson scattering system were designed. The collection optics will be installed in a limited space and have a wide field of view and wide wavelength range. Two types of the optics are therefore suggested: refraction and reflection types. The reflection system, with a large primary mirror, avoids large chromatic aberrations. Because the size limit of the primary mirror and vignetting due to the secondary mirror affect the total collection throughput, conditions that provide the high throughput are found through an optimization. A refraction system with four lenses forming an Ernostar system is also employed. The use of high-refractive-index glass materials enhances the freedom of the lens curvatures, resulting in suppression of the spherical and coma aberration. Moreover, sufficient throughput can be achieved, even with smaller lenses than that of a previous design given in [H. Tojo, T. Hatae, T. Sakuma, T. Hamano, K. Itami, Y. Aida, S. Suitoh, and D. Fujie, Rev. Sci. Instrum. 81, 10D539 (2010)]. The optical resolutions of the reflection and refraction systems are both sufficient for understanding the spatial structures in plasma. In particular, the spot sizes at the image of the optics are evaluated as ~0.3 mm and ~0.4 mm, respectively. The throughput for the two systems, including the pupil size and transmissivity, are also compared. The results show that good measurement accuracy (<10%) even at high electron temperatures (<30 keV) can be expected in the refraction system.
NASA Technical Reports Server (NTRS)
Engelhaupt, Darell; Ramsey, Brian
2003-01-01
NASA and the University of Alabama in Huntsville have developed ecologically friendly, versatile nickel and nickel-cobalt-phosphorus electroplating processes. Solutions show excellent performance with high efficiency for vastly extended throughput. Properties include clean, low-temperature operation (40-60 °C), high Faradaic efficiency, low stress and high hardness. A variety of alloy and plating speed options are easily achieved from the same chemistry using soluble anodes for metal replacement, with only 25% of the phosphorus additions required for electroless nickel. Thick deposits are easily achieved unattended, for electroforming freestanding shapes without buildup of excess orthophosphate or stripping of equipment.
NASA Technical Reports Server (NTRS)
Engelhaupt, Darell; Ramsey, Brian
2004-01-01
NASA and the University of Alabama in Huntsville have developed ecologically friendly, versatile nickel and nickel-cobalt-phosphorus electroplating processes. Solutions show excellent performance with high efficiency for vastly extended throughput. Properties include clean, low-temperature operation (40-60 °C), high Faradaic efficiency, low stress and high hardness. A variety of alloy and plating speed options are easily achieved from the same chemistry using soluble anodes for metal replacement, with only 25% of the phosphorus additions required for electroless nickel. Thick deposits are easily achieved unattended, for electroforming freestanding shapes without buildup of excess orthophosphate or stripping of equipment.
NASA Astrophysics Data System (ADS)
Murakami, Sunao; Ohtaki, Kenichiro; Matsumoto, Sohei; Inoue, Tomoya
2012-06-01
High-throughput and stable treatments are required to achieve the practical production of chemicals with microreactors. However, flow maldistribution among the parallel microchannels has been a critical problem in achieving the productive use of multichannel microreactors under multiphase flow conditions. In this study, we designed and fabricated a new glass four-channel catalytic packed-bed microreactor for the scale-up of gas-liquid multiphase chemical reactions. We embedded microstructures generating high pressure losses at the upstream side of each packed bed, and experimentally confirmed the efficacy of the microstructures in decreasing the maldistribution of the gas-liquid flow to the parallel microchannels.
Jung, Seung-Ryoung; Han, Rui; Sun, Wei; Jiang, Yifei; Fujimoto, Bryant S; Yu, Jiangbo; Kuo, Chun-Ting; Rong, Yu; Zhou, Xing-Hua; Chiu, Daniel T
2018-05-15
We describe here a flow platform for quantifying the number of biomolecules on individual fluorescent nanoparticles. The platform combines line-confocal fluorescence detection with near nanoscale channels (1-2 μm in width and height) to achieve high single-molecule detection sensitivity and throughput. The number of biomolecules present on each nanoparticle was determined by deconvolving the fluorescence intensity distribution of single-nanoparticle-biomolecule complexes with the intensity distribution of single biomolecules. We demonstrate this approach by quantifying the number of streptavidins on individual semiconducting polymer dots (Pdots); streptavidin was rendered fluorescent using biotin-Alexa647. This flow platform has high-throughput (hundreds to thousands of nanoparticles detected per second) and requires minute amounts of sample (∼5 μL at a dilute concentration of 10 pM). This measurement method is an additional tool for characterizing synthetic or biological nanoparticles.
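The stated sample requirement translates into an approximate particle count via straightforward Avogadro arithmetic (only the concentration and volume from the abstract are used):

```python
# How many nanoparticles are in ~5 uL of a 10 pM suspension?
AVOGADRO = 6.022e23            # particles per mole

conc_mol_per_L = 10e-12        # 10 pM
volume_L = 5e-6                # 5 microliters

n_particles = conc_mol_per_L * volume_L * AVOGADRO
print(n_particles)  # ~3e7 particles available in the loaded sample
```

At the quoted detection rate of hundreds to thousands of particles per second, only a small fraction of these need to transit the detection volume to build good statistics.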
Firmware Development Improves System Efficiency
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Most manufacturing processes require physical pointwise positioning of the components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate progression to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains the throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this pointwise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased throughput of the system, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
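The puncturing idea underlying rate-compatible convolutional codes can be sketched as follows; the period-3 pattern below is a standard textbook example of raising a rate-1/2 code to rate 3/4, not necessarily one of the codes constructed in the paper:

```python
# Puncture the two output streams of a rate-1/2 convolutional encoder.
def puncture(v1, v2, p1, p2):
    """Interleave two coded streams, dropping bits where the pattern is 0."""
    out = []
    for i, (a, b) in enumerate(zip(v1, v2)):
        if p1[i % len(p1)]:
            out.append(a)
        if p2[i % len(p2)]:
            out.append(b)
    return out

# Period-3 pattern keeping 4 of every 6 coded bits: rate 1/2 -> rate 3/4.
v1 = [1, 0, 1, 1, 0, 1]  # first encoder output stream (example bits)
v2 = [0, 1, 1, 0, 0, 1]  # second encoder output stream
tx = puncture(v1, v2, p1=[1, 1, 0], p2=[1, 0, 1])
print(len(tx))  # 8 coded bits sent for 6 input bits -> rate 6/8 = 3/4
```

Rate compatibility then amounts to nesting the puncturing patterns, so the bits retransmitted for a lower-rate code always include those already sent at the higher rate; this is what lets a type-II hybrid ARQ receiver combine successive transmissions.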
The challenges of sequencing by synthesis.
Fuller, Carl W; Middendorf, Lyle R; Benner, Steven A; Church, George M; Harris, Timothy; Huang, Xiaohua; Jovanovich, Stevan B; Nelson, John R; Schloss, Jeffery A; Schwartz, David C; Vezenov, Dmitri V
2009-11-01
DNA sequencing-by-synthesis (SBS) technology, using a polymerase or ligase enzyme as its core biochemistry, has already been incorporated in several second-generation DNA sequencing systems with significant performance. Notwithstanding the substantial success of these SBS platforms, challenges continue to limit the ability to reduce the cost of sequencing a human genome to $100,000 or less. Achieving dramatically reduced cost with enhanced throughput and quality will require the seamless integration of scientific and technological effort across disciplines within biochemistry, chemistry, physics and engineering. The challenges include sample preparation, surface chemistry, fluorescent labels, optimizing the enzyme-substrate system, optics, instrumentation, understanding tradeoffs of throughput versus accuracy, and read-length/phasing limitations. By framing these challenges in a manner accessible to a broad community of scientists and engineers, we hope to solicit input from the broader research community on means of accelerating the advancement of genome sequencing technology.
Meng Zhang; Peh, Jessie; Hergenrother, Paul J; Cunningham, Brian T
2014-01-01
High throughput screening of protein-small molecule binding interactions using label-free optical biosensors is challenging, as the detected signals are often similar in magnitude to experimental noise. Here, we describe a novel self-referencing external cavity laser (ECL) biosensor approach that achieves high resolution and high sensitivity, while eliminating thermal noise with sub-picometer wavelength accuracy. Using the self-referencing ECL biosensor, we demonstrate detection of binding between small molecules and a variety of immobilized protein targets with binding affinities or inhibition constants in the sub-nanomolar to low micromolar range. The demonstrated ability to perform detection in the presence of several interfering compounds opens the potential for increasing the throughput of the approach. As an example application, we performed a "needle-in-the-haystack" screen for inhibitors against carbonic anhydrase isozyme II (CA II), in which known inhibitors are clearly differentiated from inactive molecules within a compound library.
Optofluidic time-stretch quantitative phase microscopy.
Guo, Baoshan; Lei, Cheng; Wu, Yi; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Lee, Sangwook; Isozaki, Akihiro; Li, Ming; Jiang, Yiyue; Yasumoto, Atsushi; Di Carlo, Dino; Tanaka, Yo; Yatomi, Yutaka; Ozeki, Yasuyuki; Goda, Keisuke
2018-03-01
Innovations in optical microscopy have opened new windows onto scientific research, industrial quality control, and medical practice over the last few decades. One such innovation is optofluidic time-stretch quantitative phase microscopy - an emerging method for high-throughput quantitative phase imaging that builds on the interference between temporally stretched signal and reference pulses by using dispersive properties of light in both spatial and temporal domains in an interferometric configuration on a microfluidic platform. It achieves the continuous acquisition of both intensity and phase images with a high throughput of more than 10,000 particles or cells per second by overcoming speed limitations that exist in conventional quantitative phase imaging methods. Applications enabled by such capabilities are versatile and include characterization of cancer cells and microalgal cultures. In this paper, we review the principles and applications of optofluidic time-stretch quantitative phase microscopy and discuss its future perspective. Copyright © 2017 Elsevier Inc. All rights reserved.
Droplet barcoding for single cell transcriptomics applied to embryonic stem cells
Klein, Allon M; Mazutis, Linas; Akartuna, Ilke; Tallapragada, Naren; Veres, Adrian; Li, Victor; Peshkin, Leonid; Weitz, David A; Kirschner, Marc W
2015-01-01
Summary It has long been the dream of biologists to map gene expression at the single cell level. With such data one might track heterogeneous cell sub-populations, and infer regulatory relationships between genes and pathways. Recently, RNA sequencing has achieved single cell resolution. What is limiting is an effective way to routinely isolate and process large numbers of individual cells for quantitative in-depth sequencing. We have developed a high-throughput droplet-microfluidic approach for barcoding the RNA from thousands of individual cells for subsequent analysis by next-generation sequencing. The method shows a surprisingly low noise profile and is readily adaptable to other sequencing-based assays. We analyzed mouse embryonic stem cells, revealing in detail the population structure and the heterogeneous onset of differentiation after LIF withdrawal. The reproducibility of these high-throughput single cell data allowed us to deconstruct cell populations and infer gene expression relationships. PMID:26000487
Library Design-Facilitated High-Throughput Sequencing of Synthetic Peptide Libraries.
Vinogradov, Alexander A; Gates, Zachary P; Zhang, Chi; Quartararo, Anthony J; Halloran, Kathryn H; Pentelute, Bradley L
2017-11-13
A methodology to achieve high-throughput de novo sequencing of synthetic peptide mixtures is reported. The approach leverages shotgun nanoliquid chromatography coupled with tandem mass spectrometry-based de novo sequencing of library mixtures (up to 2000 peptides), together with automated data analysis protocols that filter out incorrect assignments, noise, and synthetic side-products. To increase confidence in the sequencing results, mass spectrometry-friendly library designs were developed that enabled unambiguous decoding of up to 600 peptide sequences per hour while maintaining sequence identification rates above 85% in most cases. The reliability of the reported decoding strategy was further confirmed by matching fragmentation spectra for select authentic peptides identified from library sequencing samples. The methods reported here are directly applicable to screening techniques that yield mixtures of active compounds, including particle sorting of one-bead one-compound libraries and affinity enrichment of synthetic library mixtures performed in solution.
High throughput dual-wavelength temperature distribution imaging via compressive imaging
NASA Astrophysics Data System (ADS)
Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie
2018-03-01
Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera that requires no beam splitter (BS). A digital micro-mirror device (DMD) is used both to display the binary masks and to split the incident radiation, eliminating the need for a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system offers perfect spatial registration between the two images, removing the need for pixel registration and fine adjustment. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed and improve the detection efficiency for the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampled recovery. A proof-of-principle experiment demonstrates the feasibility of this structure.
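The differential-detection geometry lends itself to a compact numerical sketch: since the DMD sends each pixel to one of the two bucket detectors, the difference of the two bucket signals behaves like a measurement with a ±1 mask, and an under-sampled sparse scene can be recovered greedily. The sketch below is illustrative only (orthogonal matching pursuit on a toy 16×16 scene), not the reconstruction algorithm used by the authors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 16x16 scene with a few warm spots (sparse in the pixel basis).
n = 16 * 16
x_true = np.zeros(n)
x_true[[40, 41, 120, 200]] = [1.0, 0.8, 0.6, 0.9]

# Each DMD pattern sends part of the scene to one bucket detector and the
# rest to the other; the difference of the two bucket signals acts as a
# +/-1 mask, so no beam splitter is needed.
m = 96                                     # under-sampled: m < n
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x_true                             # differential bucket measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k=4)
print(np.allclose(x_hat, x_true))
```

With noiseless measurements and a correctly identified support, the least-squares step recovers the scene exactly despite using far fewer measurements than pixels.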
A Microfluidic Approach for Studying Piezo Channels.
Maneshi, M M; Gottlieb, P A; Hua, S Z
2017-01-01
Microfluidics is an interdisciplinary field intersecting many areas of engineering. A major goal in microfluidics is to combine physics, chemistry, biology, and biotechnology to design devices that use low volumes of fluid for high-throughput screening. Microfluidic approaches allow the study of cell growth and differentiation under a variety of conditions, including control of the fluid flow that generates shear stress. Recently, Piezo1 channels were shown to respond to fluid shear stress and to be crucial for vascular development. This channel is therefore ideal for studying fluid shear stress applied to cells in microfluidic devices. We have developed an approach that allows us to analyze the role of Piezo channels in any given cell and serves as a high-throughput screen for drug discovery. We show that this approach can provide detailed information about inhibitors of Piezo channels. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges for a future large-aperture, segmented Ultraviolet Optical Infrared (UVOIR) telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture comprising a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technology required, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.
Experiences with the AEROnet/PSCN ATM Prototype
NASA Technical Reports Server (NTRS)
Kurak, Richard S.; Lisotta, Anthony J.; McCabe, James D.; Nothaft, Alfred E.; Russell, Kelly R.; Lasinski, T. A. (Technical Monitor)
1995-01-01
This paper discusses the experience gained by the AEROnet/PSCN networking team in deploying a prototype Asynchronous Transfer Mode (ATM) based network as part of the wide-area network for the Numerical Aerodynamic Simulation (NAS) Program at NASA Ames Research Center. The objectives of this prototype were to test concepts for using ATM over wide-area Internet Protocol (IP) networks and to measure end-to-end system performance. The testbed showed that end-to-end ATM over a DS3 reaches approximately 80% of the throughput achieved from a FDDI to DS3 network. The 20% reduction in throughput can be attributed to the overhead associated with running ATM. As a result, we conclude that if the loss in capacity due to ATM overhead is balanced by the reduction in cost of ATM services, as compared to dedicated circuits, then ATM can be a viable alternative.
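The ~20% gap is roughly what standard ATM overhead accounting predicts: each 53-byte cell carries a 5-byte header, AAL5 adds an 8-byte trailer plus padding to a cell boundary, and PLCP framing on a DS3 leaves 40.704 Mbit/s of the 44.736 Mbit/s line rate for cells. A back-of-the-envelope check using these standard figures (not measurements from this testbed):

```python
import math

CELL, PAYLOAD, AAL5_TRAILER = 53, 48, 8              # bytes
DS3_LINE, DS3_PLCP_CELL_RATE = 44.736e6, 40.704e6    # bit/s

def aal5_efficiency(pdu_bytes):
    """Fraction of cell-stream bytes carrying user data for one AAL5 PDU."""
    cells = math.ceil((pdu_bytes + AAL5_TRAILER) / PAYLOAD)
    return pdu_bytes / (cells * CELL)

eff = aal5_efficiency(9180)                          # Classical-IP-over-ATM MTU
print(round(eff, 3))                                 # 0.902
print(round(DS3_PLCP_CELL_RATE / DS3_LINE * eff, 3))  # 0.821
```

Cell-header and AAL5 overhead alone cost about 10%; combined with PLCP framing the usable fraction of the DS3 line rate drops to roughly 82%, consistent with the ~80% observed end to end.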
A mask manufacturer's perspective on maskless lithography
NASA Astrophysics Data System (ADS)
Buck, Peter; Biechler, Charles; Kalk, Franklin
2005-11-01
Maskless Lithography (ML2) is again being considered for use in mainstream CMOS IC manufacturing. Sessions at technical conferences are being devoted to ML2. A multitude of new companies have been formed in the last several years to apply new concepts to breaking the throughput barrier that has in the past prevented ML2 from achieving the cost and cycle time performance necessary to become economically viable, except in rare cases. Has Maskless Lithography's (we used to call it "Direct Write Lithography") time really come? If so, what is the expected impact on the mask manufacturer, and does it matter? The lithography tools used today in mask manufacturing are similar in concept to ML2 except for scale, both in throughput and feature size. These mask tools produce highly accurate lithographic images directly from electronic pattern files, perform multi-layer overlay, and mix-n-match across multiple tools, tool types and sites. Mask manufacturers are already accustomed to the ultimate low volume - one substrate per design layer. In order to achieve the economically required throughput, proposed ML2 systems eliminate or greatly reduce some of the functions that are the source of the mask writer's accuracy. Can these ML2 systems meet the demanding lithographic requirements without these functions? ML2 may eliminate the reticle, but many of the processes and procedures performed today by the mask manufacturer are still required. Examples include the increasingly complex mask data preparation step and the verification performed to ensure that the pattern on the reticle accurately represents the design intent. The error sources that are fixed on a reticle are variable with time on an ML2 system. It has been proposed that if ML2 is successful it will become uneconomical to be in the mask business - that ML2, by taking the high-profit masks, will take all profitability out of mask manufacturing and thereby endanger the entire semiconductor industry. 
Others suggest that a successful ML2 system solves the mask cost issue and thereby reduces the need and attractiveness of ML2. Are these concerns valid? In this paper we will present a perspective on maskless lithography from the considerable "direct write" experience of a mask manufacturer. We will examine the various business models proposed for ML2 insertion as well as the key technical challenges to achieving simultaneously the throughput and the lithographic quality necessary to become economically viable. We will consider the question of the economic viability of the mask industry in a post-ML2 world and will propose possible models where the mask industry can meaningfully participate.
The 2.5 bit/detected photon demonstration program: Phase 2 and 3 experimental results
NASA Technical Reports Server (NTRS)
Katz, J.
1982-01-01
The experimental program for laboratory demonstration of an energy-efficient optical communication channel operating at a rate of 2.5 bits/detected photon is described. Results of the uncoded PPM channel performance are presented. They indicate that the target throughput efficiency can be achieved not only with a Reed-Solomon code, as originally predicted, but with a less complex code as well.
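For uncoded M-ary PPM, each pulse conveys log2(M) bits, so the photon efficiency is log2(M) divided by the mean number of detected photons per pulse. The numbers below are illustrative combinations that reach 2.5 bits/photon, not the parameters of the demonstration:

```python
from math import log2

def bits_per_detected_photon(M, photons_per_pulse):
    """Photon efficiency of uncoded M-ary pulse-position modulation."""
    return log2(M) / photons_per_pulse

# Two illustrative operating points that both reach 2.5 bits/photon:
print(bits_per_detected_photon(256, 3.2))    # 8 bits/pulse, 3.2 photons
print(bits_per_detected_photon(1024, 4.0))   # 10 bits/pulse, 4.0 photons
```

The trade is visible directly: larger alphabets buy more bits per pulse, so the same efficiency can be held at a higher detected-photon count.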
Optimization of Supercomputer Use on EADS II System
NASA Technical Reports Server (NTRS)
Ahmed, Ardsher
1998-01-01
The main objective of this research was to optimize supercomputer use to achieve better throughput and utilization of supercomputers, and to help facilitate the movement of non-supercomputing codes (codes inappropriate for a supercomputer) to mid-range systems for better use of Government resources at Marshall Space Flight Center (MSFC). This work involved a survey of the architectures available on EADS II and monitoring of customer (user) applications running on a CRAY T90 system.
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Automatic high throughput empty ISO container verification
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-04-01
Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm they are truly empty. A series of image processing algorithms is described that processes real-time data acquired during the actual inspection of each container and assigns each to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.
2015-10-01
As shown in Fig. 1a, the prepolymer mixture was sandwiched between the photo mask and a glass slide. Microdiscs were fabricated on the glass substrate through … polymerization of the prepolymer mixture and the acrylated silane under UV exposure. To achieve more stable microdiscs for peptide synthesis, the … composition of the prepolymer mixture was changed to PEG (polyethylene glycol)-diacrylate, crosslinker, photo initiator, 2-aminoethylmethacrylate, water …
Automated segmentation of the actively stained mouse brain using multi-spectral MR microscopy.
Sharief, Anjum A; Badea, Alexandra; Dale, Anders M; Johnson, G Allan
2008-01-01
Magnetic resonance microscopy (MRM) has created new approaches for high-throughput morphological phenotyping of mouse models of diseases. Transgenic and knockout mice serve as a test bed for validating hypotheses that link genotype to the phenotype of diseases, as well as developing and tracking treatments. We describe here a Markov random fields based segmentation of the actively stained mouse brain, as a prerequisite for morphological phenotyping. Active staining achieves higher signal to noise ratio (SNR) thereby enabling higher resolution imaging per unit time than obtained in previous formalin-fixed mouse brain studies. The segmentation algorithm was trained on isotropic 43-µm T1- and T2-weighted MRM images. The mouse brain was segmented into 33 structures, including the hippocampus, amygdala, hypothalamus, thalamus, as well as fiber tracts and ventricles. Probabilistic information used in the segmentation consisted of (a) intensity distributions in the T1- and T2-weighted data, (b) location, and (c) contextual priors for incorporating spatial information. Validation using standard morphometric indices showed excellent consistency between automatically and manually segmented data. The algorithm has been tested on the widely used C57BL/6J strain, as well as on a selection of six recombinant inbred BXD strains, chosen especially for their largely variant hippocampus.
Incorporating High-Throughput Exposure Predictions with ...
We previously integrated dosimetry and exposure with high-throughput screening (HTS) to enhance the utility of ToxCast™ HTS data by translating in vitro bioactivity concentrations to the oral equivalent doses (OEDs) required to achieve these levels internally. These OEDs were compared against regulatory exposure estimates, providing an activity-to-exposure ratio (AER) useful for a risk-based ranking strategy. As ToxCast™ efforts expand (i.e., Phase II) beyond food-use pesticides towards a wider chemical domain that lacks exposure and toxicity information, prediction tools become increasingly important. In this study, in vitro hepatic clearance and plasma protein binding were measured to estimate OEDs for a subset of Phase II chemicals. OEDs were compared against high-throughput (HT) exposure predictions generated using probabilistic modeling and Bayesian approaches by the U.S. EPA ExpoCast™ program. This approach incorporated chemical-specific use and national production volume data with biomonitoring data to inform the exposure predictions. The HT exposure modeling approach provided predictions for all Phase II chemicals assessed in this study, whereas estimates from regulatory sources were available for only 7% of chemicals. Of the 163 chemicals assessed in this study, three possessed AERs <1 and 13 possessed AERs <100. Diverse bioactivities across a range of assays and concentrations were also noted across the wider chemical space.
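The ranking arithmetic reduces to two ratios: reverse dosimetry converts an in vitro bioactive concentration into an oral equivalent dose, and dividing that OED by the predicted exposure gives the AER. A minimal sketch with invented numbers; the study's dosimetry model, built on measured hepatic clearance and plasma protein binding, is more elaborate than this:

```python
def oral_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    """Dose (mg/kg/day) whose predicted steady-state plasma concentration
    equals the in vitro bioactive concentration (AC50)."""
    return ac50_uM / css_uM_per_mg_kg_day

def activity_exposure_ratio(oed_mg_kg_day, exposure_mg_kg_day):
    """AER < 1 flags predicted exposures reaching bioactive levels."""
    return oed_mg_kg_day / exposure_mg_kg_day

# Illustrative values only (not from the study):
oed = oral_equivalent_dose(ac50_uM=2.0, css_uM_per_mg_kg_day=0.5)  # 4.0
aer = activity_exposure_ratio(oed, exposure_mg_kg_day=0.004)
print(oed, aer)
```

A chemical with a large AER has a wide margin between predicted exposure and the dose at which in vitro bioactivity would be reached, which is what makes the ratio useful for risk-based ranking.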
High-Throughput Non-Contact Vitrification of Cell-Laden Droplets Based on Cell Printing
NASA Astrophysics Data System (ADS)
Shi, Meng; Ling, Kai; Yong, Kar Wey; Li, Yuhui; Feng, Shangsheng; Zhang, Xiaohui; Pingguan-Murphy, Belinda; Lu, Tian Jian; Xu, Feng
2015-12-01
Cryopreservation is the most promising way to store biological samples, e.g., single cells and cellular structures, for the long term. Among cryopreservation methods, vitrification is advantageous because it employs a high cooling rate to avoid the formation of harmful ice crystals in cells. Most existing vitrification methods bring cells into direct contact with liquid nitrogen to obtain high cooling rates, which, however, risks contamination and makes cell collection difficult. To address these limitations, we developed a non-contact vitrification device based on an ultra-thin freezing film that achieves a high cooling/warming rate while avoiding direct contact between cells and liquid nitrogen. A high-throughput cell printer was employed to rapidly dispense uniform cell-laden microdroplets into the device, where the microdroplets hang on one side of the film and are vitrified by pouring liquid nitrogen onto the other side, with heat removed via boiling heat transfer. Through theoretical and experimental studies of the vitrification process, we demonstrated that our device offers a cooling/warming rate high enough to vitrify NIH 3T3 cells and human adipose-derived stem cells (hASCs) with maintained cell viability and differentiation potential. This non-contact vitrification device provides a novel and effective way to cryopreserve cells at high throughput while avoiding the contamination and collection problems.
Lin, Sansan; Fischl, Anthony S; Bi, Xiahui; Parce, Wally
2003-03-01
Phospholipid molecules such as ceramide and phosphoinositides play crucial roles in signal transduction pathways. Lipid-modifying enzymes, including sphingomyelinase and phosphoinositide kinases, regulate the generation and degradation of these lipid-signaling molecules and are important therapeutic targets in drug discovery. We now report a sensitive and convenient method to separate these lipids using microfluidic chip-based technology. The method takes advantage of the high separation power of microchips, which separate lipids by micellar electrokinetic capillary chromatography (MEKC), and the high sensitivity of fluorescence detection. We further exploited the method to develop a homogeneous assay to monitor the activities of lipid-modifying enzymes. The assay format consists of two steps: an on-plate enzymatic reaction using fluorescently labeled substrates, followed by an on-chip MEKC separation of the reaction products from the substrates. The utility of the assay format for high-throughput screening (HTS) is demonstrated using phospholipase A(2) on the Caliper 250 HTS system: a throughput of 80 min per 384-well plate can be achieved with an unattended running time of 5.4 h. This enabling technology for assaying lipid-modifying enzymes is ideal for HTS because it avoids radioactive substrates and complicated separation/washing steps and detects both substrate and product simultaneously.
Oh, Kwang Seok; Woo, Seong Ihl
2011-01-01
A chemiluminescence-based analyzer of NOx gas species has been applied to high-throughput screening of a library of catalytic materials. The applicability of the commercial NOx analyzer as a rapid screening tool was evaluated using selective catalytic reduction of NO gas. A library of 60 binary alloys composed of Pt and Co, Zr, La, Ce, Fe or W on an Al2O3 substrate was tested for NOx removal efficiency using a home-built 64-channel parallel and sequential tubular reactor. The NOx concentrations measured by the NOx analyzer agreed well with the results obtained using micro gas chromatography for a reference catalyst consisting of 1 wt% Pt on γ-Al2O3. Most alloys showed high efficiency at 275 °C, which is typical of Pt-based catalysts for selective catalytic reduction of NO. The screening with the NOx analyzer allowed the selection of Pt–Ce(X) (X=1–3) and Pt–Fe(2) as the optimal catalysts for NOx removal: 73% NOx conversion was achieved with the Pt–Fe(2) alloy, much better than the results for the reference catalyst and the other library alloys. This study demonstrates a sequential high-throughput method for the practical evaluation of catalysts for the selective reduction of NO. PMID:27877438
2011-01-01
Background: Although many biological databases apply semantic web technologies, meaningful biological hypothesis testing is still not easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability to retrieve semantically relevant experimental data and the capability to perform relevant statistical tests on the retrieved data. Tissue microarray (TMA) data are semantically rich and contain many biologically important hypotheses awaiting high-throughput evaluation. Methods: An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented in the Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) on TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results: When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all were supported by Xperanto-RDF. Conclusions: We demonstrated the utility of high-throughput biological hypothesis testing. We believe that such preliminary investigation can be beneficial before performing highly controlled experiments. PMID:21342584
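The two-step pattern (semantic retrieval, then a statistical test over the result set) can be illustrated without an RDF store. In the sketch below a plain-Python filter stands in for the SPARQL step, and a hand-computed chi-square statistic stands in for the statistical test; the marker and outcome fields are invented, not Xperanto-RDF ontology terms:

```python
# Toy TMA records: (core_id, marker_level, recurrence)
cores = (
    [("c%02d" % i, "high", True)  for i in range(30)] +
    [("c%02d" % i, "high", False) for i in range(30, 40)] +
    [("c%02d" % i, "low",  True)  for i in range(40, 50)] +
    [("c%02d" % i, "low",  False) for i in range(50, 80)]
)

# Step 1 - retrieval: in Xperanto-RDF this is a SPARQL query shaped by the
# hypothesis; here it is a plain filter building a 2x2 contingency table.
table = [[sum(1 for _, m, r in cores if m == lvl and r == out)
          for out in (True, False)] for lvl in ("high", "low")]

# Step 2 - statistical test: chi-square statistic on the 2x2 table.
n = sum(map(sum, table))
rows = [sum(r) for r in table]
cols = [sum(c) for c in zip(*table)]
chi2 = sum((table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
           for i in range(2) for j in range(2))
print(table, chi2)
```

With this balanced toy table ([[30, 10], [10, 30]], all expected counts 20) the statistic is 20.0, far beyond the 1-degree-of-freedom significance threshold, so the hypothesized marker-outcome association would be supported.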
Improving Data Transfer Throughput with Direct Search Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaprakash, Prasanna; Morozov, Vitali; Kettimuthu, Rajkumar
2016-01-01
Improving data transfer throughput over high-speed long-distance networks has become increasingly difficult. Numerous factors such as nondeterministic congestion, dynamics of the transfer protocol, and multiuser and multitask source and destination endpoints, as well as interactions among these factors, contribute to this difficulty. A promising approach to improving throughput consists in using parallel streams at the application layer. We formulate and solve the problem of choosing the number of such streams from a mathematical optimization perspective. We propose the use of direct search methods, a class of easy-to-implement and light-weight mathematical optimization algorithms, to improve the performance of data transfers by dynamically adapting the number of parallel streams in a manner that does not require domain expertise, instrumentation, analytical models, or historic data. We apply our method to transfers performed with the GridFTP protocol, and illustrate the effectiveness of the proposed algorithm when used within Globus, a state-of-the-art data transfer tool, on production WAN links and servers. We show that when compared to user default settings our direct search methods can achieve up to 10x performance improvement under certain conditions. We also show that our method can overcome performance degradation due to external compute and network load on source end points, a common scenario at high performance computing facilities.
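The appeal of direct search here is that it needs only throughput observations, no model or gradient. A minimal one-dimensional compass search over the integer stream count conveys the idea; the throughput curve below is a synthetic, deterministic stand-in for a real GridFTP transfer measurement, and the step sizes are illustrative, not the paper's settings:

```python
def direct_search(measure, s0=1, step=4, s_min=1, s_max=64):
    """Compass search over the (integer) number of parallel streams:
    probe one step up and one step down, move if throughput improves,
    otherwise halve the step. No model, gradient, or history required."""
    s, best = s0, measure(s0)
    while step >= 1:
        for cand in (s + step, s - step):
            if s_min <= cand <= s_max and measure(cand) > best:
                s, best = cand, measure(cand)
                break
        else:
            step //= 2          # no improvement: contract the stencil
    return s, best

# Synthetic throughput curve: gains from parallelism, then congestion bites.
throughput = lambda s: s / (1.0 + (s / 12.0) ** 2)
s_opt, t_opt = direct_search(throughput)
print(s_opt, t_opt)
```

On this curve the search settles at 12 streams, the analytic optimum; on a live link `measure` would launch a short probing transfer, which is why keeping the number of probes small matters.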
CFTLB: a novel cross-layer fault tolerant and load balancing protocol for WMN
NASA Astrophysics Data System (ADS)
Krishnaveni, N. N.; Chitra, K.
2017-12-01
Wireless mesh networks (WMNs) form a wireless backbone for multi-hop transmission among routers and clients over an extensible coverage area. To improve the throughput of WMNs with multiple gateways (GWs), several issues related to GW selection, load balancing and frequent link failures, caused by dynamic obstacles and channel interference, must be addressed. This paper presents a novel cross-layer fault tolerant and load balancing (CFTLB) protocol to overcome these issues. Initially, neighbour GWs are searched and their channel loads are calculated; the GW with the least channel load, estimated on the arrival of a new node, is selected. The proposed algorithm finds alternate GWs and calculates their channel availability under high-load scenarios: if the current load on a GW is high, another GW is found and its channel availability is calculated. The protocol then initiates channel switching and establishes communication with the mesh client effectively. The hashing technique used in CFTLB verifies the status of packets, and the protocol achieves better average router throughput, overall throughput and average channel access time, with lower end-to-end delay, communication overhead and average data loss in the channel, compared to existing protocols.
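The gateway-selection step reduces to picking the least-loaded neighbour gateway and signalling that channel switching is needed when even that gateway is overloaded. A minimal sketch of that decision only; the load metric and the threshold value are placeholders, not quantities defined by the CFTLB protocol:

```python
def select_gateway(loads, threshold=0.8):
    """Pick the least-loaded gateway from {gateway_id: channel_load}.
    Returns (gateway, ok): ok is False when even the best gateway exceeds
    the load threshold, i.e. channel switching should be initiated."""
    gw = min(loads, key=loads.get)
    return gw, loads[gw] <= threshold

# A new node arrives and sees three neighbour gateways:
loads = {"gw1": 0.92, "gw2": 0.35, "gw3": 0.60}
print(select_gateway(loads))   # ('gw2', True)
```

Recomputing this choice on each node arrival, as the abstract describes, is what spreads load across gateways instead of letting the first-discovered gateway saturate.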
Bergeron, Vance; Chalfine, Annie; Misset, Benoît; Moules, Vincent; Laudinet, Nicolas; Carlet, Jean; Lina, Bruno
2011-05-01
Evidence has recently emerged indicating that in addition to large airborne droplets, fine aerosol particles can be an important mode of influenza transmission that may have been hitherto underestimated. Furthermore, recent performance studies evaluating airborne infection isolation (AII) rooms designed to house infectious patients have revealed major discrepancies between what is prescribed and what is actually measured. We conducted an experimental study to investigate the use of high-throughput in-room air decontamination units for supplemental protection against airborne contamination in areas that host infectious patients. The study included both intrinsic performance tests of the air-decontamination unit against biological aerosols of particular epidemiologic interest and field tests in a hospital AII room under different ventilation scenarios. The tested unit efficiently eradicated airborne H5N2 influenza and Mycobacterium bovis (a 4- to 5-log single-pass reduction) and, when implemented with a room extractor, reduced peak contamination levels by a factor of 5, with decontamination rates at least 33% faster than those achieved with the extractor alone. High-throughput in-room air treatment units can provide supplemental control of airborne pathogen levels in patient isolation rooms. Copyright © 2011 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling- and simulation-driven testing and fault-tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the above-mentioned capabilities, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which is known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is employed. Testing and fault-tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a fault-tolerance module, all interacting through a central graphical user interface.
Shih, Tsung-Ting; Hsieh, Cheng-Chuan; Luo, Yu-Ting; Su, Yi-An; Chen, Ping-Hung; Chuang, Yu-Chen; Sun, Yuh-Chang
2016-04-15
Herein, a hyphenated system combining a high-throughput solid-phase extraction (htSPE) microchip with inductively coupled plasma-mass spectrometry (ICP-MS) for rapid determination of trace heavy metals was developed. Rather than performing multiple analyses in parallel to enhance analytical throughput, we improved the processing speed for individual samples by increasing the operating flow rate during the SPE procedure. To this end, an innovative device combining a micromixer and a multi-channeled extraction unit was designed. Furthermore, a programmable valve manifold was used to interface the developed microchip with the ICP-MS instrumentation in order to fully automate the system, leading to a dramatic reduction in operation time and human error. Under the optimized operating conditions, detection limits of 1.64-42.54 ng L⁻¹ for the analyte ions were achieved. Validation procedures demonstrated that the developed method could be satisfactorily applied to the determination of trace heavy metals in natural water. Each analysis could be readily accomplished within just 186 s, which represents, to the best of our knowledge, an unprecedented speed for the analysis of trace heavy metal ions. Copyright © 2016 Elsevier B.V. All rights reserved.
Ausar, Salvador F; Chan, Judy; Hoque, Warda; James, Olive; Jayasundara, Kavisha; Harper, Kevin
2011-02-01
High-throughput screening (HTS) of excipients for proteins in solution can be achieved by several analytical techniques. Screening stabilizers for proteins adsorbed onto adjuvants, however, is difficult because few techniques can measure the stability of an adsorbed protein in high-throughput mode. Here, we demonstrate that extrinsic fluorescence spectroscopy can be successfully applied to study the physical stability of adsorbed antigens at low concentrations in 96-well plates, using a real-time polymerase chain reaction (RT-PCR) instrument. HTS was performed on three adjuvanted pneumococcal proteins as model antigens in the presence of a standard library of stabilizers. Aluminum hydroxide appeared to decrease the stability of all three proteins at relatively high and low pH values, producing a bell-shaped curve as the pH was increased from 5 to 9 with maximum stability near neutral pH. Nonspecific stabilizers such as mono- and disaccharides could increase the conformational stability of the antigens. In addition, excipients that increased the melting temperature of the adsorbed antigens could improve antigenicity and chemical stability. To the best of our knowledge, this is the first report of an HTS technology amenable to low concentrations of antigens adsorbed onto aluminum-containing adjuvants. Copyright © 2010 Wiley-Liss, Inc.
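Extrinsic-fluorescence thermal-shift data of this kind are typically reduced to a melting temperature by locating the inflection of the fluorescence-versus-temperature curve, i.e. the extremum of dF/dT. A sketch on a synthetic two-state melt; the sigmoid and its parameters are illustrative, not data from the study:

```python
import numpy as np

T = np.arange(25.0, 95.5, 0.5)     # deg C, a typical thermal ramp
Tm_true = 62.0

# Two-state unfolding model: extrinsic dye fluorescence rises as the
# protein unfolds and exposes hydrophobic surface to the dye.
F = 1.0 / (1.0 + np.exp(-(T - Tm_true) / 2.5))

dFdT = np.gradient(F, T)           # numerical derivative of the melt curve
Tm_est = T[np.argmax(dFdT)]        # Tm = inflection point
print(Tm_est)
```

Comparing the Tm recovered this way across wells is what turns a plate of melt curves into a ranking of excipients: a stabilizer shifts the whole sigmoid, and hence the dF/dT peak, to higher temperature.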
Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E
2014-01-01
Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reaction (qPCR) has become the gold standard for quantifying gene expression. Microfluidic, next-generation high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard and high-throughput qPCR. The technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. The method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
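The core of such an input-quantity method can be written in a few lines: efficiency-corrected abundance is proportional to (1+E)^(-Cq), it is divided by the input quantity (e.g. cell count) rather than by a control gene, and a universal reference cDNA sample run alongside normalizes across batches and instruments. The sketch below is a plausible reading of that scheme, not the paper's exact formula:

```python
def abundance_per_input(cq, efficiency, input_cells):
    """Efficiency-corrected template abundance per input cell:
    (1 + E)**(-Cq) is proportional to the starting copy number."""
    return (1.0 + efficiency) ** (-cq) / input_cells

def normalized_expression(cq, eff, cells, cq_ref, eff_ref):
    """Abundance relative to a universal reference cDNA measured in the
    same batch, cancelling batch- and instrument-level effects."""
    return abundance_per_input(cq, eff, cells) / (1.0 + eff_ref) ** (-cq_ref)

# With perfect efficiency (E = 1), a sample crossing threshold 3 cycles
# earlier than another (same cell input) holds 2**3 = 8x more template.
a = abundance_per_input(cq=22.0, efficiency=1.0, input_cells=1000)
b = abundance_per_input(cq=25.0, efficiency=1.0, input_cells=1000)
print(a / b)   # 8.0
```

Because each gene's measured efficiency enters the exponent, the comparison stays valid even when amplification efficiencies differ between assays or instruments, which is the situation the universal reference sample is meant to handle.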
NASA Astrophysics Data System (ADS)
Marenzana, Massimo; Hagen, Charlotte K.; Das Neves Borges, Patricia; Endrizzi, Marco; Szafraniec, Magdalena B.; Ignatyev, Konstantin; Olivo, Alessandro
2012-12-01
Being able to quantitatively assess articular cartilage in three dimensions (3D) in small rodent animal models, with a simple laboratory set-up, would prove extremely important for the development of pre-clinical research focusing on cartilage pathologies such as osteoarthritis (OA). These models are becoming essential tools for the development of new drugs for OA, a disease affecting up to one third of the population older than 50 years, for which there is no cure except prosthetic surgery. However, due to limitations in imaging technology, high-throughput 3D structural imaging has not been achievable in small rodent models, thereby limiting their translational potential and their efficiency as research tools. We show that a simple laboratory system based on coded-aperture x-ray phase contrast imaging (CAXPCi) can correctly visualize the cartilage layer in slices of an excised rat tibia imaged both in air and in saline solution. Moreover, we show that small, surgically induced lesions are also correctly detected by the CAXPCi system, and we support this finding with histopathology examination. Following these successful proof-of-concept results in rat cartilage, we expect that an upgrade of the system to higher resolutions (currently underway) will enable extending the method to the imaging of mouse cartilage as well. From a technological standpoint, by showing the capability of the system to detect cartilage in water as well, we demonstrate phase sensitivity comparable to other lab-based phase methods (e.g., grating interferometry). In conclusion, CAXPCi holds strong potential for adoption as a routine laboratory tool for non-destructive, high-throughput assessment of 3D structural changes in murine articular cartilage, with a possible impact on the field similar to the revolution that conventional microCT brought to bone research.
Kang, Aram; Meadows, Corey W.; Canu, Nicolas; ...
2017-04-05
Isopentenol (or isoprenol, 3-methyl-3-buten-1-ol) is a drop-in biofuel and a precursor for commodity chemicals such as isoprene. Biological production of isopentenol via the mevalonate pathway has been optimized extensively in Escherichia coli, yielding 70% of its theoretical maximum. However, high ATP requirements and isopentenyl diphosphate (IPP) toxicity pose immediate challenges for engineering bacterial strains to overproduce commodities utilizing IPP as an intermediate. To overcome these limitations, we developed an “IPP-bypass” isopentenol pathway using the promiscuous activity of a mevalonate diphosphate decarboxylase (PMD) and demonstrated improved performance under aeration-limited conditions. However, relatively low activity of PMD toward the non-native substrate (mevalonate monophosphate, MVAP) was shown to limit flux through this new pathway. By inhibiting all IPP production from the endogenous non-mevalonate pathway, we developed a high-throughput screening platform that correlated promiscuous PMD activity toward MVAP with cellular growth. Successful identification of mutants that altered PMD activity demonstrated the sensitivity and specificity of the screening platform. Strains with evolved PMD mutants and the novel IPP-bypass pathway increased titers up to 2.4-fold. Further enzymatic characterization of the evolved PMD variants suggested that higher isopentenol titers could be achieved either by altering residues directly interacting with substrate and cofactor or by altering residues on nearby α-helices. These altered residues could facilitate the production of isopentenol by tuning either the kcat or the Ki of PMD for the non-native substrate. The synergistic modification made on PMD for the IPP-bypass mevalonate pathway is expected to significantly facilitate the industrial-scale production of isopentenol.
Salta, Maria; Dennington, Simon P; Wharton, Julian A
2018-05-10
The use of natural products (NPs) as possible alternative biocidal compounds for use in antifouling coatings has been the focus of research over the past decades. Despite the importance of this field, the efficacy of a given NP against biofilm (mainly bacteria and diatoms) formation is tested with the NP being in solution, while almost no studies test the effect of an NP once incorporated into a coating system. The development of a novel bioassay to assess the activity of NP-containing and biocide-containing coatings against marine biofilm formation has been achieved using a high-throughput microplate reader and highly sensitive confocal laser scanning microscopy (CLSM), as well as nucleic acid staining. Juglone, an isolated NP that has previously shown efficacy against bacterial attachment, was incorporated into a simple coating matrix. Biofilm formation over 48 h was assessed and compared against coatings containing the NP and the commonly used booster biocide, cuprous oxide. Leaching of the NP from the coating was quantified at two time points, 24 h and 48 h, showing evidence of both juglone and cuprous oxide being released. Results from the microplate reader showed that the NP coatings exhibited antifouling efficacy, significantly inhibiting biofilm formation when compared to the control coatings, while NP coatings and the cuprous oxide coatings performed equally well. CLSM results and COMSTAT analysis on biofilm 3D morphology showed comparable results when the NP coatings were tested against the controls, with higher biofilm biovolume and maximum thickness being found on the controls. This new method proved to be repeatable and insightful and we believe it is applicable in antifouling and other numerous applications where interactions between biofilm formation and surfaces is of interest.
Chien, Jun-Chau; Ameri, Ali; Yeh, Erh-Chia; Killilea, Alison N; Anwar, Mekhail; Niknejad, Ali M
2018-06-06
This work presents a microfluidics-integrated, label-free flow cytometry-on-a-CMOS platform for the characterization of cytoplasm dielectric properties at microwave frequencies. Compared with MHz impedance cytometers, operating at GHz frequencies offers direct intracellular permittivity probing, because the electric fields penetrate the cellular membrane. To overcome the detection challenges at high frequencies, the spectrometer employs on-chip oscillator-based sensors, which embed simultaneous frequency generation, electrode excitation, and signal detection. By employing an injection-locking phase-detection technique, the spectrometer offers state-of-the-art sensitivity, achieving a capacitance detection limit below 1 aFrms (or 5 ppm in frequency shift) at a 100 kHz noise-filtering bandwidth, enabling high throughput (>1k cells per s) with a measured cellular SNR of more than 28 dB. With CMOS/microfluidics co-design, we distribute four sensing channels at 6.5, 11, 17.5, and 30 GHz in an arrayed format, where the frequencies are selected to center around the water relaxation frequency at 18 GHz. The size mismatch encountered when integrating CMOS with microfluidics is also addressed by introducing a cost-efficient epoxy-molding technique. With 3-D hydrodynamic focusing microfluidics, we characterize four different cell lines, including two breast cell lines (MCF-10A and MDA-MB-231) and two leukocyte cell lines (K-562 and THP-1). After normalizing the higher-frequency signals to the 6.5 GHz ones, the size-independent dielectric opacity shows a differentiable distribution at 17.5 GHz between normal (0.905 ± 0.160, mean ± std.) and highly metastatic (1.033 ± 0.107) breast cells with p ≪ 0.001.
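The quoted equivalence between a 5 ppm frequency shift and a sub-attofarad capacitance change follows from standard LC-oscillator sensitivity, |Δf/f| ≈ ΔC/(2C). A back-of-the-envelope check, assuming a representative 100 fF tank capacitance (a value not stated in the abstract):

```python
def cap_shift_from_freq_shift(ppm_shift, tank_cap_farads):
    # f ∝ 1/sqrt(LC), so for small changes |Δf/f| ≈ ΔC/(2C),
    # giving ΔC ≈ 2C · |Δf/f|.
    return 2.0 * tank_cap_farads * ppm_shift * 1e-6

delta_c = cap_shift_from_freq_shift(5, 100e-15)  # 5 ppm on a 100 fF tank
# → 1e-18 F, i.e. 1 aF, consistent with the reported detection limit
```

The hypothetical 100 fF tank is chosen only to show the orders of magnitude line up; the actual tank capacitance of the chip is not given here.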
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of a microprocessor is a long, tedious, and error-prone task, typically consisting of four phases: architecture exploration; software design (assembler, linker, loader, profiler); architecture implementation (RTL generation for FPGA or cell-based ASIC); and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modeled not only from the instruction set but also from the architecture description, including pipelining behavior, providing design- and development-tool consistency across all levels of the design. To explore the capability of the LISA processor design platform (a.k.a. CoWare Processor Designer), we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines, and many general-purpose registers, it turns out that DSP operations consume considerable processing time in a RISC processor. In a second step, we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation, along with indirect addressing, was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology: today's FPGAs offer a large number of embedded array multipliers, making it feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement a TVP offers over traditional RISC or PDSP designs.
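The multiply-accumulate bottleneck described above is visible even in a scalar sketch of one DWT analysis level; the code below is a generic decimating two-channel filter bank (Haar filters shown for brevity), not the specific 8/8 filters of the FBI scheme:

```python
def dwt_level(x, lo, hi):
    """One DWT analysis level: two FIR filters decimated by 2.

    The inner loop is the multiply-accumulate (MAC) that a PDSP
    executes in one cycle per tap, and that a vector processor can
    collapse into one or two cycles per whole product.
    """
    k = len(lo)
    approx, detail = [], []
    for i in range(0, len(x) - k + 1, 2):
        a = d = 0.0
        for j in range(k):            # MAC loop: the hot spot on a RISC
            a += lo[j] * x[i + j]
            d += hi[j] * x[i + j]
        approx.append(a)
        detail.append(d)
    return approx, detail

s = 2 ** -0.5                          # Haar analysis filters
approx, detail = dwt_level([1.0, 1.0, 2.0, 2.0], [s, s], [s, -s])
```

On a plain RISC without a MAC instruction, each tap costs a separate multiply, add, and address update, which is exactly the overhead the PDSP and TVP designs remove.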
Yang, Yang; Fu, Xiaofeng; Qu, Wenhao; Xiao, Yiqun; Shen, Hong-Bin
2018-04-27
Benefiting from high-throughput experimental technologies, whole-genome analysis of microRNAs (miRNAs) has become more and more common as a way to uncover important regulatory roles of miRNAs and to identify miRNA biomarkers for disease diagnosis. As complementary information to the high-throughput experimental data, domain knowledge such as the Gene Ontology (GO) and KEGG pathways is usually used to guide gene function analysis. However, functional annotation for miRNAs is scarce in the public databases. Until now, only a few methods have been proposed for measuring the functional similarity between miRNAs based on public annotation data, and these methods cover a very limited number of miRNAs, making them inapplicable to large-scale miRNA analysis. In this paper, we propose a new method to measure the functional similarity of miRNAs, called miRGOFS, which has two notable features: (I) it adopts a new GO semantic similarity metric which considers both common ancestors and descendants of GO terms; (II) it computes similarity between GO sets in an asymmetric manner, and weights each GO term by its statistical significance. The miRGOFS-based predictor achieves an F1 of 61.2% on a benchmark data set of miRNA localization, and AUC values of 87.7% and 81.1% on two benchmark sets of miRNA-disease association, respectively. Compared with existing functional similarity measurements of miRNAs, miRGOFS has the advantages of higher accuracy and larger coverage of human miRNAs (over 1000 miRNAs). http://www.csbio.sjtu.edu.cn/bioinf/MiRGOFS/. yangyang@cs.sjtu.edu.cn or hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.
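The asymmetric, significance-weighted set comparison can be sketched as a best-match average; `term_sim` and `weight` below stand in for the paper's GO semantic metric and statistical-significance weights, and the formula is illustrative rather than the exact miRGOFS definition:

```python
def go_set_similarity(terms_a, terms_b, term_sim, weight):
    """Asymmetric similarity of GO set A against GO set B.

    Each term in A is matched to its best counterpart in B, weighted
    by the term's significance. Note that in general
    sim(A, B) != sim(B, A), which is the asymmetry the method exploits.
    """
    num = sum(weight(a) * max(term_sim(a, b) for b in terms_b) for a in terms_a)
    den = sum(weight(a) for a in terms_a)
    return num / den

# Toy check with exact-match similarity and unit weights
exact = lambda a, b: 1.0 if a == b else 0.0
unit = lambda a: 1.0
score = go_set_similarity(["t1", "t2"], ["t2", "t3"], exact, unit)
# → 0.5: only t2 finds a match in the other set
```

A richer `term_sim` (e.g., one that credits shared ancestors and descendants in the GO graph) would give partial credit where the toy exact-match metric gives zero.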
Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi
2015-09-01
Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied this method to develop algorithms that identify patients with rheumatoid arthritis (RA), and coronary artery disease (CAD) cases among those with RA, from a large multi-institutional EHR. The areas under the receiver operating characteristic curves (AUC) for classifying RA and CAD using models trained with automated features were 0.951 and 0.929, respectively, compared to AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure thus achieved comparable or slightly higher accuracy than those trained with expert-curated features, and the majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
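The classification step described above amounts to a penalized logistic regression over automatically extracted concept features; a minimal sketch on synthetic data (the Poisson "concept counts" and the label model are placeholders, not the study's pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50)).astype(float)  # mock NLP concept counts
true_coef = np.zeros(50)
true_coef[:5] = 1.0                                 # 5 informative concepts
prob = 1.0 / (1.0 + np.exp(-(X @ true_coef - 5.0)))
y = rng.binomial(1, prob)                           # mock phenotype labels

# The L1 penalty drives uninformative concept weights toward zero,
# mimicking the automated feature-selection behavior described above.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

In the actual study the feature matrix would hold NLP-derived concept occurrence patterns plus codified features, and the AUC would be evaluated against gold-standard chart review rather than in-sample.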
Kosa, Gergely; Vuoristo, Kiira S; Horn, Svein Jarle; Zimmermann, Boris; Afseth, Nils Kristian; Kohler, Achim; Shapaval, Volha
2018-06-01
Recent developments in molecular biology and metabolic engineering have resulted in a large increase in the number of strains that need to be tested, positioning high-throughput screening of microorganisms as an important step in bioprocess development. Scalability is crucial for performing reliable screening of microorganisms. Most scalability studies from microplate screening systems to controlled stirred-tank bioreactors have so far been performed with unicellular microorganisms. We have compared cultivation of industrially relevant oleaginous filamentous fungi and a microalga in a Duetz-microtiter plate system against benchtop and pre-pilot bioreactors. Maximal glucose consumption rate, biomass concentration, lipid content of the biomass, and biomass and lipid yield values showed good scalability for the filamentous fungi Mucor circinelloides (less than 20% differences) and Mortierella alpina (less than 30% differences). Maximal glucose consumption and biomass production rates were identical for Crypthecodinium cohnii in the microtiter plate and the benchtop bioreactor. Most likely due to the shear-stress sensitivity of this microalga in a stirred bioreactor, biomass concentration and lipid content of the biomass were significantly higher in the microtiter plate system than in the benchtop bioreactor. Still, the fermentation results obtained in the Duetz-microtiter plate system for Crypthecodinium cohnii are encouraging compared with what has been reported in the literature. Good reproducibility (coefficient of variation less than 15% for biomass growth, glucose consumption, lipid content, and pH) was achieved in the Duetz-microtiter plate system for Mucor circinelloides and Crypthecodinium cohnii; the reproducibility of Mortierella alpina cultivation might be improved by optimizing inoculation. In conclusion, we have demonstrated the suitability of the Duetz-microtiter plate system for reproducible, scalable, and cost-efficient high-throughput screening of oleaginous microorganisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Hagyoung; Shin, Seokyoon; Jeon, Hyeongtag, E-mail: hjeon@hanyang.ac.kr
2016-01-15
The authors developed a high throughput (70 Å/min) and scalable space-divided atomic layer deposition (ALD) system for thin film encapsulation (TFE) of flexible organic light-emitting diode (OLED) displays at low temperatures (<100 °C). In this paper, the authors report the excellent moisture barrier properties of Al₂O₃ films deposited on 2G glass substrates of an industrially relevant size (370 × 470 mm²) using the newly developed ALD system. This new ALD system reduced the ALD cycle time to less than 1 s. A growth rate of 0.9 Å/cycle was achieved using trimethylaluminum as the Al source and O₃ as the O reactant. The morphological features and step coverage of the Al₂O₃ films were investigated using field emission scanning electron microscopy. The chemical composition was analyzed using Auger electron spectroscopy. The deposited Al₂O₃ films demonstrated a good optical transmittance, higher than 95% in the visible region, based on ultraviolet-visible spectrometer measurements. Water vapor transmission rates lower than the detection limit of the MOCON test (less than 3.0 × 10⁻³ g/m² day) were obtained for the flexible substrates. Based on these results, Al₂O₃ deposited using our new high-throughput and scalable spatial ALD is considered a good candidate for the preparation of TFE films for flexible OLEDs.
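The throughput figures quoted above are mutually consistent; a quick arithmetic check (variable names are ours):

```python
growth_per_cycle_A = 0.9        # Å deposited per ALD cycle
deposition_rate_A_min = 70.0    # Å deposited per minute

cycles_per_min = deposition_rate_A_min / growth_per_cycle_A  # ≈ 77.8
cycle_time_s = 60.0 / cycles_per_min
# ≈ 0.77 s per cycle, matching the reported sub-second cycle time
```

This is roughly an order of magnitude faster than typical time-divided ALD cycles, which is the point of the space-divided design.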
Temperature-Ramped 129Xe Spin-Exchange Optical Pumping
2015-01-01
We describe temperature-ramped spin-exchange optical pumping (TR-SEOP) in an automated high-throughput batch-mode 129Xe hyperpolarizer utilizing three key temperature regimes: (i) “hot”—where the 129Xe hyperpolarization rate is maximal, (ii) “warm”—where the 129Xe hyperpolarization approaches unity, and (iii) “cool”—where hyperpolarized 129Xe gas is transferred into a Tedlar bag with low Rb content (<5 ng per ∼1 L dose) suitable for human imaging applications. Unlike with the conventional approach of batch-mode SEOP, here all three temperature regimes may be operated under continuous high-power (170 W) laser irradiation, and hyperpolarized 129Xe gas is delivered without the need for a cryocollection step. The variable-temperature approach increased the SEOP rate by more than 2-fold compared to the constant-temperature polarization rate (e.g., giving effective values for the exponential buildup constant γSEOP of 62.5 ± 3.7 × 10⁻³ min⁻¹ vs 29.9 ± 1.2 × 10⁻³ min⁻¹) while achieving nearly the same maximum %PXe value (88.0 ± 0.8% vs 90.1 ± 0.8%, for a 500 Torr (67 kPa) Xe cell loading—corresponding to nuclear magnetic resonance/magnetic resonance imaging (NMR/MRI) enhancements of ∼3.1 × 10⁵ and ∼2.32 × 10⁸ at the relevant fields for clinical imaging and HP 129Xe production of 3 T and 4 mT, respectively); moreover, the intercycle “dead” time was also significantly decreased. The higher-throughput TR-SEOP approach can be implemented without sacrificing the level of 129Xe hyperpolarization or the experimental stability for automation—making this approach beneficial for improving the overall 129Xe production rate in clinical settings. PMID:25008290
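The buildup constants quoted above translate directly into polarization times via the standard exponential SEOP model, P(t) = Pmax·(1 − exp(−γSEOP·t)); a small sketch comparing the time to reach 95% of maximum polarization for the two regimes:

```python
import math

def polarization(t_min, gamma_per_min, p_max):
    # Standard exponential spin-exchange buildup model.
    return p_max * (1.0 - math.exp(-gamma_per_min * t_min))

T95 = -math.log(0.05)                  # ≈ 3.0 time constants to 95%
t95_ramped_min = T95 / 62.5e-3         # temperature-ramped SEOP, ~48 min
t95_const_min = T95 / 29.9e-3          # constant-temperature SEOP, ~100 min
```

The roughly halved buildup time, combined with the shorter intercycle dead time, is what drives the overall throughput gain of the TR-SEOP approach.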
Hou, Jian; Chen, Suming; Cao, Changyan; Liu, Huihui; Xiong, Caiqiao; Zhang, Ning; He, Qing; Song, Weiguo; Nie, Zongxiu
2016-08-01
Matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) is a high-throughput method that can achieve fast and accurate identification of lead (Pb) exposure, but it is seldom used for this purpose because of low ionization efficiency and insufficient sensitivity. Nanomaterials applied in MS are a promising way to overcome the obstacles of MALDI. Here, flowerlike MgO nanostructures are applied for highly sensitive lead profiling in real samples. They can be used in two ways: (a) MgO is mixed with N-naphthylethylenediamine dihydrochloride (NEDC) as a novel matrix, MgO/NEDC; (b) MgO is applied as an adsorbent to enrich Pb ions from very dilute solution. The signal intensities of lead with MgO/NEDC were ten times higher than with the NEDC matrix alone. The system also shows superior anti-interference ability when analyzing 10 μmol/L Pb ions in the presence of organic substances or interfering metal ions. By applying MgO as adsorbent, the LOD of lead before enrichment is 1 nmol/L. Blood lead testing can be achieved using this enrichment process. In addition, MgO can play the role of internal standard to achieve quantitative analysis. The method is helpful for preventing Pb contamination in a wide range of settings. Further, the combination of MgO with MALDI MS could inspire more nanomaterials to be applied in highly sensitive profiling of pollutants. Copyright © 2016 John Wiley & Sons, Ltd.
Wake Vortex Systems Cost/Benefits Analysis
NASA Technical Reports Server (NTRS)
Crisp, Vicki K.
1997-01-01
The goals of cost/benefit assessments are to provide quantitative and qualitative data to aid in the decision-making process. Benefits derived from increased throughput (or decreased delays) are used to balance life-cycle costs. Packaging technologies together may provide greater gains and demonstrate a higher return on investment.
Laser processing of organic photovoltaic cells with a roll-to-roll manufacturing process
NASA Astrophysics Data System (ADS)
Petsch, Tino; Haenel, Jens; Clair, Maurice; Keiper, Bernd; Scholz, Christian
2011-03-01
Flexible large-area organic photovoltaics (OPV) is currently one of the fastest developing areas of organic electronics. New light-absorbing polymer blends combined with new transparent conductive materials provide higher power conversion efficiencies, while new and improved production methods are developed to achieve higher throughput at reduced cost. A typical OPV cell is formed by a TCO layer as the transparent front contact, a polymer active layer, and an interface layer between the active layer and the front contact. These materials have to be patterned in order to allow for a row connection of the solar cells. 3D-Micromac used ultra-short pulsed lasers to evaluate the applicability of various wavelengths for the selective ablation of the indium tin oxide (ITO) layer and for the selective ablation of the bulk heterojunction (BHJ), consisting of poly(3-hexylthiophene):phenyl-C61-butyric acid methyl ester (P3HT:PCBM), on top of a poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate) (PEDOT:PSS) layer without damaging the ITO. These lasers, in combination with high-performance galvanometer scanning systems, achieve superior scribing quality without damaging the substrate. With scribing speeds of 10 m/s and up, it is possible to integrate this technology into a roll-to-roll manufacturing tool. The functionality of an OPV cell usually also requires an annealing step, especially when using a BHJ active layer consisting of P3HT:PCBM, to optimize the layer structure and therewith the efficiency of the solar cell (typically by thermal treatment, e.g., in an oven). The process of laser annealing was investigated using a short-pulsed laser with a wavelength close to the absorption maximum of the BHJ.
NASA Astrophysics Data System (ADS)
Mousa, MoatazBellah Mahmoud
Atomic Layer Deposition (ALD) is a vapor-phase nano-coating process that deposits very uniform and conformal thin-film materials with sub-angstrom-level thickness control on various substrates. These unique properties have made ALD a platform technology for numerous products and applications. However, most of these applications are limited to the lab scale due to the low process throughput relative to other deposition techniques, which hinders industrial adoption. In addition to the low throughput, process development for certain applications usually faces other obstacles, such as a required new processing mode (e.g., batch vs continuous) or process conditions (e.g., low temperature), the absence of an appropriate reactor design for a specific substrate, and sometimes the lack of a suitable chemistry. This dissertation studies different aspects of ALD process development for prospective applications in the semiconductor, textile, and battery industries, as well as novel organic-inorganic hybrid materials. The investigation of a high-pressure, low-temperature ALD process for metal oxide deposition using multiple process chemistries revealed the vital importance of the gas velocity over the substrate for achieving fast depositions at these challenging processing conditions. Also in this work, two unique high-throughput ALD reactor designs are reported. The first is a continuous roll-to-roll ALD reactor for ultra-fast coatings on porous, flexible substrates with very high surface area. The second is an ALD delivery head that allows for in loco ALD coatings that can be executed under ambient conditions (even outdoors) on large surfaces while still maintaining very high deposition rates. As a proof of concept, part of a parked automobile window was coated using the ALD delivery head.
Another process development shown herein is the improvement achieved in the selective synthesis of organic-inorganic materials using an ALD-based process called sequential vapor infiltration. Finally, the development of a new ALD chemistry for novel metal deposition is discussed; it was used to deposit thin films of tin metal for the first time in the literature using an ALD process. The various challenges addressed in this work for the development of different ALD processes help move ALD closer to widespread use and industrial integration.
Novich, Scott D; Eagleman, David M
2015-10-01
Touch receptors in the skin can relay various forms of abstract information, such as words (Braille), haptic feedback (cell phones, game controllers, feedback for prosthetic control), and basic visual information such as edges and shape (sensory substitution devices). The skin can support such applications with ease: they are all low bandwidth and do not require fine temporal acuity. But what of high-throughput applications? We use sound-to-touch conversion as a motivating example, though others abound (e.g., vision, stock market data). In the past, vibrotactile hearing aids have demonstrated improvements in speech perception in the deaf. However, a sound-to-touch sensory substitution device that works with high efficacy and without the aid of lipreading has yet to be developed. Is this because the skin simply does not have the capacity to effectively relay high-throughput streams such as sound? Or is it because the spatial and temporal properties of the skin have not been leveraged to full advantage? Here, we begin to address these questions with two experiments. First, we seek to determine the best method of relaying information through the skin using an identification task on the lower back. We find that vibrotactile patterns encoding information in both space and time yield the best overall information transfer estimate. Patterns encoded in space and time, or in space and "intensity" (the coupled coding of vibration frequency and force), both far exceed the performance of purely spatially encoded patterns. Next, we determine the vibrotactile two-tacton resolution on the lower back, the distance necessary for resolving two vibrotactile patterns. We find that our vibratory motors conservatively require at least 6 cm of separation to resolve two independent tactile patterns (>80% correct), regardless of stimulus type (e.g., spatiotemporal "sweeps" versus single vibratory pulses).
Six centimeters is a greater distance than the inter-motor distance used in Experiment 1 (2.5 cm), which explains the poor identification performance of spatially encoded patterns. Hence, when using an array of vibration motors, spatiotemporal sweeps can overcome the limitations of vibrotactile two-tacton resolution. The results provide the first steps toward obtaining a realistic estimate of the skin's achievable throughput, illustrating the best ways to encode data to the skin (using as many dimensions as possible) and how far such interfaces would need to be separated if using multiple arrays in parallel.
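Information transfer estimates of the kind used in Experiment 1 are conventionally computed as the mutual information of the stimulus-response confusion matrix; a minimal sketch (not the authors' exact estimator, which may include bias correction):

```python
import math

def information_transfer_bits(confusion):
    """Mutual information (bits) of a stimulus x response count matrix."""
    total = float(sum(sum(row) for row in confusion))
    p_stim = [sum(row) / total for row in confusion]
    n_resp = len(confusion[0])
    p_resp = [sum(row[j] for row in confusion) / total for j in range(n_resp)]
    it = 0.0
    for i, row in enumerate(confusion):
        for j, count in enumerate(row):
            if count:
                p = count / total
                it += p * math.log2(p / (p_stim[i] * p_resp[j]))
    return it

# Perfect identification of 2 patterns carries exactly 1 bit;
# responses at chance carry 0 bits.
perfect = information_transfer_bits([[10, 0], [0, 10]])
chance = information_transfer_bits([[5, 5], [5, 5]])
```

Comparing this quantity across encoding schemes (spatial vs spatiotemporal vs space-intensity) gives the "information transfer estimate" the abstract refers to.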
Leung, Kaston; Klaus, Anders; Lin, Bill K; Laks, Emma; Biele, Justina; Lai, Daniel; Bashashati, Ali; Huang, Yi-Fei; Aniba, Radhouane; Moksa, Michelle; Steif, Adi; Mes-Masson, Anne-Marie; Hirst, Martin; Shah, Sohrab P; Aparicio, Samuel; Hansen, Carl L
2016-07-26
The genomes of large numbers of single cells must be sequenced to further the understanding of the biological significance of genomic heterogeneity in complex systems. Whole genome amplification (WGA) of single cells is generally the first step in such studies, but it is prone to nonuniformity that can compromise genomic measurement accuracy. Despite recent advances, robust performance in high-throughput single-cell WGA remains elusive. Here, we introduce droplet multiple displacement amplification (MDA), a method that uses commercially available liquid dispensing to perform high-throughput single-cell MDA in nanoliter volumes. The performance of droplet MDA is characterized using a large dataset of 129 normal diploid cells, and is shown to exceed previously reported single-cell WGA methods in amplification uniformity, genome coverage, and/or robustness. We achieve up to 80% coverage of a single-cell genome at 5× sequencing depth, and demonstrate excellent single-nucleotide variant (SNV) detection using targeted sequencing of droplet MDA product to achieve a median allelic dropout of 15%, and using whole genome sequencing to achieve false and true positive rates of 9.66 × 10⁻⁶ and 68.8%, respectively, in a G1-phase cell. We further show that droplet MDA allows for the detection of copy number variants (CNVs) as small as 30 kb in single cells of an ovarian cancer cell line, and as small as 9 Mb in two high-grade serous ovarian cancer samples, using only 0.02× depth. Droplet MDA provides an accessible and scalable method for performing robust and accurate CNV and SNV measurements on large numbers of single cells.
Lossless compression techniques for maskless lithography data
NASA Astrophysics Data System (ADS)
Dai, Vito; Zakhor, Avideh
2002-07-01
Future lithography systems must produce more dense chips with smaller feature sizes, while maintaining the throughput of one wafer per sixty seconds per layer achieved by today's optical lithography systems. To achieve this throughput with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In a previous paper, we presented an architecture which achieves this data rate contingent on consistent 25 to 1 compression of lithography data, and on implementation of a decoder-writer chip with a real-time decompressor fabricated on the same chip as the massively parallel array of lithography writers. In this paper, we examine the compression efficiency of a spectrum of techniques suitable for lithography data, including two industry standards JBIG and JPEG-LS, a wavelet based technique SPIHT, general file compression techniques ZIP and BZIP2, our own 2D-LZ technique, and a simple list-of-rectangles representation RECT. Layouts rasterized both to black-and-white pixels, and to 32 level gray pixels are considered. Based on compression efficiency, JBIG, ZIP, 2D-LZ, and BZIP2 are found to be strong candidates for application to maskless lithography data, in many cases far exceeding the required compression ratio of 25. To demonstrate the feasibility of implementing the decoder-writer chip, we consider the design of a hardware decoder based on ZIP, the simplest of the four candidate techniques. The basic algorithm behind ZIP compression is Lempel-Ziv 1977 (LZ77), and the design parameters of LZ77 decompression are optimized to minimize circuit usage while maintaining compression efficiency.
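Since the candidate decoder hinges on LZ77, the core decompression loop can be sketched in a few lines. This is an illustrative sketch only: the `('lit', …)`/`('ref', …)` token tuples below are a hypothetical in-memory format, not the chip's actual compressed bitstream.

```python
def lz77_decompress(tokens):
    """Decode a stream of LZ77 tokens into bytes.

    Each token is either a literal byte ('lit', b) or a back-reference
    ('ref', distance, length) copying `length` bytes starting `distance`
    bytes behind the current output position.
    """
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, dist, length = tok
            # byte-by-byte copy: a reference may overlap its own output,
            # which is how LZ77 encodes long runs cheaply
            for _ in range(length):
                out.append(out[-dist])
    return bytes(out)

# 'aaaa' decoded from one literal plus an overlapping back-reference
print(lz77_decompress([('lit', ord('a')), ('ref', 1, 3)]))  # b'aaaa'
```

In hardware, the output history the back-references reach into becomes a small buffer, and its depth is exactly the kind of design parameter the paper tunes to minimize circuit usage while preserving compression efficiency.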
Lab-on-a-Disc Platform for Automated Chemical Cell Lysis.
Seo, Moo-Jung; Yoo, Jae-Chern
2018-02-26
Chemical cell lysis is a topic of interest in research on Lab-on-a-Disc (LOD) platforms on account of its compatibility with the centrifugal spin-column format. However, standard chemical cell lysis procedures require sophisticated non-contact temperature control as well as pressure-resistant valves. These requirements pose a significant challenge, making the automation of chemical cell lysis on an LOD extremely difficult to achieve. In this study, an LOD capable of performing fully automated chemical cell lysis is proposed, combining chemical and thermal methods. It comprises a sample inlet, a phase change material sheet (PCMS)-based temperature sensor, a heating chamber, and pressure-resistant valves. The PCMS melts and solidifies at a specific temperature and is thus capable of indicating whether the heating chamber has reached that temperature. Compared to conventional cell lysis systems, the proposed system offers the advantages of reduced manual labor and a compact structure that can be readily integrated onto an LOD. Experiments using Salmonella typhimurium strains were conducted to confirm the performance of the proposed cell lysis system. The experimental results demonstrate that the proposed system has great potential for realizing chemical cell lysis on an LOD while achieving higher throughput in terms of purity and yield of DNA, providing a good alternative to conventional cell lysis systems.
Performance and stability of mask process correction for EBM-7000
NASA Astrophysics Data System (ADS)
Saito, Yasuko; Chen, George; Wang, Jen-Shiang; Bai, Shufeng; Howell, Rafael; Li, Jiangwei; Tao, Jun; VanDenBroeke, Doug; Wiley, Jim; Takigawa, Tadahiro; Ohnishi, Takayuki; Kamikubo, Takashi; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi
2010-05-01
In order to support complex optical masks today and EUV masks in the near future, it is critical to correct mask patterning errors with a magnitude of up to 20 nm over a range of 2000 nm at mask scale caused by short-range mask process proximity effects. A new mask process correction technology, MPC+, has been developed to achieve the target requirements for the next generation node. In this paper, the accuracy and throughput performance of MPC+ technology is evaluated using the most advanced mask writing tool, the EBM-7000, and high-quality mask metrology. The accuracy of MPC+ is achieved by using a new comprehensive mask model. The results of through-pitch and through-linewidth linearity curves and error statistics for multiple pattern layouts (including both 1D and 2D patterns) are demonstrated and show post-correction accuracy of 2.34 nm 3σ for through-pitch/through-linewidth linearity. By implementing faster mask model simulation and more efficient correction recipes, full mask area (100 cm²) processing run time is less than 7 hours for the 32 nm half-pitch technology node. From these results, it can be concluded that MPC+, with its higher precision and speed, is a practical technology for the 32 nm node and future technology generations, including EUV, when used with advanced mask writing processes like the EBM-7000.
FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography
Khan, S; Borsic, A; Manwaring, Preston; Hartov, Alexander; Halter, Ryan
2014-01-01
Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments’ PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application. PMID:24729790
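The FFT-based demodulation the FPGA offloads can be illustrated in software: from one frame of samples, only the complex bin values at the drive frequency and its harmonics are kept, which is where the 32:1 data reduction between FPGA and PC comes from. A minimal sketch (the frame length, frequencies, and scaling below are illustrative assumptions, not the system's actual parameters):

```python
import numpy as np

def demodulate_harmonics(samples, fs, f0, n_harmonics=3):
    """Estimate the complex amplitude (magnitude and phase) of f0 and its
    harmonics from one acquisition frame via the FFT, keeping only a few
    spectral bins instead of the raw sample stream."""
    n = len(samples)
    spectrum = np.fft.rfft(samples)
    bins = {}
    for k in range(1, n_harmonics + 1):
        idx = int(round(k * f0 * n / fs))   # FFT bin nearest k * f0
        bins[k] = 2.0 * spectrum[idx] / n   # unit-amplitude sinusoid -> |bin| == 1
    return bins

# One frame of a 10 kHz, 0.5-amplitude tone sampled at 1 MHz
# (frame length chosen so the tone falls exactly on an FFT bin):
fs, f0 = 1_000_000, 10_000
t = np.arange(5000) / fs
bins = demodulate_harmonics(0.5 * np.sin(2 * np.pi * f0 * t), fs, f0)
print(round(abs(bins[1]), 3))  # 0.5
```

Each frame of thousands of samples collapses to a handful of complex numbers per channel, which is the essence of the bandwidth reduction described above.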
Silicon surface passivation by PEDOT: PSS functionalized by SnO2 and TiO2 nanoparticles
NASA Astrophysics Data System (ADS)
García-Tecedor, M.; Karazhanov, S. Zh; Vásquez, G. C.; Haug, H.; Maestre, D.; Cremades, A.; Taeño, M.; Ramírez-Castellanos, J.; González-Calbet, J. M.; Piqueras, J.; You, C. C.; Marstein, E. S.
2018-01-01
In this paper, we present a study of silicon surface passivation based on the use of spin-coated hybrid composite layers. We investigate both undoped poly(3,4-ethylenedioxythiophene)/poly-(styrenesulfonate) (PEDOT:PSS), as well as PEDOT:PSS functionalized with semiconducting oxide nanomaterials (TiO2 and SnO2). The hybrid compound was deposited at room temperature by spin coating—a potentially lower cost, lower processing time and higher throughput alternative compared with the commonly used vacuum-based techniques. Photoluminescence imaging was used to characterize the electronic properties of the Si/PEDOT:PSS interface. Good surface passivation was achieved by PEDOT:PSS functionalized by semiconducting oxides. We show that control of the concentration of semiconducting oxide nanoparticles in the polymer is crucial in determining the passivation performance. A charge carrier lifetime of about 275 μs has been achieved when using SnO2 nanoparticles at a concentration of 0.5 wt.% as a filler in the composite film. X-ray diffraction (XRD), scanning electron microscopy, high resolution transmission electron microscopy (HRTEM), energy dispersive x-ray in an SEM, and μ-Raman spectroscopy have been used for the morphological, chemical and structural characterization. Finally, a simple model of a photovoltaic device based on PEDOT:PSS functionalized with semiconducting oxide nanoparticles has been fabricated and electrically characterized.
Jun, Young Jin; Park, Sung Hyeon; Woo, Seong Ihl
2014-12-08
A combinatorial high-throughput optical screening method was developed to find the optimum composition of highly active Pd-based catalysts at the cathode of the hybrid Li-air battery. Pd alone, which is one-third the cost of Pt, has difficulty replacing Pt; therefore, the integration of other metals was investigated to improve its performance toward the oxygen reduction reaction (ORR). Among the binary Pd-based catalysts, Pd-Ir derived catalysts had higher performance toward the ORR compared to other Pd-based binary combinations. The composition at 88:12 at.% (Pd:Ir) showed the highest activity toward the ORR at the cathode of the hybrid Li-air battery. The prepared Pd(88)Ir(12)/C catalyst showed a current density of -2.58 mA cm⁻² at 0.8 V (vs RHE), around 30% higher than that of Pd/C (-1.97 mA cm⁻²). When the prepared Pd(88)Ir(12)/C catalyst was applied to the hybrid Li-air battery, the polarization of the cell was reduced and the energy efficiency of the cell was about 30% higher than that of the cell with Pd/C.
Microfluidic resonant waveguide grating biosensor system for whole cell sensing
NASA Astrophysics Data System (ADS)
Zaytseva, Natalya; Miller, William; Goral, Vasily; Hepburn, Jerry; Fang, Ye
2011-04-01
We report on a fluidic resonant waveguide grating (RWG) biosensor system that enables medium-throughput measurements of cellular responses under microfluidics in a 32-well format. Dynamic mass redistribution assays under microfluidics differentiate the cross-desensitization process between the β2-adrenoceptor agonist epinephrine and adenylate cyclase activator forskolin-mediated signaling. This system opens new possibilities for studying cellular processes that are otherwise difficult to access using conventional RWG configurations.
Gaussian Random Fields Methods for Fork-Join Network with Synchronization Constraints
2014-12-22
substantial efforts were dedicated to the study of the max-plus recursions [21, 3, 12]. More recently, Atar et al. [2] have studied a fork-join...feedback and NES, Atar et al. [2] show that a dynamic priority discipline achieves throughput optimality asymptotically in the conventional heavy...2011) Patient flow in hospitals: a data-based queueing-science perspective. Submitted to Stochastic Systems, 20. [2] R. Atar, A. Mandelbaum and A
Quality Control for Ambient Sampling of PCDD/PCDF from Open Combustion Sources
Both long duration (> 6 h) and high temperature (up to 139 °C) sampling efforts were conducted using ambient air sampling methods to determine if either high volume throughput or higher than ambient sampling temperatures resulted in loss of target polychlorinated dibenzodioxins/d...
High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Wei; Shabbir, Faizan; Gong, Chao
2015-04-13
We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer, as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
Identifying apicoplast-targeting antimalarials using high-throughput compatible approaches
Ekland, Eric H.; Schneider, Jessica; Fidock, David A.
2011-01-01
Malarial parasites have evolved resistance to all previously used therapies, and recent evidence suggests emerging resistance to the first-line artemisinins. To identify antimalarials with novel mechanisms of action, we have developed a high-throughput screen targeting the apicoplast organelle of Plasmodium falciparum. Antibiotics known to interfere with this organelle, such as azithromycin, exhibit an unusual phenotype whereby the progeny of drug-treated parasites die. Our screen exploits this phenomenon by assaying for “delayed death” compounds that exhibit a higher potency after two cycles of intraerythrocytic development compared to one. We report a primary assay employing parasites with an integrated copy of a firefly luciferase reporter gene and a secondary flow cytometry-based assay using a nucleic acid stain paired with a mitochondrial vital dye. Screening of the U.S. National Institutes of Health Clinical Collection identified known and novel antimalarials including kitasamycin. This inexpensive macrolide, used for agricultural applications, exhibited an in vitro IC50 in the 50 nM range, comparable to the 30 nM activity of our control drug, azithromycin. Imaging and pharmacologic studies confirmed kitasamycin action against the apicoplast, and in vivo activity was observed in a murine malaria model. These assays provide the foundation for high-throughput campaigns to identify novel chemotypes for combination therapies to treat multidrug-resistant malaria.—Ekland, E. H., Schneider, J., Fidock, D. A. Identifying apicoplast-targeting antimalarials using high-throughput compatible approaches. PMID:21746861
Simulation and Optimization of an Astrophotonic Reformatter
NASA Astrophysics Data System (ADS)
Anagnos, Th; Harris, R. J.; Corrigan, M. K.; Reeves, A. P.; Townson, M. J.; MacLachlan, D. G.; Thomson, R. R.; Morris, T. J.; Schwab, C.; Quirrenbach, A.
2018-05-01
Image slicing is a powerful technique in astronomy. It allows the instrument designer to reduce the slit width of the spectrograph, increasing spectral resolving power whilst retaining throughput. Conventionally this is done using bulk optics such as mirrors and prisms; more recently, however, astrophotonic components known as photonic lanterns (PLs) and photonic reformatters have also been used. These devices reformat the multimode (MM) input light from a telescope into single-mode (SM) outputs, which can then be re-arranged to suit the spectrograph. The photonic dicer (PD) is one such device, designed to reduce the dependence of spectrograph size on telescope aperture and eliminate modal noise. We simulate the PD, optimising its throughput and geometrical design using Soapy and BeamProp. The simulated device shows a transmission between 8 and 20%, depending upon the type of adaptive optics (AO) correction applied, matching the experimental results well. We also investigate our idealised model of the PD and show that the barycentre of the slit varies only slightly with time, meaning that the modal noise contribution is very low compared to conventional fibre systems. We further optimise our model device for both higher throughput and reduced modal noise. This device improves throughput by 6.4% and reduces the movement of the slit output by 50%, further improving stability. This shows the importance of properly simulating such devices, including atmospheric effects. Our work complements recent work in the field and is essential for optimising future photonic reformatters.
Savino, Maria; Seripa, Davide; Gallo, Antonietta P; Garrubba, Maria; D'Onofrio, Grazia; Bizzarro, Alessandra; Paroni, Giulia; Paris, Francesco; Mecocci, Patrizia; Masullo, Carlo; Pilotto, Alberto; Santini, Stefano A
2011-01-01
Recent studies investigating the single cytochrome P450 (CYP) 2D6 allele *2A reported an association with the response to drug treatments. More genetic data can be obtained, however, by high-throughput technologies. The aim of this study is the high-throughput analysis of CYP2D6 polymorphisms to evaluate its effectiveness in identifying patient responders/non-responders to CYP2D6-metabolized drugs. An attempt to compare our results with those previously obtained with the standard analysis of CYP2D6 allele *2A was also made. Sixty blood samples from patients treated with CYP2D6-metabolized drugs, previously genotyped for the allele CYP2D6*2A, were analyzed for CYP2D6 polymorphisms with the AutoGenomics INFINITI CYP4502D6-I assay on the AutoGenomics INFINITI analyzer. A higher frequency of mutated alleles was observed in responder than in non-responder patients (75.38% vs 43.48%; p = 0.015). Thus, the presence of a mutated allele of CYP2D6 was associated with a response to CYP2D6-metabolized drugs (OR = 4.044, 1.348-12.154). No difference was observed in the distribution of allele *2A (p = 0.320). The high-throughput genetic analysis of CYP2D6 polymorphisms better discriminates responders/non-responders than the standard analysis of the CYP2D6 allele *2A. A high-throughput genetic assay of CYP2D6 may be useful to identify patients with different clinical responses to CYP2D6-metabolized drugs.
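The reported association can be sanity-checked with the standard odds-ratio formula applied to the quoted allele frequencies. Note that recomputing from the rounded percentages gives roughly 3.98 rather than the paper's 4.044, which is presumably computed from the raw counts:

```python
def odds_ratio(p_cases, p_controls):
    """OR = [p1 / (1 - p1)] / [p2 / (1 - p2)]: the ratio of the odds of
    carrying a mutated allele in responders vs non-responders."""
    return (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))

# mutated-allele frequencies quoted in the abstract: 75.38% vs 43.48%
print(round(odds_ratio(0.7538, 0.4348), 2))  # 3.98
```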
High-throughput cultivation and screening platform for unicellular phototrophs.
Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus
2014-09-16
High-throughput cultivation and screening methods allow parallel, miniaturized and cost-efficient processing of many samples. These methods, however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration, MALDI-TOF-MS, as well as a novel vitality measurement protocol, has already been established and can be monitored during cultivation. Measurements of growth parameters can be used as inputs for the system to allow periodic automatic dilutions and therefore a semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid- and long-term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows for high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established, and the platform is easily expandable both in quality, i.e. with further parameters to screen for additional targets, and in quantity, i.e. size or number of processed samples.
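The growth-triggered dilution that keeps hundreds of cultures in semi-continuous mode reduces to a simple rule: when the measured optical density exceeds the target, keep only a fraction of the culture and top the well back up with fresh medium. The function below is an illustrative model (it assumes OD scales linearly with dilution; the volumes are hypothetical, not the platform's actual control script):

```python
def dilution_plan(od_measured, od_target, well_volume_ml):
    """Semi-continuous dilution step: keep a fraction od_target/od_measured
    of the culture and refill the well with fresh medium, so the next
    growth cycle starts from the target optical density."""
    keep_fraction = min(1.0, od_target / od_measured)
    keep_ml = well_volume_ml * keep_fraction
    return keep_ml, well_volume_ml - keep_ml  # (culture kept, medium added)

# a 2.0 ml culture at OD 0.8 diluted back to OD 0.2
keep, fresh = dilution_plan(0.8, 0.2, 2.0)
print(keep, fresh)  # 0.5 1.5
```

A robot scheduler would evaluate such a rule per well after each growth measurement, which is how a single pipetting arm can sustain hundreds of parallel semi-continuous cultures.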
QoS support for end users of I/O-intensive applications using shared storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Marion Kei; Zhang, Xuechen; Jiang, Song
2011-01-19
I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems, because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to a meaningful performance guarantee such as bounded program execution time. We propose a scheme supporting end-users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.
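As a toy illustration of the goal translation: if runtime decomposed exactly into compute time plus I/O volume over throughput, an execution-time goal would map to a throughput bound in closed form. The paper learns this mapping with a machine-learning model precisely because real workloads do not follow such a simple formula; the function below is a hypothetical stand-in, not the authors' method.

```python
def required_throughput(io_bytes_total, target_runtime_s, compute_time_s):
    """Translate an end-user execution-time goal into an instantaneous I/O
    throughput bound, assuming (simplistically) that
        runtime = compute_time + io_bytes / throughput."""
    io_budget = target_runtime_s - compute_time_s  # seconds left for I/O
    if io_budget <= 0:
        raise ValueError("goal unreachable: compute time alone exceeds it")
    return io_bytes_total / io_budget  # bytes per second

# 8 GiB of I/O, a 100 s runtime goal, 20 s of compute
bound = required_throughput(8 * 2**30, 100.0, 20.0)
print(round(bound / 2**20, 1), "MiB/s")  # 102.4 MiB/s
```

The dynamically determined service windows then amount to enforcing such a bound over short intervals and re-deriving it as the workload drifts.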
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance and therefore they are a part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high order modulation systems, and therefore low complexity demapping algorithms are indispensable in low power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is to achieve a very high throughput in doubly iterative systems, for instance, MIMO and Code-Aided Synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low complexity soft-output demapping algorithms to select the best algorithm for implementation. The main goal of this work is to design a high throughput, flexible, and area efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs and 2 DSP48Es.
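The reference computation that every low-complexity soft demapper approximates is the max-log LLR: per bit, a nearest-symbol search over the 16 constellation points. The sketch below implements that exact brute-force reference; the Gray labelling is one common convention, assumed here, and the paper's low-complexity algorithms replace this search with cheaper piecewise expressions.

```python
import itertools, math

# Gray-mapped 16-QAM: 2 Gray-coded bits per axis onto amplitudes {-3,-1,1,3}/sqrt(10)
GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
NORM = math.sqrt(10)  # unit average symbol energy

CONSTELLATION = []    # list of (complex symbol, (b0, b1, b2, b3))
for bits in itertools.product((0, 1), repeat=4):
    i = GRAY2[bits[0:2]] / NORM
    q = GRAY2[bits[2:4]] / NORM
    CONSTELLATION.append((complex(i, q), bits))

def maxlog_llr(y, noise_var):
    """Exact max-log LLRs for one received symbol y: per bit,
    LLR = (min dist^2 over bit=1 symbols - min over bit=0) / noise_var,
    so a positive LLR favours bit 0."""
    llrs = []
    for k in range(4):
        d0 = min(abs(y - s) ** 2 for s, b in CONSTELLATION if b[k] == 0)
        d1 = min(abs(y - s) ** 2 for s, b in CONSTELLATION if b[k] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# noiseless corner symbol (bits 1,0,1,0): strong negative/positive LLRs
llrs = maxlog_llr(complex(3, 3) / NORM, 0.1)
print([round(v, 2) for v in llrs])  # [-16.0, 4.0, -16.0, 4.0]
```

For Gray-mapped square QAM the I and Q searches decouple, which is exactly the structure the low-complexity algorithms exploit to avoid the 16-point loop entirely.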
Duan, Yongbo; Zhai, Chenguang; Li, Hao; Li, Juan; Mei, Wenqian; Gui, Huaping; Ni, Dahu; Song, Fengshun; Li, Li; Zhang, Wanggen; Yang, Jianbo
2012-09-01
A number of Agrobacterium-mediated rice transformation systems have been developed and are widely used in numerous laboratories and research institutes. However, those systems generally employ antibiotics like kanamycin and hygromycin, or herbicide, as selectable agents, and are used for small-scale experiments. To address high-throughput production of transgenic rice plants via Agrobacterium-mediated transformation, and to eliminate public concern over antibiotic markers, we developed a comprehensive, efficient protocol, covering explant preparation through the acquisition of low-copy events by real-time PCR analysis before transplanting to the field, for high-throughput production of transgenic plants of the Japonica rice varieties Wanjing97 and Nipponbare using the Escherichia coli phosphomannose isomerase gene (pmi) as a selectable marker. Transformation frequencies (TF) as high as 54.8% for Wanjing97 and 47.5% for Nipponbare were achieved in one round of selection on 7.5 or 12.5 g/L mannose supplemented with 5 g/L sucrose. High-throughput transformation from inoculation to transplanting of low-copy events was accomplished within 55-60 days. Moreover, Taqman assay data from a large number of transformants showed low-copy rates of 45.2% in Wanjing97 and 31.5% in Nipponbare, and the transformants are fertile and follow the Mendelian segregation ratio. This protocol enables genome-wide functional annotation of open reading frames and utilization of agronomically important genes in rice with reduced public concern over selectable markers. We describe a comprehensive protocol for large-scale production of transgenic Japonica rice plants using a non-antibiotic selectable agent in a simplified, cost- and labor-saving manner.
High-speed Fourier ptychographic microscopy based on programmable annular illuminations.
Sun, Jiasong; Zuo, Chao; Zhang, Jialin; Fan, Yao; Chen, Qian
2018-05-16
High-throughput quantitative phase imaging (QPI) is essential to cellular phenotype characterization as it allows high-content cell analysis and avoids adverse effects of staining reagents on cellular viability and cell signaling. Among different approaches, Fourier ptychographic microscopy (FPM) is probably the most promising technique to realize high-throughput QPI by synthesizing a wide-field, high-resolution complex image from multiple angle-variably illuminated, low-resolution images. However, the large dataset requirement in conventional FPM significantly limits its imaging speed, resulting in low temporal throughput. Moreover, the underlying theoretical mechanism as well as the optimum illumination scheme for high-accuracy phase imaging in FPM remains unclear. Herein, we report a high-speed FPM technique based on programmable annular illuminations (AIFPM). The optical-transfer-function (OTF) analysis of FPM reveals that the low-frequency phase information can only be correctly recovered if the LEDs are precisely located at the edge of the objective numerical aperture (NA) in the frequency space. By using only 4 low-resolution images corresponding to 4 tilted illuminations matching a 10×, 0.4 NA objective, we present high-speed imaging results of in vitro HeLa cell mitosis and apoptosis at a frame rate of 25 Hz with a full-pitch resolution of 655 nm at a wavelength of 525 nm (effective NA = 0.8) across a wide field-of-view (FOV) of 1.77 mm², corresponding to a space-bandwidth-time product of 411 megapixels per second. Our work reveals an important capability of FPM towards high-speed, high-throughput imaging of in vitro live cells, achieving video-rate QPI performance across a wide range of scales, both spatial and temporal.
TreeMAC: Localized TDMA MAC protocol for real-time high-data-rate sensor networks
Song, W.-Z.; Huang, R.; Shirazi, B.; LaHusen, R.
2009-01-01
Earlier sensor network MAC protocols focus on energy conservation in low-duty-cycle applications, while some recent applications involve real-time high-data-rate signals. This motivates us to design an innovative localized TDMA MAC protocol to achieve high throughput and low congestion in data collection sensor networks, besides energy conservation. TreeMAC divides a time cycle into frames and each frame into slots. A parent node determines the children's frame assignment based on their relative bandwidth demand, and each node calculates its own slot assignment based on its hop-count to the sink. This innovative 2-dimensional frame-slot assignment algorithm has the following desirable theoretical properties. First, given any node, at any time slot, there is at most one active sender in its neighborhood (including itself). Second, packet scheduling with TreeMAC is bufferless, which therefore minimizes the probability of network congestion. Third, the data throughput to the gateway is at least 1/3 of the optimum assuming reliable links. Our experiments on a 24-node testbed show that the TreeMAC protocol significantly improves network throughput, fairness, and energy efficiency compared to TinyOS's default CSMA MAC protocol and a recent TDMA MAC protocol, Funneling-MAC. Partial results of this paper were published in Song, Huang, Shirazi and LaHusen [W.-Z. Song, R. Huang, B. Shirazi, and R. LaHusen, TreeMAC: Localized TDMA MAC protocol for high-throughput and fairness in sensor networks, in: The 7th Annual IEEE International Conference on Pervasive Computing and Communications, PerCom, March 2009]. Our new contributions include analyses of the performance of TreeMAC from various aspects, more implementation detail, and further evaluation. © 2009 Elsevier B.V.
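TreeMAC's two-dimensional assignment can be sketched in a few lines: a parent splits its frames among children in proportion to their bandwidth demands, and each node's slot within a frame follows from its hop-count to the sink. The code below is an illustrative reading of that scheme; the largest-remainder rounding and the 3-slot frame are assumptions of this sketch, not necessarily the paper's exact procedure.

```python
def assign_frames(frames, demands):
    """Split a parent's frames among its children in proportion to their
    bandwidth demands, using largest-remainder rounding so every frame
    is allocated (first axis of the 2-D frame/slot assignment)."""
    total = sum(demands.values())
    shares = {c: frames * d / total for c, d in demands.items()}
    alloc = {c: int(s) for c, s in shares.items()}
    leftover = frames - sum(alloc.values())
    # hand remaining frames to the children with the largest remainders
    for c in sorted(shares, key=lambda c: shares[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

def slot_index(hop_count, slots_per_frame=3):
    """Second axis: a node transmits in the slot given by its hop-count to
    the sink modulo the slots per frame, which is what yields at most one
    active sender in any neighborhood."""
    return hop_count % slots_per_frame

print(assign_frames(10, {'A': 3, 'B': 1, 'C': 1}))  # {'A': 6, 'B': 2, 'C': 2}
print(slot_index(4))  # 1
```

Because both computations are purely local (a parent needs only its children's demands, a node only its own hop-count), no global schedule needs to be disseminated, which is the sense in which TreeMAC is "localized".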
Mass Transfer Testing of a 12.5-cm Rotor Centrifugal Contactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. H. Meikrantz; T. G. Garn; J. D. Law
2008-09-01
TRUEX mass transfer tests were performed using a single-stage, commercially available 12.5 cm centrifugal contactor and stable cerium (Ce) and europium (Eu). Test conditions included throughputs ranging from 2.5 to 15 Lpm and rotor speeds of 1750 and 2250 rpm. Ce and Eu extraction forward distribution coefficients ranged from 13 to 19. The first and second stage strip back-distributions were 0.5 to 1.4 and 0.002 to 0.004, respectively, throughout the dynamic test conditions studied. Visual carryover of aqueous entrainment in all organic phase samples was estimated at < 0.1%, and organic carryover into all aqueous phase samples was about ten times less. Mass transfer efficiencies of ≥ 98% for both Ce and Eu in the extraction section were obtained over the entire range of test conditions. The first strip stage mass transfer efficiencies ranged from 75 to 93%, trending higher with increasing throughput. Second stage mass transfer was greater than 99% in all cases. Increasing the rotor speed from 1750 to 2250 rpm had no significant effect on efficiency for all throughputs tested.
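The two figures of merit in these tests, the distribution coefficient and the stage mass-transfer efficiency, reduce to simple ratios. A minimal sketch (a Murphree-style efficiency definition is assumed here, and the concentrations are illustrative values, not the paper's data):

```python
def distribution_coefficient(c_org, c_aq):
    """D = solute concentration in the organic phase divided by its
    concentration in the aqueous phase at steady state."""
    return c_org / c_aq

def stage_efficiency(c_in, c_out, c_eq):
    """Murphree-style stage efficiency: the fraction of the approach to
    equilibrium actually achieved, E = (c_in - c_out) / (c_in - c_eq)."""
    return (c_in - c_out) / (c_in - c_eq)

# a forward distribution of 16, mid-range of the reported 13-19
print(distribution_coefficient(16.0, 1.0))           # 16.0
# a stage that carries the feed 98% of the way to equilibrium
print(round(stage_efficiency(100.0, 2.98, 1.0), 2))  # 0.98
```

In a multistage cascade these per-stage numbers compound, which is why even the 75-93% first-strip efficiencies above still yield near-complete recovery after the > 99% second stage.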
Liu, Chao; Xue, Chundong; Chen, Xiaodong; Shan, Lei; Tian, Yu; Hu, Guoqing
2015-06-16
Viscoelasticity-induced particle migration has recently received increasing attention due to its ability to provide high-quality focusing over a wide range of flow rates. However, its application has been limited to the low-throughput regime, since particles can defocus as the flow rate increases. Using an engineered carrier medium with constant, low viscosity and strong elasticity, we improve sample flow rates to an order of magnitude higher than those in existing studies. Utilizing differential focusing of particles of different sizes, we present here sheathless particle/cell separation in simple straight microchannels that possess excellent parallelizability for further throughput enhancement. The present method can be implemented over a wide range of particle/cell sizes and flow rates. We successfully separate small particles from larger particles, MCF-7 cells from red blood cells (RBCs), and Escherichia coli (E. coli) bacteria from RBCs in different straight microchannels. The proposed method could broaden the applications of viscoelastic microfluidic devices to particle/cell separation thanks to the enhanced sample throughput and simple channel design.
Pfannkoch, Edward A; Stuff, John R; Whitecavage, Jacqueline A; Blevins, John M; Seely, Kathryn A; Moran, Jeffery H
2015-01-01
National Oceanic and Atmospheric Administration (NOAA) Method NMFS-NWFSC-59 2004 is currently used to quantitatively analyze seafood for polycyclic aromatic hydrocarbon (PAH) contamination, especially following events such as the Deepwater Horizon oil rig explosion that released millions of barrels of crude oil into the Gulf of Mexico. This method has limited throughput capacity; hence, alternative methods are necessary to meet analytical demands after such events. Stir bar sorptive extraction (SBSE) is an effective technique to extract trace PAHs in water, and the quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction strategy effectively extracts PAHs from complex food matrices. This study uses SBSE to concentrate PAHs and eliminate matrix interference from QuEChERS extracts of seafood, specifically oysters, fish, and shrimp. The method provides acceptable recovery (65-138%) and linear calibrations and is sensitive (LOD = 0.02 ppb, LOQ = 0.06 ppb) while providing higher throughput and maintaining equivalency with the NOAA 2004 method, as determined by analysis of NIST SRM 1974b mussel tissue.
Use of high-throughput mass spectrometry to elucidate host pathogen interactions in Salmonella
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodland, Karin D.; Adkins, Joshua N.; Ansong, Charles
Capabilities in mass spectrometry are evolving rapidly, with recent improvements in sensitivity, data analysis, and, most important from the standpoint of this review, much higher throughput, allowing analysis of many samples in a single day. This short review describes how these improvements in mass spectrometry can be used to dissect host-pathogen interactions using Salmonella as a model system. This approach enabled direct identification of the majority of annotated Salmonella proteins, quantitation of expression changes under various in vitro growth conditions, and new insights into virulence and expression of Salmonella proteins within host cells. One of the most significant findings is that a very high percentage of all annotated genes (>20%) in Salmonella are regulated post-transcriptionally. In addition, new and unexpected interactions have been identified for several Salmonella virulence regulators that involve protein-protein interactions, suggesting additional functions of these regulators in coordinating virulence expression. Overall, high-throughput mass spectrometry provides a new view of pathogen-host interactions, emphasizing the protein products and defining how protein interactions determine the outcome of infection.
Architecture Studies Done for High-Rate Duplex Direct Data Distribution (D4) Services
NASA Technical Reports Server (NTRS)
2002-01-01
A study was sponsored to investigate a set of end-to-end system concepts for implementing a high-rate duplex direct data distribution (D4) space-to-ground communications link. The NASA Glenn Research Center is investigating these systems (both commercial and Government) as a possible method of providing a D4 communications service between NASA spacecraft in low Earth orbit and the respective principal investigators using or monitoring instruments aboard these spacecraft. Candidate commercial services were assessed regarding their near-term potential to provide a D4 type of service. The candidates included K-band and V-band geostationary orbit and nongeostationary orbit satellite relay services and direct downlink (D3) services. Internet protocol (IP) networking technologies were evaluated to enable the user-directed distribution and delivery of science data. Four realistic, near-future concepts were analyzed: 1) A duplex direct link (uplink plus downlink communication paths) between a low-Earth-orbit spacecraft and a principal-investigator-based autonomous Earth station; 2) A space-based relay using a future K-band nongeosynchronous-orbit system to handle both the uplink and downlink communication paths; 3) A hybrid link using both direct and relay services to achieve full duplex capability; 4) A dual-mode concept consisting of both a duplex direct link and a space relay duplex link operating independently. The concepts were analyzed in terms of contact time between the NASA spacecraft and the communications service and the achievable data throughput. Throughput estimates for the D4 systems were based on the infusion of advanced communications technology products (single and multibeam K-band phased-arrays and digital modems) being developed by Glenn. Cost estimates were also performed using extrapolated information from both terrestrial and current satellite communications providers. The throughput and cost estimates were used to compare the concepts.
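The contact-time-times-data-rate analysis described above amounts to a simple product. The sketch below is a hypothetical back-of-envelope version of that calculation; the pass counts, pass durations, and link rate are made-up placeholders, not figures from the study.

```python
# Hypothetical back-of-envelope of the contact-time x data-rate analysis:
# the daily data volume a D4-type link could return is the total contact
# time per day multiplied by the achievable link rate. All numbers here
# are illustrative placeholders, not values from the study.

def daily_volume_gbits(passes_per_day, contact_s_per_pass, link_rate_mbps):
    """Data returned per day (Gbits) = total contact time x link rate."""
    total_contact_s = passes_per_day * contact_s_per_pass
    return total_contact_s * link_rate_mbps / 1000.0

# e.g. 6 ground-station passes of 8 minutes each at a 622 Mbps downlink
print(daily_volume_gbits(6, 8 * 60, 622.0))  # Gbits per day
```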
The development of 8 inch roll-to-plate nanoimprint lithography (8-R2P-NIL) system
NASA Astrophysics Data System (ADS)
Lee, Lai Seng; Mohamed, Khairudin; Ooi, Su Guan
2017-07-01
The semiconductor and integrated circuit industry has grown over the past decade, following Moore's law. The line width of the nanostructures to be exposed is governed by the underlying photolithography technology, so a low-cost, high-throughput manufacturing process for nanostructures is crucial. The nanoimprint lithography technique invented by Stephen Y. Chou is considered a major nanolithography process for future integrated circuits and integrated optics. The drawbacks of thermal nanoimprint lithography on silicon wafers, namely high imprint pressure, high imprint temperature, air bubble formation, resist sticking to the mold, and low throughput, have yet to be solved. The objective of this work is therefore to develop a high-throughput, low-imprint-force, room-temperature, UV-assisted 8 inch roll-to-plate nanoimprint lithography system capable of imprinting nanostructures on 200 mm silicon wafers using roller imprint with a flexible mold. A resist-spin-coated silicon wafer is placed onto a vacuum chuck driven forward by a stepper motor. A quartz roller wrapped with a transparent flexible mold is used as the imprint roller. The imprinted nanostructures are cured by a 10 W, 365 nm UV LED situated inside the quartz roller; heat generated by the UV LED is dissipated by a micro heat pipe. The flexible mold detaches from the imprinted nanostructures in a 'line peeling' pattern, and the imprint pressure is measured by ultra-thin force sensors. The system has an imprinting speed capability ranging from 0.19 mm/s to 5.65 mm/s, equivalent to an imprinting capacity of 3 to 20 pieces of 8 inch wafers per hour. Speed synchronization between the imprint roller and the vacuum chuck is achieved by controlling the pulse rate supplied to the stepper motor that drives the vacuum chuck; a speed difference ranging from 2 nm/s to 98 nm/s is achievable. Vacuum chuck height is controlled by a stepper motor with a displacement of 5 nm/step.
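The quoted speed range (0.19 to 5.65 mm/s) and throughput range (3 to 20 wafers/hour) can be sanity-checked with a simple model: travel time across a 200 mm wafer plus a fixed per-wafer handling overhead. The 145 s overhead below is a fitted guess chosen to reproduce both endpoints, not a figure from the paper.

```python
# Back-of-envelope check of the reported roll-to-plate imprint throughput.
# Assumption: each 200 mm wafer takes (200 mm / imprint speed) of roller
# travel plus a fixed handling overhead. The 145 s overhead is a fitted
# guess, not a measured value from the paper.

WAFER_MM = 200.0
OVERHEAD_S = 145.0  # hypothetical load/align/cure time per wafer

def wafers_per_hour(speed_mm_s):
    seconds_per_wafer = WAFER_MM / speed_mm_s + OVERHEAD_S
    return 3600.0 / seconds_per_wafer

print(round(wafers_per_hour(0.19)))  # ~3 wafers/hour at the slowest speed
print(round(wafers_per_hour(5.65)))  # ~20 wafers/hour at the fastest speed
```

That a single fixed overhead reproduces both ends of the reported range suggests per-wafer handling, rather than roller travel, dominates the cycle time at high imprint speeds.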
Polystyrene negative resist for high-resolution electron beam lithography
2011-01-01
We studied the exposure behavior of low molecular weight polystyrene as a negative tone electron beam lithography (EBL) resist, with the goal of finding the ultimate achievable resolution. It demonstrated fairly well-defined patterning of a 20-nm period line array and a 15-nm period dot array, which are the densest patterns ever achieved using organic EBL resists. Such dense patterns can be achieved both at 20 and 5 keV beam energies using different developers. In addition to its ultra-high resolution capability, polystyrene is a simple and low-cost resist with easy process control and practically unlimited shelf life. It is also considerably more resistant to dry etching than PMMA. With a low sensitivity, it would find applications where negative resist is desired and throughput is not a major concern. PMID:21749679
Mission Advantages of NEXT: NASA's Evolutionary Xenon Thruster
NASA Technical Reports Server (NTRS)
Oleson, Steven; Gefert, Leon; Benson, Scott; Patterson, Michael; Noca, Muriel; Sims, Jon
2002-01-01
With the demonstration of the NSTAR propulsion system on the Deep Space One mission, the range of the Discovery class of NASA missions can now be expanded. NSTAR lacks, however, sufficient performance for many of the more challenging Office of Space Science (OSS) missions. Recent studies have shown that NASA's Evolutionary Xenon Thruster (NEXT) ion propulsion system is the best choice for many exciting potential OSS missions including outer planet exploration and inner solar system sample returns. The NEXT system provides the higher power, higher specific impulse, and higher throughput required by these science missions.
Air Traffic Management Technology Demonstration-1 Concept of Operations (ATD-1 ConOps)
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Johnson, William C.; Swenson, Harry; Robinson, John E.; Prevot, Thomas; Callantine, Todd; Scardina, John; Greene, Michael
2012-01-01
The operational goal of the ATD-1 ConOps is to enable aircraft, using their onboard FMS capabilities, to fly Optimized Profile Descents (OPDs) from cruise to the runway threshold at a high-density airport, at a high throughput rate, using primarily speed control to maintain in-trail separation and the arrival schedule. The three technologies in the ATD-1 ConOps achieve this by calculating a precise arrival schedule, using controller decision support tools to provide terminal controllers with speeds for aircraft to fly to meet times at particular meter points, and using onboard software to provide flight crews with speeds to achieve a particular spacing behind preceding aircraft.
Task allocation among multiple intelligent robots
NASA Technical Reports Server (NTRS)
Gasser, L.; Bekey, G.
1987-01-01
Researchers describe the design of a decentralized mechanism for allocating assembly tasks in a multiple-robot assembly workstation. Currently, the approach focuses on distributed allocation to explore its feasibility and its potential for adaptability to changing circumstances, rather than on optimizing throughput. Individual greedy robots make their own local allocation decisions using both dynamic allocation policies, which propagate through a network of allocation goals, and local static and dynamic constraints describing which robots are eligible for which assembly tasks. Global coherence is achieved by proper weighting of allocation pressures propagating through the assembly plan. Deadlock avoidance and synchronization are achieved using periodic reassessments of local allocation decisions, ageing of allocation goals, and short-term allocation locks on goals.
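The greedy, eligibility-constrained claiming described above can be sketched as follows. This is an illustrative simplification: the task names, the flat "pressure" scores, and the shared `taken` set stand in for the paper's propagated allocation pressures and locks, and are assumptions rather than the actual mechanism.

```python
# Minimal sketch of greedy, decentralized task claiming under eligibility
# constraints. Task names, pressure values, and the shared claim set are
# illustrative assumptions, not the paper's actual mechanism.

def claim_task(robot, tasks, eligible, pressure, taken):
    """Each robot locally picks the unclaimed, eligible task with the
    highest allocation pressure; returns None if nothing qualifies."""
    candidates = [t for t in tasks if t in eligible[robot] and t not in taken]
    if not candidates:
        return None
    choice = max(candidates, key=lambda t: pressure[t])
    taken.add(choice)  # short-term lock so no other robot grabs it
    return choice

tasks = ["insert_bolt", "fit_panel", "weld_seam"]
eligible = {"r1": {"insert_bolt", "weld_seam"}, "r2": {"fit_panel", "weld_seam"}}
pressure = {"insert_bolt": 0.4, "fit_panel": 0.9, "weld_seam": 0.7}
taken = set()
print(claim_task("r1", tasks, eligible, pressure, taken))  # weld_seam
print(claim_task("r2", tasks, eligible, pressure, taken))  # fit_panel
```

In the full mechanism the pressures would be reweighted and claims periodically reassessed, which is what provides the global coherence and deadlock avoidance the abstract mentions.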
NASA Technical Reports Server (NTRS)
Kot, R. A.; Oliver, J. D.; Wilson, S. G.
1984-01-01
A monolithic, GaAs, dual-mode quadrature amplitude shift keying and quadrature phase shift keying transceiver with data rates of one and two billion bits per second is being considered to achieve a low-power, small, and ultra-high-speed communication system for satellite as well as terrestrial purposes. Recent GaAs integrated circuit achievements are surveyed and their constituent device types are evaluated. Design considerations, at an elemental level, for the entire modem are also included for monolithic realization with practical fabrication techniques. Numerous device types with practical monolithic compatibility are used in the design of functional blocks with sufficient performance for realization of the transceiver.