Equalizer: a scalable parallel rendering framework.
Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato
2009-01-01
Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios, and scalability results.
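One idea such frameworks build on, sort-first (screen-space) task decomposition, can be sketched in a few lines. The Python below is purely illustrative: Equalizer itself is a C++ API, and all names here are hypothetical.

```python
# Illustrative sketch only: split a viewport into horizontal bands, one per
# render node, as in sort-first parallel rendering. Hypothetical names.

def sort_first_tiles(width, height, n_nodes):
    """Split a width x height viewport into n_nodes horizontal bands."""
    base, extra = divmod(height, n_nodes)
    tiles, y = [], 0
    for i in range(n_nodes):
        h = base + (1 if i < extra else 0)  # distribute remainder rows
        tiles.append({"node": i, "x": 0, "y": y, "w": width, "h": h})
        y += h
    return tiles

tiles = sort_first_tiles(1920, 1080, 4)
assert sum(t["h"] for t in tiles) == 1080  # bands cover the full viewport
```

A real framework would additionally load-balance band heights per frame based on measured render times.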
Thin Polymer Films with Continuous Vertically Aligned 1 nm Pores Fabricated by Soft Confinement
Feng, Xunda; Nejati, Siamak; Cowan, Matthew G.; ...
2015-12-03
Membrane separations are critically important in areas ranging from health care and analytical chemistry to bioprocessing and water purification. An ideal nanoporous membrane would consist of a thin film with physically continuous and vertically aligned nanopores and would display a narrow distribution of pore sizes. However, the current state of the art departs considerably from this ideal and is beset by intrinsic trade-offs between permeability and selectivity. We demonstrate an effective and scalable method to fabricate polymer films with ideal membrane morphologies consisting of submicron thickness films with physically continuous and vertically aligned 1 nm pores. The approach is based on soft confinement to control the orientation of a cross-linkable mesophase in which the pores are produced by self-assembly. The scalability, exceptional ease of fabrication, and potential to create a new class of nanofiltration membranes stand out as compelling aspects.
NASA Astrophysics Data System (ADS)
Dunne, Peter W.; Starkey, Chris L.; Gimeno-Fabra, Miquel; Lester, Edward H.
2014-01-01
Continuous flow hydrothermal synthesis offers a cheap, green and highly scalable route for the preparation of inorganic nanomaterials which has predominantly been applied to metal oxide based materials. In this work we report the first continuous flow hydrothermal synthesis of metal sulphide nanomaterials. A wide range of binary metal sulphides, ZnS, CdS, PbS, CuS, Fe(1-x)S and Bi2S3, have been synthesised. By varying the reaction conditions two different mechanisms may be invoked; a growth dominated route which permits the formation of nanostructured sulphide materials, and a nucleation driven process which produces nanoparticles with temperature dependent size control. This offers a new and industrially viable route to a wide range of metal sulphide nanoparticles with facile size and shape control. Electronic supplementary information (ESI) available: Experimental details, refinement procedure, fluorescence spectra of ZnS samples. See DOI: 10.1039/c3nr05749f
Liang, Li; Oline, Stefan N; Kirk, Justin C; Schmitt, Lukas Ian; Komorowski, Robert W; Remondes, Miguel; Halassa, Michael M
2017-01-01
Independently adjustable multielectrode arrays are routinely used to interrogate neuronal circuit function, enabling chronic in vivo monitoring of neuronal ensembles in freely behaving animals at single-cell, single-spike resolution. Despite the importance of this approach, its widespread use is limited by highly specialized design and fabrication methods. To address this, we have developed a Scalable, Lightweight, Integrated and Quick-to-assemble multielectrode array platform. This platform additionally integrates optical fibers with independently adjustable electrodes to allow simultaneous single unit recordings and circuit-specific optogenetic targeting and/or manipulation. In current designs, the fully assembled platforms are scalable from 2 to 32 microdrives, yet weigh only 1-3 g, light enough for small animals. Here, we describe the design process starting from intent in computer-aided design, parameter testing through finite element analysis and experimental means, and implementation of various applications across mice and rats. Combined, our methods may expand the utility of multielectrode recordings and their continued integration with other tools enabling functional dissection of intact neural circuits.
Versatile, High Quality and Scalable Continuous Flow Production of Metal-Organic Frameworks
Rubio-Martinez, Marta; Batten, Michael P.; Polyzos, Anastasios; Carey, Keri-Constanti; Mardel, James I.; Lim, Kok-Seng; Hill, Matthew R.
2014-01-01
Further deployment of Metal-Organic Frameworks (MOFs) in applied settings requires their ready preparation at scale. Expansion of typical batch processes can lead to unsuccessful or low-quality synthesis for some systems. Here we report how continuous flow chemistry can be adapted as a versatile route to a range of MOFs, by emulating the conditions of lab-scale batch synthesis. This delivers ready synthesis of three different MOFs, with surface areas that closely match theoretical maxima, at production rates of 60 g/h and extremely high space-time yields.
Continuous Production of Discrete Plasmid DNA-Polycation Nanoparticles Using Flash Nanocomplexation.
Santos, Jose Luis; Ren, Yong; Vandermark, John; Archang, Maani M; Williford, John-Michael; Liu, Heng-Wen; Lee, Jason; Wang, Tza-Huei; Mao, Hai-Quan
2016-12-01
Despite successful demonstration of linear polyethyleneimine (lPEI) as an effective carrier for a wide range of gene medicines, including DNA plasmids, small interfering RNAs, mRNAs, etc., and continuous improvement of the physical properties and biological performance of the polyelectrolyte complex nanoparticles prepared from lPEI and nucleic acids, there still exist major challenges in producing these nanocomplexes in a scalable manner, particularly for lPEI/DNA nanoparticles. This has significantly hindered progress toward clinical translation of these nanoparticle-based gene medicines. Here the authors report a flash nanocomplexation (FNC) method that achieves continuous production of lPEI/plasmid DNA nanoparticles with narrow size distribution using a confined impinging jet device. The method involves the complex coacervation of negatively charged DNA plasmid and positively charged lPEI under rapid, highly dynamic, and homogeneous mixing conditions, producing polyelectrolyte complex nanoparticles with narrow distributions of particle size and shape. The average number of plasmid DNA copies packaged per nanoparticle and its distribution are similar between the FNC method and the small-scale batch mixing method. In addition, the nanoparticles prepared by these two methods exhibit similar cell transfection efficiency. These results confirm that FNC is an effective and scalable method that can produce well-controlled lPEI/plasmid DNA nanoparticles.
Antibody Production in Plants and Green Algae.
Yusibov, Vidadi; Kushnir, Natasha; Streatfield, Stephen J
2016-04-29
Monoclonal antibodies (mAbs) have a wide range of modern applications, including research, diagnostic, therapeutic, and industrial uses. Market demand for mAbs is high and continues to grow. Although mammalian systems, which currently dominate the biomanufacturing industry, produce effective and safe recombinant mAbs, they have a limited manufacturing capacity and high costs. Bacteria, yeast, and insect cell systems are highly scalable and cost effective but vary in their ability to produce appropriate posttranslationally modified mAbs. Plants and green algae are emerging as promising production platforms because of their time and cost efficiencies, scalability, lack of mammalian pathogens, and eukaryotic posttranslational protein modification machinery. So far, plant- and algae-derived mAbs have been produced predominantly as candidate therapeutics for infectious diseases and cancer. These candidates have been extensively evaluated in animal models, and some have shown efficacy in clinical trials. Here, we review ongoing efforts to advance the production of mAbs in plants and algae.
Coordinated Transformation among Community Colleges Lacking a State System
ERIC Educational Resources Information Center
Russell, James Thad
2016-01-01
Community colleges face many challenges in the face of demands for increased student success. Institutions continually seek scalable interventions and initiatives focused on improving student achievement. Effectively implementing sustainable change that moves the needle of student success remains elusive. Facilitating systemic, scalable change…
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
NASA Astrophysics Data System (ADS)
Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.
2014-06-01
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
Pelletier, Nathan; Klinger, Dane H; Sims, Neil A; Yoshioka, Janice-Renee; Kittinger, John N
2018-05-15
Aquaculture is anticipated to play an increasingly important role in global food security because it may represent one of the best opportunities to increase the availability of healthy animal protein in the context of resource and environmental constraints. However, the growth and sustainability of the aquaculture industry faces important bottlenecks with respect to feed resources, which may be derived from diverse sources. Here, using a small but representative subset of potential aquafeed inputs (which we selected to highlight a range of relevant attributes), we review a core suite of considerations that need to be accommodated in concert in order to overcome key bottlenecks to the continued development and expansion of the aquaculture industry. Specifically, we evaluate the nutritional attributes, substitutability, scalability, and resource and environmental intensity of each input. On this basis, we illustrate a range of potential synergies and trade-offs within and across attributes that are characteristic of ingredient types. We posit that the recognition and management of such synergies and trade-offs is imperative to satisfying the multi-objective decision-making associated with sustainable increases in future aquaculture production.
The validity and scalability of the Theory of Mind Scale with toddlers and preschoolers.
Hiller, Rachel M; Weber, Nathan; Young, Robyn L
2014-12-01
Despite the importance of theory of mind (ToM) for typical development, there remain 2 key issues affecting our ability to draw robust conclusions. One is the continued focus on false belief as the sole measure of ToM. The second is the lack of empirically validated measures of ToM as a broad construct. Our key aim was to examine the validity and reliability of the 5-item ToM scale (Peterson, Wellman, & Liu, 2005). In particular, we extended previous research on this scale by assessing its scalability and validity for use with children from 2 years of age. Sixty-eight typically developing children (aged 24 to 61 months) were assessed on the scale's 5 tasks, along with a sixth Sally-Anne false-belief task. Our data replicated the scalability of the 5 tasks for a Rasch, but not a Guttman, scale. Guttman analysis showed that a 4-item scale may be more suitable for this age range. Further, the tasks showed good internal consistency and validity for use with children as young as 2 years of age. Overall, the measure provides a valid and reliable tool for the assessment of ToM and, in particular, the longitudinal assessment of this ability as a construct.
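The Guttman scalability analysis mentioned in this abstract can be illustrated with a short sketch; the pass/fail patterns below are invented toy data, not the study's responses.

```python
# Hedged sketch: Guttman's coefficient of reproducibility on toy data.
# Items are ordered easiest -> hardest (as in the 5-item ToM scale).

def guttman_reproducibility(responses):
    """responses: list of per-child item vectors (1 = pass), items ordered
    from easiest to hardest. Returns the coefficient of reproducibility."""
    n_items = len(responses[0])
    errors = 0
    for r in responses:
        s = sum(r)
        ideal = [1] * s + [0] * (n_items - s)  # perfect cumulative pattern
        errors += sum(a != b for a, b in zip(r, ideal))
    return 1 - errors / (len(responses) * n_items)

data = [[1, 1, 1, 1, 0],   # invented response patterns
        [1, 1, 1, 0, 0],
        [1, 1, 0, 1, 0],   # one non-cumulative (error) pattern
        [1, 0, 0, 0, 0]]
rep = guttman_reproducibility(data)
```

A conventional rule of thumb requires Rep ≥ 0.90 for an acceptable Guttman scale; Rasch analysis instead fits a probabilistic item-difficulty model and is more tolerant of isolated reversals.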
Seedless Growth of Bismuth Nanowire Array via Vacuum Thermal Evaporation
Liu, Mingzhao; Nam, Chang-Yong; Zhang, Lihua
2015-01-01
Here a seedless and template-free technique is demonstrated to scalably grow bismuth nanowires through thermal evaporation in high vacuum at room temperature (RT). Conventionally reserved for the fabrication of metal thin films, thermal evaporation deposits bismuth into an array of vertical single-crystalline nanowires over a flat thin film of vanadium held at RT, freshly deposited by magnetron sputtering or thermal evaporation. By controlling the temperature of the growth substrate, the length and width of the nanowires can be tuned over a wide range. This novel technique relies on a previously unknown nanowire growth mechanism rooted in the mild porosity of the vanadium thin film. Infiltrated into the vanadium pores, the bismuth domains (~1 nm) carry excess surface energy that suppresses their melting point and continuously expels them out of the vanadium matrix to form nanowires. This discovery demonstrates the feasibility of scalable vapor-phase synthesis of high-purity nanomaterials without using any catalysts.
A scalable and continuous-upgradable optical wireless and wired convergent access network.
Sung, J Y; Cheng, K T; Chow, C W; Yeh, C H; Pan, C-L
2014-06-02
In this work, a scalable and continuously upgradable convergent optical access network is proposed. By using a multi-wavelength coherent comb source and a programmable waveshaper at the central office (CO), optical millimeter-wave (mm-wave) signals of different frequencies (from baseband to > 100 GHz) can be generated. Hence, it provides a scalable and continuously upgradable solution for end-users who need 60 GHz wireless services now and > 100 GHz wireless services in the future. During the upgrade, users only need to upgrade their optical networking unit (ONU); the CO remains intact. A programmable waveshaper is used to select the suitable optical tones with a wavelength separation equal to the desired mm-wave frequency. The centralized characteristics of the proposed system make it easy to add new services and end-users, and centralized control of the wavelengths makes the system more stable. A wired data rate of 17.45 Gb/s and a W-band wireless data rate of up to 3.36 Gb/s were demonstrated after transmission over 40 km of single-mode fiber (SMF).
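The tone-selection step reduces to simple arithmetic on the comb spacing: two comb lines separated by k line spacings beat at k times that spacing on a photodetector. The numbers below are hypothetical, chosen only to mirror the 60 GHz to >100 GHz upgrade path.

```python
# Illustrative sketch (hypothetical comb parameters): pick the tone-index
# offset whose optical beat note matches the desired mm-wave carrier.

def select_tone_pair(comb_spacing_ghz, target_ghz):
    """Return (index offset k, realized beat frequency) closest to target."""
    k = round(target_ghz / comb_spacing_ghz)
    if k < 1:
        raise ValueError("target frequency below the comb line spacing")
    return k, k * comb_spacing_ghz

# 20 GHz-spaced comb: 60 GHz service today, 100 GHz after an ONU upgrade;
# only the waveshaper's tone selection changes, not the comb source.
k60, f60 = select_tone_pair(20, 60)
k100, f100 = select_tone_pair(20, 100)
```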
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times, enabling a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
Adaptive format conversion for scalable video coding
NASA Astrophysics Data System (ADS)
Wan, Wade K.; Lim, Jae S.
2001-12-01
The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
Hill, Eric A.; Chrisler, William B.; Beliaev, Alex S.; ...
2017-01-03
A new co-cultivation technology is presented that converts the greenhouse gasses CH4 and CO2 into microbial biomass. The methanotrophic bacterium Methylomicrobium alcaliphilum 20z was coupled to a cyanobacterium, Synechococcus PCC 7002, via oxygenic photosynthesis. The system exhibited robust growth on diverse gas mixtures, ranging from biogas to those representative of a natural gas feedstock. A continuous process was developed on a synthetic natural gas feed that achieved steady state by imposing coupled light and O2 limitations on the cyanobacterium and methanotroph, respectively. Continuous co-cultivation resulted in an O2-depleted reactor and does not require CH4/O2 mixtures to be fed into the system, thereby enhancing process safety over traditional methanotroph mono-culture platforms. This co-culture technology is scalable with respect to its ability to utilize different gas streams, and its biological components are constructed from model bacteria that can be metabolically customized to produce a range of biofuels and bioproducts.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
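The core idea, replacing the exhaustive SVC mode search with a cheap learned rule derived from the decoded H.264/AVC stream, can be sketched as a toy decision stump. The feature, weighting, and threshold below are invented stand-ins, not the paper's trained classifier.

```python
# Hedged sketch of a learned fast mode decision (hypothetical features):
# cheap statistics from the decoded H.264/AVC macroblock decide whether to
# skip the expensive mode search during SVC re-encoding.

def fast_mode_decision(residual_energy, mv_magnitude, threshold=120.0):
    """Toy decision stump standing in for a data-mined classifier."""
    if residual_energy + 4.0 * mv_magnitude < threshold:
        return "SKIP"        # homogeneous, near-static block: no full search
    return "FULL_SEARCH"     # otherwise fall back to exhaustive evaluation

assert fast_mode_decision(10.0, 2.0) == "SKIP"
assert fast_mode_decision(500.0, 8.0) == "FULL_SEARCH"
```

In practice such rules are trained offline (e.g. as decision trees) on pairs of AVC-side features and the optimal SVC mode, which is how the reported ~87% complexity reduction is obtained without a large coding-efficiency loss.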
Air-stable ink for scalable, high-throughput layer deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weil, Benjamin D; Connor, Stephen T; Cui, Yi
A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.
Yi, Hoon; Hwang, Insol; Lee, Jeong Hyeon; Lee, Dael; Lim, Haneol; Tahk, Dongha; Sung, Minho; Bae, Won-Gyu; Choi, Se-Jin; Kwak, Moon Kyu; Jeong, Hoon Eui
2014-08-27
A simple yet scalable strategy for fabricating dry adhesives with mushroom-shaped micropillars is achieved by a combination of the roll-to-roll process and modulated UV-curable elastic poly(urethane acrylate) (e-PUA) resin. The e-PUA combines the major benefits of commercial PUA and poly(dimethylsiloxane) (PDMS). It not only can be cured within a few seconds like commercial PUA but also possesses good mechanical properties comparable to those of PDMS. A roll-type fabrication system equipped with a rollable mold and a UV exposure unit is also developed for the continuous process. By integrating the roll-to-roll process with the e-PUA, dry adhesives with spatulate tips in the form of a thin flexible film can be generated in a highly continuous and scalable manner. The fabricated dry adhesives with mushroom-shaped microstructures exhibit a strong pull-off strength of up to ~38.7 N cm⁻² on the glass surface as well as high durability without any noticeable degradation. Furthermore, an automated substrate transportation system equipped with the dry adhesives can transport a 300 mm Si wafer over 10,000 repeated cycles with high accuracy.
Continuous flow photochemistry.
Gilmore, Kerry; Seeberger, Peter H
2014-06-01
Because of the narrow tubing/reactors used, photochemistry performed in micro- and mesoflow systems is significantly more efficient than in batch, a consequence of the Beer-Lambert law. Owing to the constant removal of product and the ease of scaling flow chemistry, the degree of degradation observed is generally decreased and the productivity of photochemical processes is increased. In this Personal Account, we describe a wide range of photochemical transformations we have examined using both visible and UV light, covering cyclizations, intermolecular couplings, radical polymerizations, and singlet oxygen oxygenations.
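The Beer-Lambert argument can be made concrete with a short numerical sketch; the molar absorptivity and concentration below are hypothetical illustrative values, not figures from the account.

```python
# Why narrow flow reactors photolyze efficiently: transmittance follows
# T = 10**(-eps * c * l), so in a wide batch vessel the far side of the
# solution is almost completely shielded from the lamp.

def absorbed_fraction(eps, conc, path_cm):
    """Fraction of incident light absorbed over path_cm (Beer-Lambert)."""
    return 1 - 10 ** (-eps * conc * path_cm)

eps, c = 100.0, 0.01              # L mol^-1 cm^-1, mol L^-1 (hypothetical)
thin = absorbed_fraction(eps, c, 0.1)   # ~1 mm flow capillary: ~21% absorbed
thick = absorbed_fraction(eps, c, 5.0)  # ~5 cm batch flask: >99.9% absorbed
# In the flask nearly all photons are captured in the outer layers, so the
# bulk is dark; in the capillary the whole cross-section is lit uniformly.
```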
Electrochemical Impedance Sensors for Monitoring Trace Amounts of NO3 in Selected Growing Media.
Ghaffari, Seyed Alireza; Caron, William-O; Loubier, Mathilde; Normandeau, Charles-O; Viens, Jeff; Lamhamedi, Mohammed S; Gosselin, Benoit; Messaddeq, Younes
2015-07-21
With the advent of smart cities and big data, precision agriculture allows sensor data to be fed into online databases for continuous crop monitoring, production optimization, and data storage. This paper describes a low-cost, compact, and scalable nitrate sensor based on electrochemical impedance spectroscopy for monitoring trace amounts of NO3- in selected growing media. The nitrate sensor can be integrated with conventional microelectronics to perform online nitrate sensing continuously over a wide concentration range from 0.1 ppm to 100 ppm, with a response time of about 1 min, and feed data into a database for storage and analysis. The paper describes the structural design, the Nyquist impedance response, the measurement sensitivity and accuracy, and the field testing of the nitrate sensor performed within tree nursery settings under ISO/IEC 17025 certification.
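The Nyquist response mentioned above can be sketched with a simple Randles-type equivalent circuit; the component values below are hypothetical, not the sensor's actual parameters.

```python
# Hedged sketch: impedance of a Randles-type cell,
#   Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl),
# whose frequency sweep traces the familiar semicircle on a Nyquist plot.

import cmath

def randles_impedance(freq_hz, rs=100.0, rct=1000.0, cdl=1e-6):
    """Complex impedance at freq_hz for hypothetical circuit values (ohms)."""
    w = 2 * cmath.pi * freq_hz
    return rs + rct / (1 + 1j * w * rct * cdl)

# Nyquist points (Re(Z), -Im(Z)); the low-frequency limit approaches
# Rs + Rct and the high-frequency limit approaches Rs alone.
points = [(z.real, -z.imag) for z in
          (randles_impedance(f) for f in (1, 10, 100, 1000, 10000))]
```

Fitting measured spectra to such a model is one common way an impedance change is mapped to an analyte concentration via a calibration curve.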
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed over discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
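Because the source and channel rate sets are discrete, the allocation described above can be sketched as a brute-force search under a total-rate budget. The distortion model below is a made-up placeholder for illustration, not the paper's codec measurements.

```python
# Hedged sketch: exhaustive search over discrete (source rate, channel rate)
# pairs, minimizing a toy expected-distortion model under a rate budget.

def allocate(source_rates, channel_rates, budget, p_loss):
    """Return (distortion, source_rate, channel_rate) with lowest distortion."""
    best = None
    for rs in source_rates:
        for rc in channel_rates:
            if rs + rc > budget:
                continue  # violates the total transmission-rate budget
            # toy model: more source bits -> less quantization distortion;
            # more channel protection -> smaller expected loss penalty
            d = 1.0 / rs + p_loss * 50.0 / (1.0 + rc)
            if best is None or d < best[0]:
                best = (d, rs, rc)
    return best

best = allocate([100, 200, 300], [50, 100, 150], budget=350, p_loss=0.1)
```

Real systems replace the toy model with measured rate-distortion points per layer and a channel-loss model matched to the fading statistics.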
Adaptable Information Models in the Global Change Information System
NASA Astrophysics Data System (ADS)
Duggan, B.; Buddenberg, A.; Aulenbach, S.; Wolfe, R.; Goldstein, J.
2014-12-01
The US Global Change Research Program has sponsored the creation of the Global Change Information System (
Feasibility and scalability of spring parameters in distraction enterogenesis in a murine model.
Huynh, Nhan; Dubrovsky, Genia; Rouch, Joshua D; Scott, Andrew; Stelzner, Matthias; Shekherdimian, Shant; Dunn, James C Y
2017-07-01
Distraction enterogenesis has been investigated as a novel treatment for short bowel syndrome (SBS). Given variable intestinal sizes, it is critical to determine safe, translatable spring characteristics in differently sized animal models before clinical use. Nitinol springs have been shown to lengthen intestines in rats and pigs. Here, we show that spring-mediated intestinal lengthening is scalable and feasible in a murine model. A 10-mm nitinol spring was compressed to 3 mm and placed in a 5-mm intestinal segment isolated from continuity in mice. A noncompressed spring placed in a similar fashion served as a control. Spring parameters were proportionally extrapolated from previous spring parameters to accommodate the smaller size of murine intestines. After 2-3 wk, the intestinal segments were examined for size and histology. The experimental group, with spring constants k = 0.2-1.4 N/m, showed intestinal lengthening from 5.0 ± 0.6 mm to 9.5 ± 0.8 mm (P < 0.0001), whereas control segments lengthened from 5.3 ± 0.5 mm to 6.4 ± 1.0 mm (P < 0.02). Diameter increased similarly in both groups. Isolated segment perforation was noted when k ≥ 0.8 N/m. Histologically, lengthened segments had increased muscularis thickness and crypt depth in comparison to normal intestine. Nitinol springs with k ≤ 0.4 N/m can safely yield nearly 2-fold distraction enterogenesis in length and diameter in a scalable mouse model. This study not only derives safe ranges and translatable spring characteristics in a scalable murine model for patients with short bowel syndrome, it also demonstrates the feasibility of spring-mediated intestinal lengthening in a mouse, which can be used to study underlying mechanisms in the future.
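A back-of-envelope check of the reported spring parameters follows directly from Hooke's law, F = k·x. The arithmetic uses the murine numbers in the abstract; the force-based safety interpretation is ours, not the authors'.

```python
# Hooke's law check of the murine spring parameters from the abstract:
# a 10 mm spring compressed to 3 mm gives x = 7 mm of compression.

def spring_force_mN(k_n_per_m, compression_mm):
    """F = k * x, returned in millinewtons."""
    return k_n_per_m * (compression_mm / 1000.0) * 1000.0

compression = 10 - 3  # mm
safe = spring_force_mN(0.4, compression)         # k <= 0.4 N/m: safe range
perforation = spring_force_mN(0.8, compression)  # k >= 0.8 N/m: perforation
# ~2.8 mN vs ~5.6 mN: the perforation threshold sits at roughly twice the
# distraction force that still lengthened segments safely in mice.
```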
Diameter and Geometry Control of Vertically Aligned SWNTs through Catalyst Manipulation
NASA Astrophysics Data System (ADS)
Xiang, Rong; Einarsson, Erik; Okawa, Jun; Murakami, Yoichi; Maruyama, Shigeo
2009-03-01
We present our recent progress on manipulating our liquid-based catalyst loading process, which possesses greater potential than conventional deposition in terms of cost and scalability, to control the diameter and morphology of single-walled carbon nanotubes (SWNTs). We demonstrate that the diameter of aligned SWNTs synthesized by alcohol catalytic CVD can be tailored over a wide range by modifying the catalyst recipe. SWNT arrays with an average diameter as small as 1.2 nm were obtained by this method. Additionally, owing to the alignment of the array, the continuous change of the SWNT diameter during a single CVD process can be clearly observed and quantitatively characterized. We have also developed a versatile wet chemistry method to localize the growth of SWNTs to desired regions via surface modification. By functionalizing the silicon surface using a classic self-assembled monolayer, the catalyst can be selectively dip-coated onto hydrophilic areas of the substrate. This technique was successful in producing both random and aligned SWNTs with various patterns. The precise control of the diameter and morphology of SWNTs, achieved by simple and scalable liquid-based surface chemistry, could greatly facilitate the application of SWNTs as the building blocks of future nano-devices.
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
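The device-level load balancing described above can be illustrated with a minimal proportional-partitioning sketch. The device names and throughput figures below are illustrative assumptions, not values from the paper:

```python
# Sketch of device-level load balancing for a heterogeneous Monte Carlo
# photon simulation: the photon budget is split in proportion to each
# device's measured throughput (photons/s), so faster devices finish at
# roughly the same time as slower ones.

def partition_photons(total: int, throughputs: dict[str, float]) -> dict[str, int]:
    """Assign photon counts proportional to device throughput."""
    speed_sum = sum(throughputs.values())
    shares = {dev: int(total * s / speed_sum) for dev, s in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += total - sum(shares.values())
    return shares

work = partition_photons(1_000_000, {"gpu0": 8.0e6, "gpu1": 8.0e6, "cpu": 1.0e6})
```

A real implementation would measure throughputs with a short calibration run and re-balance periodically; the proportional split is the core idea.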
Tradespace and Affordability - Phase 2
2013-12-31
infrastructure capacity. Figure 15 locates the thirteen feasible configurations in survivability-mobility capability space (capability levels are scaled...battery power, or display size decreases. Other quantities may be applicable, such as the number of nodes in a scalable-up mobile network or the...limited size of a scalable-down mobile platform. Versatility involves the range of capabilities provided by a system as it is currently configured. A
Leveraging the Cloud for Integrated Network Experimentation
2014-03-01
kernel settings, or any of the low-level subcomponents. 3. Scalable Solutions: Businesses can build scalable solutions for their clients, ranging from...values. These values can assume several distributions that include normal, Pareto, uniform, exponential and Poisson, among others [21]. Additionally, D...communication, the web client establishes a connection to the server before traffic begins to flow. Web servers do not initiate connections to clients in
Scalable digital hardware for a trapped ion quantum computer
NASA Astrophysics Data System (ADS)
Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang
2016-12-01
Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and a scheme for maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.
Continuous Variable Cluster State Generation over the Optical Spatial Mode Comb
Pooser, Raphael C.; Jing, Jietai
2014-10-20
One way quantum computing uses single qubit projective measurements performed on a cluster state (a highly entangled state of multiple qubits) in order to enact quantum gates. The model is promising due to its potential scalability; the cluster state may be produced at the beginning of the computation and operated on over time. Continuous variables (CV) offer another potential benefit in the form of deterministic entanglement generation. This determinism can lead to robust cluster states and scalable quantum computation. Recent demonstrations of CV cluster states have made great strides on the path to scalability utilizing either time or frequency multiplexing in optical parametric oscillators (OPO) both above and below threshold. The techniques relied on a combination of entangling operators and beam splitter transformations. Here we show that an analogous transformation exists for amplifiers with Gaussian input states operating on multiple spatial modes. By judicious selection of local oscillators (LOs), the spatial mode distribution is analogous to the optical frequency comb consisting of axial modes in an OPO cavity. We outline an experimental system that generates cluster states across the spatial frequency comb which can also scale the amount of quantum noise reduction to potentially larger than in other systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
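The workload-proportional replication idea can be sketched as a greedy allocator that keeps particles-per-replica as flat as possible. The domain names and particle counts below are invented for illustration, not taken from the paper:

```python
# Sketch of workload-proportional domain replication for domain-decomposed
# Monte Carlo transport: domains carrying more particles receive more
# replicas, so the particles-per-process load stays roughly flat.

def replication_levels(particles: dict[str, int], n_procs: int) -> dict[str, int]:
    """Give each domain one replica, then hand out the remaining processes
    one at a time to whichever domain has the highest particles-per-replica."""
    levels = {dom: 1 for dom in particles}
    extra = n_procs - len(particles)
    for _ in range(extra):
        busiest = max(levels, key=lambda d: particles[d] / levels[d])
        levels[busiest] += 1
    return levels

levels = replication_levels({"A": 900, "B": 90, "C": 10}, n_procs=12)
```

With 12 processes and a 900/90/10 particle split, the heavy domain absorbs nearly all of the extra replicas, which is exactly the imbalance-driven replication the abstract describes.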
Superlinearly scalable noise robustness of redundant coupled dynamical systems.
Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L
2016-03-01
We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
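For context, the linear baseline the authors improve upon is easy to reproduce: averaging N independently noisy copies of a signal cuts the noise standard deviation by √N. The sketch below demonstrates only this baseline; the superlinear regime depends on the nonlinearity and dynamics of the coupled units, which are not modeled here:

```python
# Baseline (linear) noise suppression from redundancy: the average of N
# independent noisy readouts has its noise std reduced by sqrt(N).
# This is the linear scalability prior work established; it does NOT
# reproduce the paper's superlinear effect.
import random

def noisy_readout(signal: float, sigma: float, n_units: int, rng: random.Random) -> float:
    """Average of n_units redundant units, each perturbed by local Gaussian noise."""
    return sum(signal + rng.gauss(0.0, sigma) for _ in range(n_units)) / n_units

def empirical_std(n_units: int, trials: int = 20000, sigma: float = 1.0) -> float:
    rng = random.Random(0)  # fixed seed for reproducibility
    vals = [noisy_readout(0.0, sigma, n_units, rng) for _ in range(trials)]
    mean = sum(vals) / trials
    return (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5

# The std for N=16 units should be roughly 1/4 of the std for N=1.
```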
Scalable 3D bicontinuous fluid networks: polymer heat exchangers toward artificial organs.
Roper, Christopher S; Schubert, Randall C; Maloney, Kevin J; Page, David; Ro, Christopher J; Yang, Sophia S; Jacobsen, Alan J
2015-04-17
A scalable method for fabricating architected materials well-suited for heat and mass exchange is presented. These materials exhibit unprecedented combinations of small hydraulic diameters (13.0-0.09 mm) and large hydraulic-diameter-to-thickness ratios (5.0-30,100). This process expands the range of material architectures achievable starting from photopolymer waveguide lattices or additive manufacturing. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
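The hydraulic diameters quoted above follow the standard definition D_h = 4A/P (four times the flow cross-section over its wetted perimeter). A minimal sketch for a rectangular channel, with illustrative dimensions not taken from the paper:

```python
# Hydraulic diameter of a rectangular channel, D_h = 4A / P, the quantity
# the abstract uses to characterize the bicontinuous fluid networks.

def hydraulic_diameter_rect(width_mm: float, height_mm: float) -> float:
    area = width_mm * height_mm
    perimeter = 2.0 * (width_mm + height_mm)
    return 4.0 * area / perimeter

d_h = hydraulic_diameter_rect(1.0, 1.0)  # a square 1 mm channel has D_h = 1.0 mm
```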
Comprehensive Materials and Morphologies Study of Ion Traps (COMMIT) for Scalable Quantum Computation - Final Report
2012-04-21
Trapped ion systems are extremely promising for large-scale quantum computation, but face a vexing problem with motional quantum...the photoelectric effect. The typical shortest wavelengths needed for ion traps range from 194 nm for Hg+ to 493 nm for Ba+, corresponding to 6.4-2.5...
Development of NASA's Small Fission Power System for Science and Human Exploration
NASA Technical Reports Server (NTRS)
Gibson, Marc A.; Mason, Lee; Bowman, Cheryl; Poston, David I.; McClure, Patrick R.; Creasy, John; Robinson, Chris
2014-01-01
Exploration of our solar system has brought great knowledge to our nation's scientific and engineering community over the past several decades. As we expand our visions to explore new, more challenging destinations, we must also expand our technology base to support these new missions. NASA's Space Technology Mission Directorate is tasked with developing these technologies for future mission infusion and continues to seek answers to many existing technology gaps. One such technology gap is related to compact power systems (greater than 1 kWe) that provide abundant power for several years where solar energy is unavailable or inadequate. Below 1 kWe, Radioisotope Power Systems have been the workhorse for NASA and will continue, assuming its availability, to be used for lower power applications similar to the successful missions of Voyager, Ulysses, New Horizons, Cassini, and Curiosity. Above 1 kWe, fission power systems become an attractive technology offering a scalable modular design of the reactor, shield, power conversion, and heat transport subsystems. Near term emphasis has been placed in the 1-10kWe range that lies outside realistic radioisotope power levels and fills a promising technology gap capable of enabling both science and human exploration missions. History has shown that development of space reactors is technically, politically, and financially challenging and requires a new approach to their design and development. A small team of NASA and DOE experts are providing a solution to these enabling FPS technologies starting with the lowest power and most cost effective reactor series named "Kilopower" that is scalable from approximately 1-10 kWe.
Development of NASA's Small Fission Power System for Science and Human Exploration
NASA Technical Reports Server (NTRS)
Gibson, Marc A.; Mason, Lee S.; Bowman, Cheryl L.; Poston, David I.; McClure, Patrick R.; Creasy, John; Robinson, Chris
2015-01-01
Exploration of our solar system has brought many exciting challenges to our nation's scientific and engineering community over the past several decades. As we expand our visions to explore new, more challenging destinations, we must also expand our technology base to support these new missions. NASA's Space Technology Mission Directorate is tasked with developing these technologies for future mission infusion and continues to seek answers to many existing technology gaps. One such technology gap is related to compact power systems (greater than 1 kWe) that provide abundant power for several years where solar energy is unavailable or inadequate. Below 1 kWe, Radioisotope Power Systems have been the workhorse for NASA and will continue to be used for lower power applications similar to the successful missions of Voyager, Ulysses, New Horizons, Cassini, and Curiosity. Above 1 kWe, fission power systems become an attractive technology offering a scalable modular design of the reactor, shield, power conversion, and heat transport subsystems. Near term emphasis has been placed in the 1-10 kWe range that lies outside realistic radioisotope power levels and fills a promising technology gap capable of enabling both science and human exploration missions. History has shown that development of space reactors is technically, politically, and financially challenging and requires a new approach to their design and development. A small team of NASA and DOE experts are providing a solution to these enabling FPS technologies starting with the lowest power and most cost effective reactor series named "Kilopower" that is scalable from approximately 1-10 kWe.
Down-Bore Two-Laser Heterodyne Velocimetry of an Implosion-Driven Hypervelocity Launcher
NASA Astrophysics Data System (ADS)
Hildebrand, Myles; Huneault, Justin; Loiseau, Jason; Higgins, Andrew J.
2015-06-01
The implosion-driven launcher uses explosives to shock-compress helium, driving well-characterized projectiles to velocities exceeding 10 km/s. The masses of projectiles range between 0.1 - 10 g, and the design shows excellent scalability, reaching similar velocities across different projectile sizes. In the past, velocity measurements have been limited to muzzle velocity obtained via high-speed videography as the projectile exits the launch tube. Recently, Photonic Doppler Velocimetry (PDV) has demonstrated the ability to continuously measure in-bore velocity, even in the presence of significant blow-by of high temperature helium propellant past the projectile. While a single-laser PDV is limited to approximately 8 km/s, a two-laser PDV system is developed that uses two lasers operating near 1550 nm to provide velocity measurement capabilities up to 16 km/s. The two-laser PDV system is used to obtain a continuous velocity history of the projectile throughout the entire launch cycle. These continuous velocity data are used to validate models of the launcher cycle and compare different advanced concepts aimed at increasing the projectile velocity to well beyond 10 km/s.
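The velocity limits quoted above follow from the heterodyne relation f_beat = 2v/λ: the detector bandwidth caps the measurable beat frequency, and hence the velocity. A small sketch, assuming the ~1550 nm wavelength from the abstract:

```python
# Heterodyne (PDV) relation: a surface moving at velocity v beats against
# the reference laser at f = 2 v / lambda, so detection bandwidth limits
# the maximum measurable velocity. Wavelength per the abstract (~1550 nm).

LAMBDA_M = 1550e-9  # laser wavelength, m

def beat_frequency_hz(velocity_m_s: float) -> float:
    return 2.0 * velocity_m_s / LAMBDA_M

def max_velocity_m_s(bandwidth_hz: float) -> float:
    return bandwidth_hz * LAMBDA_M / 2.0

# 8 km/s at 1550 nm already produces a ~10.3 GHz beat, which is why a
# second, frequency-offset laser is needed to reach 16 km/s.
f = beat_frequency_hz(8000.0)
```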
Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S
2017-04-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypothesis and testing and validating the existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. 
As such, the software scalability and usability on the less than exotic computer hardware are a priority, as well as the applicability of the algorithm and software to the heterogeneous datasets containing many data types-single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.
On-chip detection of non-classical light by scalable integration of single-photon detectors
Najafi, Faraz; Mower, Jacob; Harris, Nicholas C.; Bellei, Francesco; Dane, Andrew; Lee, Catherine; Hu, Xiaolong; Kharel, Prashanta; Marsili, Francesco; Assefa, Solomon; Berggren, Karl K.; Englund, Dirk
2015-01-01
Photonic-integrated circuits have emerged as a scalable platform for complex quantum systems. A central goal is to integrate single-photon detectors to reduce optical losses, latency and wiring complexity associated with off-chip detectors. Superconducting nanowire single-photon detectors (SNSPDs) are particularly attractive because of high detection efficiency, sub-50-ps jitter and nanosecond-scale reset time. However, while single detectors have been incorporated into individual waveguides, the system detection efficiency of multiple SNSPDs in one photonic circuit—required for scalable quantum photonic circuits—has been limited to <0.2%. Here we introduce a micrometer-scale flip-chip process that enables scalable integration of SNSPDs on a range of photonic circuits. Ten low-jitter detectors are integrated on one circuit with 100% device yield. With an average system detection efficiency beyond 10%, and estimated on-chip detection efficiency of 14–52% for four detectors operated simultaneously, we demonstrate, to the best of our knowledge, the first on-chip photon correlation measurements of non-classical light. PMID:25575346
A Low-Noise Transimpedance Amplifier for BLM-Based Ion Channel Recording.
Crescentini, Marco; Bennati, Marco; Saha, Shimul Chandra; Ivica, Josip; de Planque, Maurits; Morgan, Hywel; Tartagni, Marco
2016-05-19
High-throughput screening (HTS) using ion channel recording is a powerful drug discovery technique in pharmacology. Ion channel recording with planar bilayer lipid membranes (BLM) is scalable and has very high sensitivity. A HTS system based on BLM ion channel recording faces three main challenges: (i) design of scalable microfluidic devices; (ii) design of compact ultra-low-noise transimpedance amplifiers able to detect currents in the pA range with bandwidth >10 kHz; (iii) design of compact, robust and scalable systems that integrate these two elements. This paper presents a low-noise transimpedance amplifier with integrated A/D conversion realized in CMOS 0.35 μm technology. The CMOS amplifier acquires currents in the range ±200 pA and ±20 nA, with 100 kHz bandwidth while dissipating 41 mW. An integrated digital offset compensation loop balances any voltage offsets from Ag/AgCl electrodes. The measured open-input input-referred noise current is as low as 4 fA/√Hz at ±200 pA range. The current amplifier is embedded in an integrated platform, together with a microfluidic device, for current recording from ion channels. Gramicidin-A, α-haemolysin and KcsA potassium channels have been used to prove both the platform and the current-to-digital converter.
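The quoted noise density translates into an integrated RMS current in the low-picoamp range. A one-line estimate, assuming a flat (white) noise spectrum over the full 100 kHz bandwidth (a simplification of the real noise profile):

```python
# Integrated RMS noise current from the amplifier's reported white-noise
# density (4 fA/sqrt(Hz)) over its 100 kHz bandwidth, assuming a flat
# spectrum: I_rms = density * sqrt(bandwidth).
import math

def rms_noise_current_a(density_a_per_rthz: float, bandwidth_hz: float) -> float:
    return density_a_per_rthz * math.sqrt(bandwidth_hz)

i_rms = rms_noise_current_a(4e-15, 100e3)  # ~1.26 pA over the full bandwidth
```

An integrated noise floor near 1.3 pA against a ±200 pA input range is what makes single-channel (pA-scale) events resolvable.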
Grundy, Lorena S; Lee, Victoria E; Li, Nannan; Sosa, Chris; Mulhearn, William D; Liu, Rui; Register, Richard A; Nikoubashman, Arash; Prud'homme, Robert K; Panagiotopoulos, Athanassios Z; Priestley, Rodney D
2018-05-08
Colloids with internally structured geometries have shown great promise in applications ranging from biosensors to optics to drug delivery, where the internal particle structure is paramount to performance. The growing demand for such nanomaterials necessitates the development of a scalable processing platform for their production. Flash nanoprecipitation (FNP), a rapid and inherently scalable colloid precipitation technology, is used to prepare internally structured colloids from blends of block copolymers and homopolymers. As revealed by a combination of experiments and simulations, colloids prepared from different molecular weight diblock copolymers adopt either an ordered lamellar morphology consisting of concentric shells or a disordered lamellar morphology when chain dynamics are sufficiently slow to prevent defect annealing during solvent exchange. Blends of homopolymer and block copolymer in the feed stream generate more complex internally structured colloids, such as those with hierarchically structured Janus and patchy morphologies, due to additional phase separation and kinetic trapping effects. The ability of the FNP process to generate such a wide range of morphologies using a simple and scalable setup provides a pathway to manufacturing internally structured colloids on an industrial scale.
Design study of a continuously variable roller cone traction CVT for electric vehicles
NASA Technical Reports Server (NTRS)
Mccoin, D. K.; Walker, R. D.
1980-01-01
Continuously variable ratio transmissions (CVT) featuring cone and roller traction elements and computerized controls are studied. The CVT meets or exceeds all requirements set forth in the design criteria. Further, a scalability analysis indicates the basic concept is applicable to lower and higher power units, with upward scaling for increased power being more readily accomplished.
Direct writing of half-meter long CNT based fiber for flexible electronics.
Huang, Sihan; Zhao, Chunsong; Pan, Wei; Cui, Yi; Wu, Hui
2015-03-11
Rapid construction of flexible circuits has attracted increasing attention owing to its important applications in future smart electronic devices. Herein, we introduce a convenient and efficient "writing" approach to fabricate and assemble ultralong functional fibers as fundamental building blocks for flexible electronic devices. We demonstrated that, by a simple hand-writing process, carbon nanotubes (CNTs) can be aligned inside a continuous and uniform polymer fiber with a length of more than 50 cm and diameters ranging from 300 nm to several micrometers. The as-prepared continuous fibers exhibit high electrical conductivity as well as superior mechanical flexibility (no obvious conductance increase after 1000 bending cycles to 4 mm diameter). Such functional fibers can be easily configured into designed patterns with high precision thanks to the easy "writing" process. The easy construction and assembly of functional fibers shown here holds potential for convenient and scalable fabrication of flexible circuits in future smart devices such as wearable electronics and three-dimensional (3D) electronic devices.
Wang, Sibo; Wu, Yunchao; Miao, Ran; ...
2017-07-26
Scalable and cost-effective synthesis and assembly of technologically important nanostructures in three-dimensional (3D) substrates hold keys to bridge the demonstrated nanotechnologies in academia with industrially relevant scalable manufacturing. In this paper, using ZnO nanorod arrays as an example, a hydrothermal-based continuous flow synthesis (CFS) method is successfully used to integrate the nano-arrays in multi-channeled monolithic cordierite. Compared to the batch process, CFS enhances the average growth rate of nano-arrays by 125%, with the average length increasing from 2 μm to 4.5 μm within the same growth time of 4 hours. The precursor utilization efficiency of CFS is enhanced by 9 times compared to that of the batch process by preserving the majority of precursors in recyclable solution. Computational fluid dynamic simulation suggests a steady-state solution flow and mass transport inside the channels of honeycomb substrates, giving rise to steady and consecutive growth of ZnO nano-arrays with an average length of 10 μm in 12 h. The monolithic ZnO nano-array-integrated cordierite obtained through CFS shows enhanced low-temperature (200 °C) desulfurization capacity and recyclability in comparison to ZnO powder wash-coated cordierite. This can be attributed to exposed ZnO {101¯0} planes, better dispersion and stronger interactions between sorbent and reactant in the ZnO nanorod arrays, as well as the sintering-resistance of nano-array configurations during sulfidation–regeneration cycles. Finally, with the demonstrated scalable synthesis and desulfurization performance of ZnO nano-arrays, a promising, industrially relevant integration strategy is provided to fabricate metal oxide nano-array-based monolithic devices for various environmental and energy applications.
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine a highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set up. To parameterize an inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
Scalability of grid- and subbasin-based land surface modeling approaches for hydrologic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesfa, Teklu K.; Ruby Leung, L.; Huang, Maoyi
2014-03-27
This paper investigates the relative merits of grid- and subbasin-based land surface modeling approaches for hydrologic simulations, with a focus on their scalability (i.e., their ability to perform consistently across a range of spatial resolutions) in simulating runoff generation. Simulations produced by the grid- and subbasin-based configurations of the Community Land Model (CLM) are compared at four spatial resolutions (0.125°, 0.25°, 0.5° and 1°) over the topographically diverse region of the U.S. Pacific Northwest. Using the 0.125° resolution simulation as the “reference”, statistical skill metrics are calculated and compared across simulations at 0.25°, 0.5° and 1° spatial resolutions of each modeling approach at basin and topographic-region levels. Results suggest a significant scalability advantage for the subbasin-based approach compared to the grid-based approach for runoff generation. Basin-level annual average relative errors of surface runoff at 0.25°, 0.5°, and 1° compared to 0.125° are 3%, 4%, and 6% for the subbasin-based configuration and 4%, 7%, and 11% for the grid-based configuration, respectively. The scalability advantages of the subbasin-based approach are more pronounced during winter/spring and over mountainous regions. The source of runoff scalability is found to be related to the scalability of major meteorological and land surface parameters of runoff generation. More specifically, the subbasin-based approach is more consistent across spatial scales than the grid-based approach in snowfall/rainfall partitioning, which is related to air temperature and surface elevation. Scalability of a topographic parameter used in the runoff parameterization also contributes to improved scalability of the rain-driven saturated surface runoff component, particularly during winter. Hence this study demonstrates the importance of spatial structure for multi-scale modeling of hydrological processes, with implications for surface heat fluxes in coupled land-atmosphere modeling.
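The skill metric quoted above (annual-average relative error of surface runoff at a coarse resolution against the 0.125° reference) can be sketched as follows. The basin runoff values are invented for illustration; the paper's actual metrics and aggregation levels are more involved.

```python
# Hedged sketch: percent relative error of basin-total surface runoff at a
# coarse resolution versus the fine-resolution "reference" simulation.

def relative_error(coarse, reference):
    """Percent relative error of total runoff vs. the reference run."""
    return 100.0 * abs(sum(coarse) - sum(reference)) / sum(reference)

reference = [120.0, 95.0, 140.0, 110.0]   # 0.125-deg basin runoff (mm/yr)
coarse    = [118.0, 99.0, 133.0, 110.0]   # same basins simulated at 1 deg

err = relative_error(coarse, reference)   # small err => scalable approach
```

In this framing, an approach is "scalable" when `err` stays small as the grid coarsens, which is what the subbasin-based configuration achieves (3-6% vs. 4-11% for the grid-based one).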
Cambié, Dario; Bottecchia, Cecilia; Straathof, Natan J W; Hessel, Volker; Noël, Timothy
2016-09-14
Continuous-flow photochemistry in microreactors receives a lot of attention from researchers in academia and industry as this technology provides reduced reaction times, higher selectivities, straightforward scalability, and the possibility to safely use hazardous intermediates and gaseous reactants. In this review, an up-to-date overview is given of photochemical transformations in continuous-flow reactors, including applications in organic synthesis, material science, and water treatment. In addition, the advantages of continuous-flow photochemistry are pointed out and a thorough comparison with batch processing is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrest, C. J.; Radha, P. B.; Knauer, J. P.
In this study, the deuterium-tritium (D-T) and deuterium-deuterium neutron yield ratio in cryogenic inertial confinement fusion (ICF) experiments is used to examine multifluid effects, traditionally not included in ICF modeling. This ratio has been measured for ignition-scalable direct-drive cryogenic DT implosions at the Omega Laser Facility using a high-dynamic-range neutron time-of-flight spectrometer. The experimentally inferred yield ratio is consistent with both the calculated values of the nuclear reaction rates and the measured preshot target-fuel composition. These observations indicate that the physical mechanisms that have been proposed to alter the fuel composition, such as species separation of the hydrogen isotopes, are not significant during the period of peak neutron production in ignition-scalable cryogenic direct-drive DT implosions.
Forrest, C. J.; Radha, P. B.; Knauer, J. P.; ...
2017-03-03
In this study, the deuterium-tritium (D-T) and deuterium-deuterium neutron yield ratio in cryogenic inertial confinement fusion (ICF) experiments is used to examine multifluid effects, traditionally not included in ICF modeling. This ratio has been measured for ignition-scalable direct-drive cryogenic DT implosions at the Omega Laser Facility using a high-dynamic-range neutron time-of-flight spectrometer. The experimentally inferred yield ratio is consistent with both the calculated values of the nuclear reaction rates and the measured preshot target-fuel composition. These observations indicate that the physical mechanisms that have been proposed to alter the fuel composition, such as species separation of the hydrogen isotopes, are not significant during the period of peak neutron production in ignition-scalable cryogenic direct-drive DT implosions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod
2006-04-15
In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continued execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency, requiring no changes to user applications. Our technology is based on a global coordination mechanism that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5 μs; and it supports incremental and full checkpoints with minimal overhead, less than 6% with full checkpointing to disk performed as frequently as once per minute.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background For shotgun mass-spectrometry-based proteomics, the most computationally expensive step is matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore, solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
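The MapReduce-style partitioning behind such a search engine can be sketched as follows: a map step buckets candidate peptides by precursor-mass bin, and a reduce-like step matches each spectrum only against the candidates in its bin. The bin width, data shapes, and trivial "scoring" (returning the candidate list) are illustrative assumptions; the real engine implements the K-score algorithm on Hadoop.

```python
# Illustrative map/reduce sketch of database search: bucket peptides by
# precursor mass so each spectrum is compared only to plausible candidates.

from collections import defaultdict

def mass_bin(mass, width=1.0):
    return round(mass / width)

def map_peptides(peptides):
    """Map: bucket candidate peptides by precursor-mass bin."""
    bins = defaultdict(list)
    for pep, mass in peptides:
        bins[mass_bin(mass)].append(pep)
    return bins

def reduce_spectra(spectra, bins):
    """Reduce: match each spectrum against its mass bin's candidates only."""
    hits = {}
    for spec_id, precursor in spectra:
        hits[spec_id] = bins.get(mass_bin(precursor), [])
    return hits

peptides = [("PEPTIDE", 799.4), ("SAMPLER", 801.4), ("PROTEIN", 799.9)]
spectra = [("scan_1", 799.6), ("scan_2", 801.1)]
hits = reduce_spectra(spectra, map_peptides(peptides))
```

Because the bins are independent, the reduce work shards naturally across cluster nodes, which is where the near-linear throughput scaling comes from.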
SP-100 - The national space reactor power system program in response to future needs
NASA Astrophysics Data System (ADS)
Armijo, J. S.; Josloff, A. T.; Bailey, H. S.; Matteo, D. N.
The SP-100 system has been designed to meet comprehensive and demanding NASA/DOD/DOE requirements. The key requirements include: nuclear safety for all mission phases, scalability from tens to hundreds of kWe, reliable performance at full power for seven years or partial power for ten years, survivability in civil or military threat environments, capability to operate autonomously for up to six months, capability to protect payloads from excessive radiation, and compatibility with shuttle and expendable launch vehicles. The authors address major progress in terms of design, flexibility/scalability, survivability, and development. These areas, with the exception of survivability, are discussed in detail. There has been significant improvement in the generic flight system design, with substantial mass savings and simplification that enhance performance and reliability. Design activity has confirmed the scalability and flexibility of the system and the ability to efficiently meet NASA, AF, and SDIO needs. SP-100 development continues to make significant progress in all key technology areas.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass-spectrometry-based proteomics, the most computationally expensive step is matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore, solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
k-RP*_s: A scalable distributed data structure for high-performance multi-attribute access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litwin, W.; Neimat, M.A.
k-RP*_s is a new data structure for scalable multicomputer files with multi-attribute (k-d) keys. We discuss the k-RP*_s file evolution and search algorithms. Performance analysis shows that a k-RP*_s file can be much larger and orders of magnitude faster than a traditional k-d file. The speed-up is especially important for range and partial-match searches that are often impractical with traditional k-d files. This opens up a new perspective for many applications.
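The multi-attribute access pattern above can be illustrated with a plain in-memory k-d tree and a range search; this sketch is an assumption for illustration only, not the paper's distributed k-RP*_s file, which spreads such a structure across a multicomputer.

```python
# Minimal k-d tree with multi-attribute range search, the kind of query
# that k-d structures accelerate over flat scans.

def build(points, depth=0):
    """Recursively build a k-d tree, cycling the split axis by depth."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def range_search(node, lo, hi, depth=0, out=None):
    """Collect every point p with lo[i] <= p[i] <= hi[i] on all axes."""
    if out is None:
        out = []
    if node is None:
        return out
    p = node["point"]
    axis = depth % len(p)
    if all(l <= x <= h for l, x, h in zip(lo, p, hi)):
        out.append(p)
    if lo[axis] <= p[axis]:                 # left subtree may qualify
        range_search(node["left"], lo, hi, depth + 1, out)
    if p[axis] <= hi[axis]:                 # right subtree may qualify
        range_search(node["right"], lo, hi, depth + 1, out)
    return out

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
inside = range_search(tree, (3, 1), (8, 5))   # query box [3,8] x [1,5]
```

The subtree-pruning tests on the split axis are what make range and partial-match queries cheap; a distributed variant additionally routes each query only to the servers whose key regions overlap the box.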
Huang, Yin; Zheng, Ning; Cheng, Zhiqiang; Chen, Ying; Lu, Bingwei; Xie, Tao; Feng, Xue
2016-12-28
Flexible and stretchable electronics offer a wide range of unprecedented opportunities beyond conventional rigid electronics. Despite their vast promise, a significant bottleneck lies in the availability of a transfer printing technique to manufacture such devices in a highly controllable and scalable manner. Current technologies usually rely on manual stick-and-place and do not offer feasible mechanisms for precise and quantitative process control, especially when scalability is taken into account. Here, we demonstrate a spatioselective and programmable transfer strategy to print electronic microelements onto a soft substrate. The method takes advantage of automated direct laser writing to trigger localized heating of a micropatterned shape memory polymer adhesive stamp, allowing highly controlled and spatioselective switching of the interfacial adhesion. This, coupled to the proper tuning of the stamp properties, enables printing with perfect yield. The wide range adhesion switchability further allows printing of hybrid electronic elements, which is otherwise challenging given the complex interfacial manipulation involved. Our temperature-controlled transfer printing technique shows its critical importance and obvious advantages in the potential scale-up of device manufacturing. Our strategy opens a route to manufacturing flexible electronics with exceptional versatility and potential scalability.
Continuous, One-pot Synthesis and Post-Synthetic Modification of NanoMOFs Using Droplet Nanoreactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jambovane, Sachin R.; Nune, Satish K.; Kelly, Ryan T.
Metal-organic frameworks (MOFs), also known as porous coordination polymers (PCPs), are a class of porous crystalline materials constructed by connecting metal clusters via organic linkers. The possibility of functionalization leads to virtually infinite MOF designs using generic modular methods. Functionalized MOFs can exhibit interesting physical and chemical properties including accelerated adsorption kinetics and catalysis. Although there are discrete methods to synthesize well-defined nanoscale MOFs, rapid and flexible methods are not available for continuous, one-pot synthesis and post-synthesis modification (functionalization) of MOFs. Here, we show a continuous, scalable nanodroplet-based microfluidic route that not only facilitates the synthesis of MOFs at the nanoscale, but also offers flexibility for direct functionalization with desired functional groups (e.g., -NH2, -COCH3, fluorescein isothiocyanate; FITC). In addition, the presented route of continuous manufacturing of functionalized MOFs takes significantly less time compared to state-of-the-art batch methods currently available (1 h vs. several days). We envisage our approach to be a breakthrough method for synthesizing complex functionalized nanomaterials (metals, metal oxides, quantum dots and MOFs) that are not accessible by direct batch processing, and to expand the range of a new class of functionalized MOF-based functional nanomaterials.
Kidambi, Piran R; Mariappan, Dhanushkodi D; Dee, Nicholas T; Vyatskikh, Andrey; Zhang, Sui; Karnik, Rohit; Hart, A John
2018-03-28
Scalable, cost-effective synthesis and integration of graphene is imperative to realize large-area applications such as nanoporous atomically thin membranes (NATMs). Here, we report a scalable route to the production of NATMs via high-speed, continuous synthesis of large-area graphene by roll-to-roll chemical vapor deposition (CVD), combined with casting of a hierarchically porous polymer support. To begin, we designed and built a two-zone roll-to-roll graphene CVD reactor, which sequentially exposes the moving foil substrate to annealing and growth atmospheres, with a sharp, isothermal transition between the zones. The configurational flexibility of the reactor design allows for a detailed evaluation of key parameters affecting graphene quality and trade-offs to be considered for high-rate roll-to-roll graphene manufacturing. With this system, we achieve synthesis of uniform high-quality monolayer graphene (ID/IG < 0.065) at speeds ≥5 cm/min. NATMs fabricated from the optimized graphene, via polymer casting and postprocessing, show size-selective molecular transport with performance comparable to that of membranes made from conventionally synthesized graphene. Therefore, this work establishes the feasibility of a scalable manufacturing process of NATMs, for applications including protein desalting and small-molecule separations.
Cotič, Živa; Rees, Rebecca; Wark, Petra A; Car, Josip
2016-10-19
In 2013, there was a shortage of approximately 7.2 million health workers worldwide, which is larger among family physicians than among specialists. eLearning could provide a potential solution to some of these global workforce challenges. However, there is little evidence on factors facilitating or hindering implementation, adoption, use, scalability and sustainability of eLearning. This review aims to synthesise results from qualitative and mixed methods studies to provide insight on factors influencing implementation of eLearning for family medicine specialty education and training. Additionally, this review aims to identify the actions needed to increase effectiveness of eLearning and identify the strategies required to improve eLearning implementation, adoption, use, sustainability and scalability for family medicine speciality education and training. A systematic search will be conducted across a range of databases for qualitative studies focusing on experiences, barriers, facilitators, and other factors related to the implementation, adoption, use, sustainability and scalability of eLearning for family medicine specialty education and training. Studies will be synthesised by using the framework analysis approach. This study will contribute to the evaluation of eLearning implementation, adoption, use, sustainability and scalability for family medicine specialty training and education and the development of eLearning guidelines for postgraduate medical education. PROSPERO http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42016036449.
Electrohydrodynamic printing for scalable MoS2 flake coating: application to gas sensing device
NASA Astrophysics Data System (ADS)
Lim, Sooman; Cho, Byungjin; Bae, Jaehyun; Kim, Ah Ra; Lee, Kyu Hwan; Kim, Se Hyun; Hahm, Myung Gwan; Nam, Jaewook
2016-10-01
Scalable sub-micrometer molybdenum disulfide (MoS2) flake films with highly uniform coverage were created using a systematic approach. An electrohydrodynamic (EHD) printing process realized a remarkably uniform distribution of exfoliated MoS2 flakes on desired substrates. In combination with a fast-evaporating dispersion medium and an optimal choice of operating parameters, EHD printing can produce a film rapidly on a substrate without excessive agglomeration or cluster formation, which can be problems in previously reported liquid-based continuous film methods. The printing of exfoliated MoS2 flakes enabled the fabrication of a gas sensor with high performance and reproducibility for NO2 and NH3.
NASA Astrophysics Data System (ADS)
Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong
As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the consumer level. The large number of users and the high frequency of job requests in the consumer market make scaling such systems challenging. Clearly, current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture to realize a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.
Breaking BAD: A Data Serving Vision for Big Active Data
Carey, Michael J.; Jacobs, Steven; Tsotras, Vassilis J.
2017-01-01
Virtually all of today’s Big Data systems are passive in nature. Here we describe a project to shift Big Data platforms from passive to active. We detail a vision for a scalable system that can continuously and reliably capture Big Data to enable timely and automatic delivery of new information to a large pool of interested users as well as supporting analyses of historical information. We are currently building a Big Active Data (BAD) system by extending an existing scalable open-source BDMS (AsterixDB) in this active direction. This first paper zooms in on the Data Serving piece of the BAD puzzle, including its key concepts and user model. PMID:29034377
Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services
NASA Astrophysics Data System (ADS)
Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.
Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs (conjunction data messages) outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: (1) scientific algorithms and SpaceNav tools integrated into a scalable architecture: (a) maneuver planning, (b) parallel processing, (c) Monte Carlo simulations, (d) optimization algorithms, (e) software application development/integration into the Google Cloud Platform; and (2) Compute Engine processing: (a) Application Engine automated processing, (b) performance testing and performance scalability, (c) Cloud MySQL databases and database scalability, (d) cloud data storage, (e) redundancy and availability.
High Intensity Laser Power Beaming Architecture for Space and Terrestrial Missions
NASA Technical Reports Server (NTRS)
Nayfeh, Taysir; Fast, Brian; Raible, Daniel; Dinca, Dragos; Tollis, Nick; Jalics, Andrew
2011-01-01
High Intensity Laser Power Beaming (HILPB) has been developed as a technique to achieve Wireless Power Transmission (WPT) for both space and terrestrial applications. In this paper, the system architecture and hardware results for a terrestrial application of HILPB are presented. These results demonstrate continuous conversion of high-intensity optical energy at near-IR wavelengths directly to electrical energy at output power levels as high as 6.24 W from a single-cell 0.8 cm2 aperture receiver. These results are scalable, and may be realized by implementing receiver arraying and utilizing higher-power source lasers. This type of system would enable long-range optical refueling of electric platforms, such as MUAVs, airships and robotic exploration missions, and provide power to spacecraft platforms which may utilize it to drive electric means of propulsion.
Scalable Online Network Modeling and Simulation
2005-08-01
ONLINE NETWORK MODELING AND SIMULATION 6. AUTHOR(S) Boleslaw Szymanski , Shivkumar Kalyanaraman, Biplab Sikdar and Christopher Carothers 5...performance for a wide range of parameter values (parameter sensitivity), understanding of protocol stability and dynamics, and studying feature ...a wide range of parameter values (parameter sensitivity), understanding of protocol stability and dynamics, and studying feature interactions
Extensions under development for the HEVC standard
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.
2013-09-01
This paper discusses standardization activities for extending the capabilities of the High Efficiency Video Coding (HEVC) standard - the first edition of which was completed in early 2013. These near-term extensions are focused on three areas: range extensions (such as enhanced chroma formats, monochrome video, and increased bit depth), bitstream scalability extensions for spatial and fidelity scalability, and 3D video extensions (including stereoscopic/multi-view coding, and probably also depth map coding and combinations thereof). Standardization extensions on each of these topics will be completed by mid-2014, and further work beyond that timeframe is also discussed.
A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Z.; Hodgson, M.; Li, W.
2016-12-01
Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over a large spatial extent. Such data are important for Earth and ecological sciences and for natural-disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to its data and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high-performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, the framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools, and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe the proposed framework provides valuable references for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.
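The tile-based spatial index idea can be sketched as follows: points are keyed by the tile they fall in, so a task covering one spatial extent reads only the overlapping tiles. The tile size, point layout, and in-memory dictionary are illustrative assumptions standing in for the paper's Hadoop-backed implementation.

```python
# Hedged sketch of a tile-based spatial index for LiDAR returns:
# bucket points by tile so work can be dispatched tile-by-tile in parallel.

from collections import defaultdict

TILE = 100.0  # tile edge length in the data's coordinate units (assumed)

def tile_key(x, y):
    """Integer tile coordinates for a planimetric position."""
    return (int(x // TILE), int(y // TILE))

def build_index(points):
    """Group (x, y, z) LiDAR returns by tile for parallel dispatch."""
    index = defaultdict(list)
    for x, y, z in points:
        index[tile_key(x, y)].append((x, y, z))
    return index

def tiles_for_extent(xmin, ymin, xmax, ymax):
    """All tile keys overlapping a rectangular query extent."""
    x0, y0 = tile_key(xmin, ymin)
    x1, y1 = tile_key(xmax, ymax)
    return [(i, j) for i in range(x0, x1 + 1) for j in range(y0, y1 + 1)]

points = [(12.0, 40.0, 3.1), (150.0, 20.0, 2.7), (260.0, 310.0, 5.0)]
index = build_index(points)
needed = tiles_for_extent(0, 0, 199, 99)          # only two tiles touched
hits = [p for key in needed for p in index.get(key, [])]
```

Because each tile is an independent unit of work, both of the paper's decomposition strategies reduce to deciding how tiles (or groups of tiles) are assigned to workers.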
Scalable Method to Produce Biodegradable Nanoparticles that Rapidly Penetrate Human Mucus
Xu, Qingguo; Boylan, Nicholas J.; Cai, Shutian; Miao, Bolong; Patel, Himatkumar; Hanes, Justin
2013-01-01
Mucus typically traps and rapidly removes foreign particles from the airways, gastrointestinal tract, nasopharynx, female reproductive tract and the surface of the eye. Nanoparticles capable of rapid penetration through mucus can potentially avoid rapid clearance, and open significant opportunities for controlled drug delivery at mucosal surfaces. Here, we report an industrially scalable emulsification method to produce biodegradable mucus-penetrating particles (MPP). The emulsification of diblock copolymers of poly(lactic-co-glycolic acid) and polyethylene glycol (PLGA-PEG) using low molecular weight (MW) emulsifiers forms dense brush PEG coatings on nanoparticles that allow rapid nanoparticle penetration through fresh undiluted human mucus. In comparison, conventional high MW emulsifiers, such as polyvinyl alcohol (PVA), interrupt the PEG coating on nanoparticles, resulting in their immobilization in mucus owing to adhesive interactions with mucus mesh elements. PLGA-PEG nanoparticles with a wide range of PEG MW (1, 2, 5, and 10 kDa), prepared by the emulsification method using low MW emulsifiers, all rapidly penetrated mucus. A range of drugs, from hydrophobic small molecules to hydrophilic large biologics, can be efficiently loaded into biodegradable MPP using the method described. This readily scalable method should facilitate the production of MPP products for mucosal drug delivery, as well as potentially longer-circulating particles following intravenous administration. PMID:23751567
Flow Synthesis of Diaryliodonium Triflates
2017-01-01
A safe and scalable synthesis of diaryliodonium triflates was achieved using a practical continuous-flow design. A wide array of electron-rich to electron-deficient arenes could readily be transformed to their respective diaryliodonium salts on a gram scale, with residence times varying from 2 to 60 s (44 examples). PMID:28695736
A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells.
Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel
2016-03-09
In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack, and it is scalable in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.
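The per-cell monitoring loop can be sketched as a multiplexed scan that reads every cell's voltage once and flags cells below a threshold. The read function, channel count, and threshold value are assumptions standing in for the Arduino/LabVIEW acquisition path, not the authors' firmware.

```python
# Illustrative monitoring-loop sketch: scan a multiplexed bank of cell
# voltage channels, record each reading, and flag undervoltage cells.

UNDERVOLTAGE = 0.60  # volts; assumed threshold for a sagging cell

def read_cell(channel, samples):
    """Stand-in for a multiplexed ADC read of one cell's voltage."""
    return samples[channel]

def scan_stack(samples):
    """Read every cell once; return (readings, flagged channel indices)."""
    readings = [read_cell(ch, samples) for ch in range(len(samples))]
    flagged = [ch for ch, v in enumerate(readings) if v < UNDERVOLTAGE]
    return readings, flagged

# One simulated scan of a 6-cell stack.
stack = [0.71, 0.69, 0.55, 0.70, 0.68, 0.72]
readings, flagged = scan_stack(stack)
```

Scaling from 1 to 120 cells then amounts to widening the multiplexer bank and lengthening the scan, which is why the approach stretches from watts to kilowatts without redesign.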
A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells
Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel
2016-01-01
In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack, and it is scalable in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino, and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
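The multiple-kernel idea behind such methods can be sketched as follows: each heterogeneous data source contributes a kernel (similarity) matrix over the same set of genes, a nonnegative weight per source combines them into one kernel, and candidates are ranked by similarity to known disease genes. The toy matrices, fixed weights, and naive ranking are illustrative assumptions; Scuba learns its weights by optimizing the margin distribution.

```python
# Minimal multiple-kernel combination and candidate-gene ranking sketch.

def combine_kernels(kernels, weights):
    """Weighted sum of kernel matrices (same genes, same ordering)."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

def rank_candidates(K, seed_idx, candidate_idx):
    """Rank candidates by summed similarity to known disease genes."""
    scores = {c: sum(K[c][s] for s in seed_idx) for c in candidate_idx}
    return sorted(scores, key=scores.get, reverse=True)

# Two toy data sources over three genes (gene 0 is a known disease gene).
K_expr = [[1.0, 0.8, 0.1], [0.8, 1.0, 0.2], [0.1, 0.2, 1.0]]  # co-expression
K_ppi  = [[1.0, 0.3, 0.6], [0.3, 1.0, 0.1], [0.6, 0.1, 1.0]]  # interactions
K = combine_kernels([K_expr, K_ppi], [0.7, 0.3])
ranking = rank_candidates(K, seed_idx=[0], candidate_idx=[1, 2])
```

Adding a data source means adding one matrix and one weight, which is why kernel combination handles an arbitrary number of heterogeneous sources so naturally.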
Anonymous broadcasting of classical information with a continuous-variable topological quantum code
NASA Astrophysics Data System (ADS)
Menicucci, Nicolas C.; Baragiola, Ben Q.; Demarie, Tommaso F.; Brennen, Gavin K.
2018-03-01
Broadcasting information anonymously becomes more difficult as surveillance technology improves, but remarkably, quantum protocols exist that enable provably traceless broadcasting. The difficulty is making scalable entangled resource states that are robust to errors. We propose an anonymous broadcasting protocol that uses a continuous-variable surface-code state that can be produced using current technology. High squeezing enables large transmission bandwidth and strong anonymity, and the topological nature of the state enables local error mitigation.
Li, Jin; Lindley-Start, Jack; Porch, Adrian; Barrow, David
2017-07-24
High-specification polymer capsules, to produce inertial fusion energy (IFE) targets, were continuously fabricated using surfactant-free inertial centralisation and ultrafast polymerisation in a scalable flow reactor. Laser-driven inertial confinement fusion depends upon the interaction of high-energy lasers and hydrogen isotopes, contained within small, spherical and concentric target shells, causing a nuclear fusion reaction at ~150 million °C. Potentially, targets will be consumed at ~1 million per day per reactor, demanding a 5000× unit cost reduction to ~$0.20, which is a critical challenge. Experimentally, double emulsions were used as templates for capsule shells and were formed at 20 Hz on a fluidic chip. Droplets were centralised in a dynamic flow, and their shapes were both evaluated and mathematically modeled before subsequent shell solidification. The shells were photo-cured individually, on-the-fly, with precisely actuated, millisecond-length (70 ms), uniform-intensity UV pulses delivered through eight radially orchestrated light-pipes. The near-100% yield of uniform shells had a minimum 99.0% concentricity and sphericity, and the solidification processing period was significantly reduced over conventional batch methods. The data suggest the new possibility of a continuous, on-the-fly IFE target fabrication process, employing sequential processing operations within a continuous enclosed duct system, which may include cryogenic fuel-filling and shell curing, to produce ready-to-use IFE targets.
Scalable Light Module for Low-Cost, High-Efficiency Light- Emitting Diode Luminaires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarsa, Eric
2015-08-31
During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light-emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill-of-materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.
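The luminaire-level figures above follow from simple arithmetic: system efficacy is LED efficacy times module optical efficiency, and the normalized cost metric divides fixture cost by kilolumens delivered. A small sketch under those definitions (the function names are illustrative):

```python
def luminaire_efficacy(led_lm_per_w, optical_efficiency):
    """Delivered luminaire efficacy: LED efficacy scaled by optical losses."""
    return led_lm_per_w * optical_efficiency

def normalized_cost(cost_usd, output_lumens):
    """Normalized luminaire cost in $/klm, the program's cost metric."""
    return cost_usd / (output_lumens / 1000.0)
```

With the reported >150 lm/W LEDs and >90% module optical efficiency, this arithmetic gives >135 lm/W at the luminaire level.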
Dennehy, Olga C; Cacheux, Valérie M Y; Deadman, Benjamin J; Lynch, Denis
2016-01-01
A continuous process strategy has been developed for the preparation of α-thio-β-chloroacrylamides, a class of highly versatile synthetic intermediates. Flow platforms to generate the α-chloroamide and α-thioamide precursors were successfully adopted, progressing from the previously employed batch chemistry, and in both instances afford a readily scalable methodology. The implementation of the key α-thio-β-chloroacrylamide cascade as a continuous flow reaction on a multi-gram scale is described, while the tuneable nature of the cascade, facilitated by continuous processing, is highlighted by selective generation of established intermediates and byproducts. PMID:28144320
Electrically driven deep ultraviolet MgZnO lasers at room temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suja, Mohammad; Bashar, Sunayna Binte; Debnath, Bishwajit; ...
2017-06-01
Semiconductor lasers in the deep ultraviolet (UV) range have numerous potential applications ranging from water purification and medical diagnosis to high-density data storage and flexible displays. Nevertheless, very little success has been achieved in the realization of electrically driven deep UV semiconductor lasers to date. Here, we report the fabrication and characterization of deep UV MgZnO semiconductor lasers. These lasers are operated in continuous-current mode at room temperature and the shortest wavelength reaches 284 nm. The wide-bandgap MgZnO thin films with various Mg mole fractions were grown on c-sapphire substrate using radio-frequency plasma-assisted molecular beam epitaxy. Metal-semiconductor-metal (MSM) random laser devices were fabricated using lithography and metallization processes. Besides the demonstration of scalable emission wavelength, very low threshold current densities of 29-33 A/cm2 are achieved. Furthermore, numerical modeling reveals that the impact ionization process is responsible for the generation of hole carriers in the MgZnO MSM devices. The interaction of electrons and holes leads to radiative excitonic recombination and subsequent coherent random lasing.
Robot formation control in stealth mode with scalable team size
NASA Astrophysics Data System (ADS)
Yu, Hongjun; Shi, Peng; Lim, Cheng-Chew
2016-11-01
In situations where robots need to remain electromagnetically silent in a formation, communication channels become unavailable. Moreover, as passive displacement sensors are used, limited sensing ranges are inevitable due to power insufficiency and limited noise reduction. To address the formation control problem for a scalable team of robots subject to the above restrictions, a flexible strategy is necessary. In this paper, under the assumption that data transmission among the robots is not available, a novel controller and a protocol are designed that do not rely on communication. As the controller only drives the robots to a partially desired formation, a distributed coordination protocol is proposed to resolve the imperfections. It is shown that the effectiveness of the controller and the protocol relies on the formation connectivity, and a condition is given on the sensing range. Simulations are conducted to illustrate the feasibility and advantages of the new design scheme developed.
NASA Astrophysics Data System (ADS)
Ab-Rahman, Mohammad Syuhaimi; Swedan, Abdulhameed Almabrok
2017-12-01
The emergence of new services and data exchange applications has increased the demand for bandwidth among individuals and commercial business users in the access area. Thus, vendors of optical access networks should achieve a high-capacity system. This study demonstrates the performance of an integrated configuration of one-to-four multi-wavelength conversion at 10 Gb/s based on cross-phase modulation using a semiconductor optical amplifier integrated with a Mach-Zehnder interferometer. The OptiSystem simulation tool is used to simulate and demonstrate one-to-four wavelength conversion using one modulated wavelength and four probes from continuous-wave sources. The wavelength conversion processes are confirmed through investigation of the input and output characteristics, optical signal-to-noise ratio, conversion efficiency, and extinction ratio of the new modulated channels after separation by demultiplexing. The outcomes of the proposed system using a single channel indicate that the capacity can increase from 10 Gb/s to 50 Gb/s, with the maximum number of access points increasing from 64 to 320 (each point with 156.25 Mb/s bandwidth). A splitting ratio of 1:16 provides each client with 625 Mb/s for a total of 80 users. The Q-factor and bit error rate curves are investigated to confirm and validate the modified scheme and prove the system performance of the full topology of 25 km with a 1:64 splitter. The outcomes are within the acceptable range to provide system scalability.
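The capacity figures above can be checked with simple arithmetic: one modulated wavelength plus four converted wavelengths gives five 10 Gb/s channels, and dividing the aggregate by the number of access points recovers the per-user bandwidths quoted. A sketch (function names are illustrative):

```python
def aggregate_capacity(channels, line_rate_bps):
    """Total capacity: number of wavelength channels times per-channel line rate."""
    return channels * line_rate_bps

def per_user_bandwidth(total_bps, users):
    """Bandwidth per access point or client when shared evenly."""
    return total_bps / users

# 1 modulated wavelength + 4 converted probes = 5 channels at 10 Gb/s each,
# i.e. 50 Gb/s aggregate; shared over 320 points this is 156.25 Mb/s each,
# and a 1:16 split of one 10 Gb/s channel gives 625 Mb/s per client.
```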
NASA Astrophysics Data System (ADS)
Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.
2014-12-01
The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.
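The weak- and strong-scaling tests mentioned above reduce to standard parallel efficiency metrics. A minimal sketch of those definitions (function names are illustrative; the paper's own benchmark configuration is not reproduced here):

```python
def strong_scaling_efficiency(t_ref, p_ref, t_p, p):
    """Strong scaling: fixed problem size, more processes.

    Efficiency of 1.0 means runtime fell in exact proportion to the
    added processes relative to the reference run.
    """
    return (t_ref * p_ref) / (t_p * p)

def weak_scaling_efficiency(t_ref, t_p):
    """Weak scaling: problem size grows with process count.

    Ideal weak scaling keeps runtime constant, giving efficiency 1.0.
    """
    return t_ref / t_p
```

An operator that "scales efficiently for any number of processes," as reported for the GEOS-Chem chemistry, keeps these ratios near 1.0 as p grows, which is why its relative cost falls at fine resolution.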
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
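The model above represents a conditional intensity as a product of per-tree factors, which is why parameters grow linearly in node splits rather than exponentially in the number of parents. A minimal sketch of that multiplicative step and the standard continuous-time transition log-likelihood it feeds into (names are illustrative; the paper's closed-form forest updates are more involved):

```python
import math  # math.prod requires Python 3.8+

def forest_intensity(tree_factors):
    """Conditional intensity modeled as a product of per-tree multiplicative factors."""
    return math.prod(tree_factors)

def transition_loglik(q, dwell_time):
    """Log-likelihood of one transition after an exponential dwell under intensity q:
    log q - q * t, the usual continuous-time Markov term."""
    return math.log(q) - q * dwell_time
```

Because the intensity factorizes, updating one tree's factor rescales the product rather than rebuilding a full conditional intensity matrix, which is the source of the efficiency gain.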
Wilson, David R; Mosenia, Arman; Suprenant, Mark P; Upadhya, Rahul; Routkevitch, Denis; Meyer, Randall A; Quinones-Hinojosa, Alfredo; Green, Jordan J
2017-06-01
Translation of biomaterial-based nanoparticle formulations to the clinic faces significant challenges including efficacy, safety, consistency and scale-up of manufacturing, and stability during long-term storage. Continuous microfluidic fabrication of polymeric nanoparticles has the potential to alleviate the challenges associated with manufacture, while offering a scalable solution for clinical-level production. Poly(beta-amino ester)s (PBAEs) are a class of biodegradable cationic polymers that self-assemble with anionic plasmid DNA to form polyplex nanoparticles that have been shown to be effective for transfecting cancer cells specifically in vitro and in vivo. Here, we demonstrate the use of a microfluidic device for the continuous and scalable production of PBAE/DNA nanoparticles, followed by lyophilization and long-term storage, that results in improved in vitro efficacy in multiple cancer cell lines compared to nanoparticles produced by bulk mixing, as well as to the widely used commercial transfection reagents polyethylenimine and Lipofectamine® 2000. We further characterized the nanoparticles using nanoparticle tracking analysis (NTA) to show that microfluidic mixing resulted in fewer DNA-free polymeric nanoparticles than bulk mixing. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 105A: 1813-1825, 2017.
Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot
2014-01-01
Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295
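The timing comparisons above reduce to two simple ratios: a speedup (CPU time over GPU time) and a real-time factor (wall-clock time over simulated time, where values above 1 mean slower than real time). A sketch (function names are illustrative):

```python
def projected_time(cpu_seconds, speedup):
    """Runtime expected on the faster device given a measured speedup factor."""
    return cpu_seconds / speedup

def realtime_factor(wall_seconds, simulated_seconds):
    """Wall-clock time per simulated second; > 1 means slower than real time."""
    return wall_seconds / simulated_seconds
```

Under the reported figures, a 9-17× GPU speedup on the ODE stage still leaves the PDE tissue-coupling solve at a real-time factor of about 160, which is the remaining gap to real-time whole-heart simulation.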
Taming tosyl azide: the development of a scalable continuous diazo transfer process.
Deadman, Benjamin J; O'Mahony, Rosella M; Lynch, Denis; Crowley, Daniel C; Collins, Stuart G; Maguire, Anita R
2016-04-07
Heat and shock sensitive tosyl azide was generated and used on demand in a telescoped diazo transfer process. Small quantities of tosyl azide were accessed in a 'one pot' batch procedure using shelf stable, readily available reagents. For large scale diazo transfer reactions tosyl azide was generated and used in a telescoped flow process, to mitigate the risks associated with handling potentially explosive reagents on scale. The in situ formed tosyl azide was used to rapidly perform diazo transfer to a range of acceptors, including β-ketoesters, β-ketoamides, malonate esters and β-ketosulfones. An effective in-line quench of sulfonyl azides was also developed, whereby a sacrificial acceptor molecule ensured complete consumption of any residual hazardous diazo transfer reagent. The telescoped diazo transfer process with in-line quenching was used to safely prepare over 21 g of an α-diazocarbonyl in >98% purity without any column chromatography.
Skills Training to Avoid Inadvertent Plagiarism: Results from a Randomised Control Study
ERIC Educational Resources Information Center
Newton, Fiona J.; Wright, Jill D.; Newton, Joshua D.
2014-01-01
Plagiarism continues to be a concern within academic institutions. The current study utilised a randomised control trial of 137 new entry tertiary students to assess the efficacy of a scalable short training session on paraphrasing, patch writing and plagiarism. The results indicate that the training significantly enhanced students' overall…
Algorithmic psychometrics and the scalable subject.
Stark, Luke
2018-04-01
Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.
NASA Astrophysics Data System (ADS)
Forrest, C. J.; Radha, P. B.; Knauer, J. P.; Glebov, V. Yu.; Goncharov, V. N.; Regan, S. P.; Rosenberg, M. J.; Sangster, T. C.; Shmayda, W. T.; Stoeckl, C.; Gatu Johnson, M.
2017-03-01
The deuterium-tritium (D-T) and deuterium-deuterium neutron yield ratio in cryogenic inertial confinement fusion (ICF) experiments is used to examine multifluid effects, traditionally not included in ICF modeling. This ratio has been measured for ignition-scalable direct-drive cryogenic DT implosions at the Omega Laser Facility [T. R. Boehly et al., Opt. Commun. 133, 495 (1997), 10.1016/S0030-4018(96)00325-2] using a high-dynamic-range neutron time-of-flight spectrometer. The experimentally inferred yield ratio is consistent with both the calculated values of the nuclear reaction rates and the measured preshot target-fuel composition. These observations indicate that the physical mechanisms that have been proposed to alter the fuel composition, such as species separation of the hydrogen isotopes [D. T. Casey et al., Phys. Rev. Lett. 108, 075002 (2012), 10.1103/PhysRevLett.108.075002], are not significant during the period of peak neutron production in ignition-scalable cryogenic direct-drive DT implosions.
Zhou, Weizheng; Tong, Gangsheng; Wang, Dali; Zhu, Bangshang; Ren, Yu; Butler, Michael; Pelan, Eddie; Yan, Deyue; Zhu, Xinyuan; Stoyanov, Simeon D
2016-04-06
Hierarchical porous structures are ubiquitous in biological organisms and inorganic systems. Although such structures have been replicated, designed, and fabricated, they are often inferior to naturally occurring analogues. Apart from the complexity and multiple functionalities developed by the biological systems, the controllable and scalable production of hierarchically porous structures and building blocks remains a technological challenge. Herein, a facile and scalable approach is developed to fabricate hierarchical hollow spheres with integrated micro-, meso-, and macropores ranging from 1 nm to 100 μm (spanning five orders of magnitude). (Macro)molecules, micro-rods (which play a key role in the creation of robust capsules), and emulsion droplets have been successfully employed as multiple-length-scale templates, allowing the creation of hierarchical porous macrospheres. Thanks to their specific mechanical strength, these hierarchical porous spheres could be incorporated and assembled as higher-level building blocks in various novel materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Scalability of Robotic Controllers: Speech-Based Robotic Controller Evaluation
2009-06-01
2016-02-08
DDML Schema Validation, RCC 126-16, February 2016. Acronyms defined: DDML (Data Display Markup Language), HUD (heads-up display), IRIG (Inter-Range Instrumentation Group), RCC (Range Commanders Council), SVG (Scalable Vector Graphics), T&E (test and evaluation), TMATS (Telemetry Attributes Transfer Standard), XML (eXtensible Markup Language).
NASA Astrophysics Data System (ADS)
Sturtevant, C.; Hackley, S.; Lee, R.; Holling, G.; Bonarrigo, S.
2017-12-01
Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. Data quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from humans or the natural environment. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process heavily reliant on visual inspection of data. In addition, notes of measurement interference are often recorded on paper without an explicit pathway to data flagging. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. We present a scalable QA/QC framework in development for NEON that combines the efficiency and standardization of automated checks with the power and flexibility of human review. This framework includes fast-response monitoring of sensor health, a mobile application for electronically recording maintenance activities, traditional point-based automated quality flagging, and continuous monitoring of quality outcomes and longer-term holistic evaluations. This framework maintains the traceability of quality information along the entirety of the data generation pipeline, and explicitly links field reports of measurement interference to quality flagging. Preliminary results show that data quality can be effectively monitored and managed for a multitude of sites with a small group of QA/QC staff. Several components of this framework are open-source, including an R Shiny application for efficiently monitoring, synthesizing, and investigating data quality issues.
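Point-based automated quality flagging of the kind described typically starts with plausibility and persistence-style tests on each data stream. A minimal sketch of two such checks (the 0/1 flag convention and function names are illustrative assumptions, not NEON's actual algorithms):

```python
def range_flag(values, lo, hi):
    """Flag samples outside physically plausible bounds (1 = fail, 0 = pass)."""
    return [0 if lo <= v <= hi else 1 for v in values]

def step_flag(values, max_step):
    """Flag implausible jumps between consecutive samples (1 = fail, 0 = pass)."""
    flags = [0]  # the first sample has no predecessor to compare against
    for a, b in zip(values, values[1:]):
        flags.append(1 if abs(b - a) > max_step else 0)
    return flags
```

Per-point flags like these are what a human reviewer then synthesizes, alongside field maintenance reports, into the holistic quality evaluations the framework describes.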
A reconfigurable continuous-flow fluidic routing fabric using a modular, scalable primitive.
Silva, Ryan; Bhatia, Swapnil; Densmore, Douglas
2016-07-05
Microfluidic devices, by definition, are required to move liquids from one physical location to another. Given a finite and frequently fixed set of physical channels to route fluids, a primitive design element that allows reconfigurable routing of fluid from any of n input ports to any of n output ports will dramatically change the paradigms by which these chips are designed and applied. Furthermore, if these elements are "regular" in their design, the programming and fabrication of these elements become scalable. This paper presents such a design element called a transposer. We illustrate the design, fabrication and operation of a single transposer. We then scale this design to create a programmable fabric towards a general-purpose, reconfigurable microfluidic platform analogous to the Field Programmable Gate Array (FPGA) found in digital electronics.
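Functionally, a transposer fabric realizes an arbitrary permutation from n input ports to n output ports. A minimal sketch of that routing abstraction (the class and method names are illustrative; the physical device is a continuous-flow fluidic element, not software):

```python
class Transposer:
    """Reconfigurable n-to-n routing: any input port to any output port."""

    def __init__(self, n):
        self.perm = list(range(n))  # identity routing by default

    def configure(self, perm):
        """Program the fabric with a permutation: input i routes to output perm[i]."""
        assert sorted(perm) == list(range(len(self.perm))), "must be a permutation"
        self.perm = list(perm)

    def route(self, inputs):
        """Apply the configured permutation to the input streams."""
        out = [None] * len(inputs)
        for i, dst in enumerate(self.perm):
            out[dst] = inputs[i]
        return out
```

The FPGA analogy holds at this level: a regular array of such elements, each independently programmable, composes into arbitrary routing topologies.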
NASA Astrophysics Data System (ADS)
Baynes, K.; Gilman, J.; Pilone, D.; Mitchell, A. E.
2015-12-01
The NASA EOSDIS (Earth Observing System Data and Information System) Common Metadata Repository (CMR) is a continuously evolving metadata system that merges all existing capabilities and metadata from the EOS Clearing House (ECHO) and Global Change Master Directory (GCMD) systems. This flagship catalog has been developed with several key requirements: fast search and ingest performance; the ability to integrate heterogeneous external inputs and outputs; high availability and resiliency; scalability; and evolvability and expandability. This talk will focus on the advantages and potential challenges of tackling these requirements using a microservices architecture, which decomposes system functionality into smaller, loosely coupled, individually scalable elements that communicate via well-defined APIs. In addition, time will be spent examining specific elements of the CMR architecture and identifying opportunities for future integrations.
Integrated Avionics System (IAS)
NASA Technical Reports Server (NTRS)
Hunter, D. J.
2001-01-01
As spacecraft designs converge toward miniaturization, and with the volumetric and mass constraints placed on avionics, programs will continue to advance the state of the art in spacecraft systems development with new challenges to reduce power, mass, and volume. Although new technologies have improved packaging densities, a total system packaging architecture is required that not only reduces spacecraft volume and mass budgets but also increases integration efficiency and provides the modularity and scalability to accommodate multiple missions. With these challenges in mind, a novel packaging approach incorporates solutions that provide broader environmental applications, more flexible system interconnectivity, scalability, and simplified assembly, test, and integration schemes. This paper will describe the fundamental elements of the Integrated Avionics System (IAS), the Horizontally Mounted Cube (HMC) hardware design, and system and environmental test results. Additional information is contained in the original extended abstract.
Scalable loading of a two-dimensional trapped-ion array
Bruzewicz, Colin D.; McConnell, Robert; Chiaverini, John; Sage, Jeremy M.
2016-01-01
Two-dimensional arrays of trapped-ion qubits are attractive platforms for scalable quantum information processing. Sufficiently rapid reloading capable of sustaining a large array, however, remains a significant challenge. Here, with the use of a continuous flux of pre-cooled neutral atoms from a remotely located source, we achieve fast loading of a single ion per site while maintaining long trap lifetimes and without disturbing the coherence of an ion quantum bit in an adjacent site. This demonstration satisfies all major criteria necessary for loading and reloading extensive two-dimensional arrays, as will be required for large-scale quantum information processing. Moreover, the already high loading rate can be increased by loading ions in parallel, with only a concomitant increase in photo-ionization laser power and no need for additional atomic flux. PMID:27677357
Scalable Multiprocessor for High-Speed Computing in Space
NASA Technical Reports Server (NTRS)
Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard
2004-01-01
A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with the analog instrumentation and controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial off-the-shelf generic DSP (digital signal processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
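The synchronization scheme described, in which a broadcast master time signal lets each processor compute its own clock offset, can be sketched in a few lines (the function names and explicit link-delay parameter are illustrative assumptions, not details from the report):

```python
def clock_offset(master_broadcast_time, local_receive_time, link_delay=0.0):
    """Local-minus-master clock offset inferred from a broadcast master timestamp.

    link_delay models the (here assumed known) propagation time of the
    broadcast over the serial links.
    """
    return local_receive_time - (master_broadcast_time + link_delay)

def to_master_time(local_time, offset):
    """Translate a local timestamp into the master time base using the offset."""
    return local_time - offset
```

Once every processor holds its offset, locally timestamped events can be aligned to the master clock, which is what allows I/O to be coordinated to within fractions of a microsecond.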
CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, John; Edwards, Jim; Evans, Kate J
2012-01-01
The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high-resolution simulations of 1/4 degree, 1/8 degree, and T340 spectral truncation.
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-03-21
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. To verify the suggested platform, scalability performance under increasing numbers of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture were also evaluated.
Motivation for Knowledge Sharing by Expert Participants in Company-Hosted Online User Communities
ERIC Educational Resources Information Center
Cheng, Jingli
2014-01-01
Company-hosted online user communities are increasingly popular as firms continue to search for ways to provide their customers with high-quality and reliable support in a low-cost and scalable way. Yet, empirical understanding of motivations for knowledge sharing in this type of online community is lacking, especially with regard to an…
Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Patrick G.
2015-02-01
In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.
NASA Technical Reports Server (NTRS)
Huang, Adam
2016-01-01
The goal of the Solid State Inflation Balloon Active Deorbiter project is to develop and demonstrate a scalable, simple, reliable, and low-cost active deorbiting system capable of controlling the downrange point of impact for the full-range of small satellites from 1 kg to 180 kg. The key enabling technology being developed is the Solid State Gas Generator (SSGG) chip, generating pure nitrogen gas from sodium azide (NaN3) micro-crystals. Coupled with a metalized nonelastic drag balloon, the complete Solid State Inflation Balloon (SSIB) system is capable of repeated inflation/deflation cycles. The SSGG minimizes size, weight, electrical power, and cost when compared to the current state of the art.
Cloud Computing in Support of Synchronized Disaster Response Operations
2010-09-01
scalable, Web application based on cloud computing technologies to facilitate communication between a broad range of public and private entities without...requiring them to compromise security or competitive advantage. The proposed design applies the unique benefits of cloud computing architectures such as
Down-bore two-laser heterodyne velocimetry of an implosion-driven hypervelocity launcher
NASA Astrophysics Data System (ADS)
Hildebrand, Myles; Huneault, Justin; Loiseau, Jason; Higgins, Andrew J.
2017-01-01
The implosion-driven launcher uses explosives to shock-compress helium, driving well-characterized projectiles to velocities exceeding 10 km/s. Projectile masses range from 0.1 to 15 g, and the design shows excellent scalability, reaching similar velocities across different projectile sizes. In the past, velocity measurements were limited to muzzle velocity obtained via high-speed videography as the projectile exits the launch tube. Recently, Photon Doppler Velocimetry (PDV) has demonstrated the ability to continuously measure in-bore velocity, even in the presence of significant blow-by of high-temperature helium propellant past the projectile. While a single-laser system sampled at 40 GS/s with a 13 GHz detector/scope bandwidth is limited to 8 km/s, a two-laser PDV system is developed that uses two lasers operating near 1550 nm to provide velocity measurement capabilities up to 16 km/s with the same bandwidth and sampling rate. The two-laser PDV system is used to obtain a continuous velocity history of the projectile throughout the entire launch cycle. These internal ballistics trajectories are used to compare different advanced concepts aimed at increasing the projectile velocity to well beyond 10 km/s.
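The bandwidth argument above follows from the Doppler beat relation f = 2v/λ: the beat frequency grows with velocity until it exceeds the detector/scope bandwidth, and offsetting the reference laser in optical frequency folds the beat back toward DC. A back-of-the-envelope check, where the specific 10.3 GHz reference offset is our assumption rather than a value from the paper:

```python
# Doppler beat frequency seen by a PDV detector: f = |2*v/lambda - offset|.
# Wavelength and bandwidth figures are from the abstract; the reference
# offset chosen below is an assumed value for illustration.

WAVELENGTH = 1550e-9  # m, telecom-band PDV laser
BANDWIDTH = 13e9      # Hz, detector/scope limit quoted in the abstract

def beat_frequency(velocity, reference_offset_hz=0.0):
    """Beat frequency (Hz) for a given projectile velocity (m/s)."""
    return abs(2.0 * velocity / WAVELENGTH - reference_offset_hz)

# Single-laser system: 8 km/s already produces a ~10.3 GHz beat,
# approaching the 13 GHz bandwidth; 16 km/s (~20.6 GHz) is unmeasurable.
print(beat_frequency(8e3) / 1e9)
print(beat_frequency(16e3) / 1e9)

# Two-laser system: an assumed ~10.3 GHz reference offset folds the
# 16 km/s beat back inside the same 13 GHz bandwidth.
print(beat_frequency(16e3, reference_offset_hz=10.3e9) / 1e9)
```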
NASA Astrophysics Data System (ADS)
Kiessling, J.; Breunig, I.; Schunemann, P. G.; Buse, K.; Vodopyanov, K. L.
2013-10-01
We report a diffraction-limited photonic terahertz (THz) source with linewidth <10 MHz that can be used for nonlinear THz studies in the continuous-wave (CW) regime with uninterrupted tunability over a broad range of THz frequencies. THz output is produced in orientation-patterned (OP) gallium arsenide (GaAs) via intracavity frequency mixing between the two closely spaced resonating signal and idler waves of an optical parametric oscillator (OPO) operating near λ = 2 μm. The doubly resonant type II OPO is based on periodically poled lithium niobate (PPLN) pumped by a single-frequency Yb:YAG disc laser at 1030 nm. We take advantage of the enhancement of both optical fields inside a high-finesse OPO cavity: with 10 W of 1030 nm pump, 100 W of intracavity power near 2 μm was attained with GaAs inside the cavity. This allows a dramatic improvement in generated THz power compared to state-of-the-art CW methods. We achieved >25 μW of single-frequency tunable CW THz output power, scalable to >1 mW with a proper choice of pump laser wavelength.
Takeda, Shuntaro; Furusawa, Akira
2017-09-22
We propose a scalable scheme for optical quantum computing using measurement-induced continuous-variable quantum gates in a loop-based architecture. Here, time-bin-encoded quantum information in a single spatial mode is deterministically processed in a nested loop by an electrically programmable gate sequence. This architecture can process any input state and an arbitrary number of modes with almost minimum resources, and offers a universal gate set for both qubits and continuous variables. Furthermore, quantum computing can be performed fault tolerantly by a known scheme for encoding a qubit in an infinite-dimensional Hilbert space of a single light mode.
Dynamic full-scalability conversion in scalable video coding
NASA Astrophysics Data System (ADS)
Lee, Dong Su; Bae, Tae Meon; Thang, Truong Cong; Ro, Yong Man
2007-02-01
For outstanding coding efficiency with scalability functions, SVC (Scalable Video Coding) is being standardized. SVC supports spatial, temporal, and SNR scalability, and these scalabilities are useful for providing a smooth video streaming service even in a time-varying network such as a mobile environment. However, current SVC is insufficient to support dynamic video conversion with scalability, so bitrate adaptation to a fluctuating network condition is limited. In this paper, we propose dynamic full-scalability conversion methods for QoS-adaptive video streaming in SVC. To accomplish dynamic full-scalability conversion, we develop the corresponding bitstream extraction, encoding, and decoding schemes. At the encoder, we insert IDR NAL units periodically to solve the problems of spatial scalability conversion. At the extractor, we analyze the SVC bitstream to obtain the information that enables dynamic extraction; real-time extraction is achieved by using this information. Finally, we develop the decoder so that it can manage the changing scalability. Experimental results verified dynamic full-scalability conversion and showed that it is necessary for time-varying network conditions.
Vocal activity as a low cost and scalable index of seabird colony size
Borker, Abraham L.; McKown, Matthew W.; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Tershy, Bernie R.; Croll, Donald A.
2014-01-01
Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15–111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m2. We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated to relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife.
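The spectrogram cross-correlation step mentioned above can be illustrated with a minimal sketch. This is not the authors' pipeline; it simply slides a call template along the time axis of a spectrogram and flags frames where the normalized correlation exceeds a threshold, which is the core idea behind counting calls automatically.

```python
import numpy as np

# Illustrative spectrogram cross-correlation detector (threshold and
# array sizes are arbitrary, chosen only for the toy example below).

def detect_calls(spectrogram, template, threshold=0.8):
    """Return start frames where the template correlates above threshold."""
    n_freq, n_time = spectrogram.shape
    tf, tt = template.shape
    assert tf == n_freq, "template must span the same frequency bins"
    t_norm = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for start in range(n_time - tt + 1):
        window = spectrogram[:, start:start + tt]
        w_norm = (window - window.mean()) / (window.std() + 1e-12)
        score = np.mean(t_norm * w_norm)  # normalized cross-correlation
        if score > threshold:
            hits.append(start)
    return hits

# Toy example: embed the template itself in a quiet spectrogram.
rng = np.random.default_rng(0)
template = rng.random((16, 8))
spec = rng.random((16, 64)) * 0.1
spec[:, 20:28] = template
print(detect_calls(spec, template))  # the embedded call is found at frame 20
```

Seasonal call rate would then be the number of hits per unit recording time, which the study correlates with nest counts.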
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable, and maintainable as possible; separation-of-concerns and single-responsibility principles guide its implementation. As the forward modelling engine, a modern scalable solver, extrEMe, based on a contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of available computational resources for a given problem statement. To parameterize the inverse domain, a so-called mask parameterization is implemented, meaning that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance, and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (ranked 6th among the world's supercomputers), demonstrate practically linear scalability of the code up to thousands of nodes.
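The gradient-type inversion loop with an adjoint-supplied gradient can be sketched generically. The code below is not the extrEMe/MT solver: a toy linear operator stands in for 3-D forward modelling, and plain gradient descent stands in for the quasi-Newton scheme, but the structure (misfit, regularization, adjoint gradient, iterate) is the same.

```python
import numpy as np

# Generic sketch of a regularized gradient-type inversion loop.
# G is a toy linear forward operator standing in for 3-D MT modelling.

rng = np.random.default_rng(1)
G = rng.random((20, 5))                 # toy forward operator F(m) = G @ m
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
data = G @ m_true                       # synthetic observations

def misfit_and_gradient(m, alpha=1e-3):
    """Regularized least-squares misfit and its adjoint-style gradient."""
    residual = G @ m - data
    phi = 0.5 * residual @ residual + 0.5 * alpha * m @ m
    grad = G.T @ residual + alpha * m   # G.T acts as the adjoint operator
    return phi, grad

m = np.zeros(5)
for _ in range(2000):                   # plain gradient descent for clarity
    phi, grad = misfit_and_gradient(m)
    m -= 0.01 * grad

print(np.round(m, 2))  # m approaches m_true as the misfit is driven down
```

A quasi-Newton scheme such as L-BFGS would replace the fixed-step update with a curvature-informed one, but would consume the same (phi, grad) pairs.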
Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu
2012-06-13
Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.
Chu, Brian C; Carpenter, Aubrey L; Wyszynski, Christopher M; Conklin, Phoebe H; Comer, Jonathan S
2017-01-01
A sizable gap exists between the availability of evidence-based psychological treatments and the number of community therapists capable of delivering such treatments. Limited time, resources, and access to experts prompt the need for easily disseminable, lower cost options for therapist training and continued support beyond initial training. A pilot randomized trial tested scalable extended support models for therapists following initial training. Thirty-five postdegree professionals (43%) or graduate trainees (57%) from diverse disciplines viewed an initial web-based training in cognitive-behavioral therapy (CBT) for youth anxiety and then were randomly assigned to 10 weeks of expert streaming (ES; viewing weekly online supervision sessions of an expert providing consultation), peer consultation (PC; non-expert-led group discussions of CBT), or fact sheet self-study (FS; weekly review of instructional fact sheets). In initial expectations, trainees rated PC as more appropriate and useful to meet its goals than either ES or FS. At post, all support programs were rated as equally satisfactory and useful for therapists' work, and comparable in increasing self-reported use of CBT strategies (b = .19, p = .02). In contrast, negative linear trends were found on a knowledge quiz (b = -1.23, p = .01) and self-reported beliefs about knowledge (b = -1.50, p < .001) and skill (b = -1.15, p < .001). Attrition and poor attendance presented a moderate concern for PC, and ES was rated as having the lowest implementation potential. Preliminary findings encourage further development of low-cost, scalable options for continued support of evidence-based training.
Roll-to-roll production of spray coated N-doped carbon nanotube electrodes for supercapacitors
NASA Astrophysics Data System (ADS)
Karakaya, Mehmet; Zhu, Jingyi; Raghavendra, Achyut J.; Podila, Ramakrishna; Parler, Samuel G.; Kaplan, James P.; Rao, Apparao M.
2014-12-01
Although carbon nanomaterials are being increasingly used in energy storage, there has been a lack of inexpensive, continuous, and scalable synthesis methods. Here, we present a scalable roll-to-roll (R2R) spray coating process for synthesizing randomly oriented multi-walled carbon nanotube electrodes on Al foils. The coin and jellyroll type supercapacitors comprising such electrodes yield high power densities (~700 mW/cm3) and energy densities (1 mW h/cm3) on par with Li-ion thin film batteries. These devices exhibit excellent cycle stability with no loss in performance over more than a thousand cycles. Our cost analysis shows that the R2R spray coating process can produce supercapacitors with 10 times the energy density of conventional activated carbon devices at ~17% lower cost.
Numerical simulation on a straight-bladed vertical axis wind turbine with auxiliary blade
NASA Astrophysics Data System (ADS)
Li, Y.; Zheng, Y. F.; Feng, F.; He, Q. B.; Wang, N. X.
2016-08-01
To improve the starting performance of the straight-bladed vertical axis wind turbine (SB-VAWT) at low wind speed, and the output characteristics at high wind speed, a flexible, scalable auxiliary vane mechanism was designed and installed in the rotor of the SB-VAWT in this study. This new vertical axis wind turbine is a lift-drag combination wind turbine. The flexible blade expands at low rotational speed, where the driving force of the wind turbine comes mainly from drag; at higher speed the flexible blade retracts, and the driving force comes primarily from lift. To study the effects of the flexible, scalable auxiliary module on the performance of the SB-VAWT and to find its best parameters, computational fluid dynamics (CFD) numerical calculations were carried out. The results show that the flexible, scalable blades automatically expand and retract with rotational speed. The moment coefficient at low tip speed ratio increased substantially, and it was also improved at high tip speed ratios within certain ranges.
Forrest, C J; Radha, P B; Knauer, J P; Glebov, V Yu; Goncharov, V N; Regan, S P; Rosenberg, M J; Sangster, T C; Shmayda, W T; Stoeckl, C; Gatu Johnson, M
2017-03-03
The deuterium-tritium (D-T) and deuterium-deuterium neutron yield ratio in cryogenic inertial confinement fusion (ICF) experiments is used to examine multifluid effects, traditionally not included in ICF modeling. This ratio has been measured for ignition-scalable direct-drive cryogenic DT implosions at the Omega Laser Facility [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] using a high-dynamic-range neutron time-of-flight spectrometer. The experimentally inferred yield ratio is consistent with both the calculated values of the nuclear reaction rates and the measured preshot target-fuel composition. These observations indicate that the physical mechanisms that have been proposed to alter the fuel composition, such as species separation of the hydrogen isotopes [D. T. Casey et al., Phys. Rev. Lett. 108, 075002 (2012)], are not significant during the period of peak neutron production in ignition-scalable cryogenic direct-drive DT implosions.
NASA Technical Reports Server (NTRS)
Datta, Anubhav; Johnson, Wayne R.
2009-01-01
This paper has two objectives. The first is to formulate a three-dimensional finite element model for the dynamic analysis of helicopter rotor blades. The second is to implement and analyze a dual-primal iterative substructuring based Krylov solver, which is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size, although this conclusion is premature given the small prototype grids considered in this study.
Statistical exchange-coupling errors and the practicality of scalable silicon donor qubits
NASA Astrophysics Data System (ADS)
Song, Yang; Das Sarma, S.
2016-12-01
Recent experimental efforts have led to considerable interest in donor-based localized electron spins in Si as viable qubits for a scalable silicon quantum computer. With the use of isotopically purified 28Si and the realization of extremely long spin coherence times in single-donor electrons, the recent experimental focus is on two coupled donors, with the eventual goal of a scaled-up quantum circuit. Motivated by this development, we simulate the statistical distribution of the exchange coupling J between a pair of donors under realistic donor placement straggles, and quantify the errors relative to the intended J value. With J values in a broad range of donor-pair separation (5 < |R| < 60 nm), we work out various cases systematically, for a target donor separation R0 along the [001], [110], and [111] Si crystallographic directions, with |R0| = 10, 20, or 30 nm and standard deviation σR = 1, 2, 5, or 10 nm. Our extensive theoretical results demonstrate the great challenge of achieving a prescribed J gate even with just a donor pair, a first step for any scalable Si-donor-based quantum computer.
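The statistical procedure described above can be illustrated with a small Monte Carlo. The real J(R) in Si is oscillatory and anisotropic; the bare exponential envelope, effective Bohr radius, and straggle value below are our assumptions, used only to show how a Gaussian placement error translates into a broad spread of exchange couplings.

```python
import numpy as np

# Toy Monte Carlo of exchange-coupling spread under donor placement
# straggle, assuming J ~ exp(-2R/a_B). Parameters are illustrative.

A_BOHR = 1.8   # nm, assumed effective Bohr radius for the toy envelope
R0 = 20.0      # nm, target donor separation
SIGMA = 2.0    # nm, assumed isotropic placement straggle

rng = np.random.default_rng(42)
# Actual separation vector: target along one axis plus Gaussian error.
samples = np.array([R0, 0.0, 0.0]) + rng.normal(0.0, SIGMA, size=(100_000, 3))
r = np.linalg.norm(samples, axis=1)

# log10 of J relative to its target value under the toy model.
log10_j = np.log10(np.e) * (-2.0 * (r - R0) / A_BOHR)
p5, p50, p95 = np.percentile(log10_j, [5, 50, 95])
print(p5, p50, p95)  # J spreads over several orders of magnitude
```

Even this crude model shows the paper's central difficulty: nanometer-scale placement errors produce order-of-magnitude scatter in J.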
Quicksilver: Middleware for Scalable Self-Regenerative Systems
2006-04-01
Applications can be coded in any of about 25 programming languages, ranging from the obvious ones to some very obscure languages, such as OCaml... technology. Like Tempest, Quicksilver can support applications written in any of a wide range of programming languages supported by .NET. However, whereas... so that developers can work in standard languages and with standard tools and still exploit those solutions. Vendors need to see some success
ERIC Educational Resources Information Center
Busch, Julia; Barzel, Bärbel; Leuders, Timo
2015-01-01
Diagnosing student achievement in a formative way is a crucial skill for planning and carrying out effective mathematics lessons. This study takes a subject-specific view and aims at investigating diagnostic competence in the field of mathematical functions at secondary level and how to improve it. Following three evidence-based design principles,…
SWARMS: Scalable sWarms of Autonomous Robots and Mobile Sensors
2013-03-18
Pasqualetti, Antonio Franchi, Francesco Bullo. On optimal cooperative patrolling, 2010 49th IEEE Conference on Decision and Control (CDC). 2010/12/15 00... exhibits "global stability". Provided a complete convergence proof for the adaptive version of the range-only station keeping problem. Graph Theoretic
ERIC Educational Resources Information Center
Gordon, Dan
2011-01-01
When it comes to implementing innovative classroom technology programs, urban school districts face significant challenges stemming from their big-city status. These range from large bureaucracies, to scalability, to how to meet the needs of a more diverse group of students. Because of their size, urban districts tend to have greater distance…
Semenikhin, Nikolay S; Kadasala, Naveen Reddy; Moon, Robert J; Perry, Joseph W; Sandhage, Kenneth H
2018-04-17
Cellulose nanocrystals (CNCs) can be attractive templates for the generation of functional inorganic/organic nanoparticles, given their fine sizes, aspect ratios, and sustainable worldwide availability in abundant quantities. Here, we present for the first time a scalable, surfactant-free, tailorable wet chemical process for converting commercially available CNCs into individual aspected gold nanoshell-bearing particles with tunable surface plasmon resonance bands. Using a rational cellulose functionalization approach, stable suspensions of positively charged CNCs have been generated. Continuous, conductive, nanocrystalline gold coatings were then applied to the individual, electrostatically stabilized CNCs via decoration with 1-3 nm diameter gold particles followed by electroless gold deposition. Optical analyses indicated that these core-shell nanoparticles exhibited two surface plasmon absorbance bands, with one located in the visible range (near 550 nm) and the other at near infrared (NIR) wavelengths. The NIR band possessed a peak maximum wavelength that could be tuned over a wide range (1000-1300 nm) by adjusting the gold coating thickness. The bandwidth and wavelength of the peak maximum of the NIR band were also sensitive to the particle size distribution and could be further refined by fractionation using viscosity gradient centrifugation.
Soenksen, L R; Kassis, T; Noh, M; Griffith, L G; Trumper, D L
2018-03-13
Precise fluid height sensing in open-channel microfluidics has long been a desirable feature for a wide range of applications. However, performing accurate measurements of the fluid level in small-scale reservoirs (<1 mL) has proven to be an elusive goal, especially if direct fluid-sensor contact needs to be avoided. In particular, gravity-driven systems used in several microfluidic applications to establish pressure gradients and impose flow remain open-loop and largely unmonitored due to these sensing limitations. Here we present an optimized self-shielded coplanar capacitive sensor design and automated control system to provide submillimeter fluid-height resolution (∼250 μm) and control of small-scale open reservoirs without the need for direct fluid contact. Results from testing and validation of our optimized sensor and system also suggest that accurate fluid height information can be used to robustly characterize, calibrate and dynamically control a range of microfluidic systems with complex pumping mechanisms, even in cell culture conditions. Capacitive sensing technology provides a scalable and cost-effective way to enable continuous monitoring and closed-loop feedback control of fluid volumes in small-scale gravity-dominated wells in a variety of microfluidic applications.
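The closed-loop scheme described above (capacitance reading mapped to fluid height, feedback driving the pump) can be sketched conceptually. The linear calibration constants and gain below are made up for illustration; the actual sensor response of the paper's coplanar design is more involved.

```python
# Conceptual sketch of capacitive fluid-height feedback control.
# Calibration constants and the gain are assumed values, not the paper's.

C_EMPTY_PF = 2.0   # assumed capacitance (pF) of the dry reservoir
PF_PER_MM = 0.15   # assumed sensitivity from a linear calibration

def height_mm(capacitance_pf):
    """Invert the linear calibration to estimate fluid height."""
    return (capacitance_pf - C_EMPTY_PF) / PF_PER_MM

def pump_rate(capacitance_pf, setpoint_mm, gain=0.5):
    """Proportional feedback: positive -> add fluid, negative -> drain."""
    return gain * (setpoint_mm - height_mm(capacitance_pf))

# With this calibration, the ~0.25 mm resolution quoted in the abstract
# corresponds to resolving a capacitance change of roughly 0.04 pF.
print(height_mm(2.6))                    # ~4 mm of fluid
print(pump_rate(2.6, setpoint_mm=5.0))   # pump commanded to add fluid
```

A real controller would add integral action and sensor filtering, but the loop structure is the same: read capacitance, infer height, command the pump toward the setpoint.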
Narth, Christophe; Lagardère, Louis; Polack, Étienne; Gresh, Nohad; Wang, Qiantao; Bell, David R; Rackers, Joshua A; Ponder, Jay W; Ren, Pengyu Y; Piquemal, Jean-Philip
2016-02-15
We propose a general coupling of the Smooth Particle Mesh Ewald (SPME) approach for distributed multipoles to a short-range charge penetration correction modifying the charge-charge, charge-dipole, and charge-quadrupole energies. Such an approach significantly improves electrostatics when compared to ab initio values and has been calibrated on Symmetry-Adapted Perturbation Theory reference data. Various neutral molecular dimers have been tested, and results on the complexes of mono- and divalent cations with a water ligand are also provided. Transferability of the correction is addressed in the context of the implementation of the AMOEBA and SIBFA polarizable force fields in the TINKER-HP software. As the choices of the multipolar distribution are discussed, conclusions are drawn for future penetration-corrected polarizable force fields, highlighting the mandatory need for non-spurious procedures for obtaining well-balanced and physically meaningful distributed moments. Finally, scalability and parallelism of the short-range corrected SPME approach are addressed, demonstrating that the damping function is computationally affordable and accurate for molecular dynamics simulations of complex bio- or bioinorganic systems in periodic boundary conditions.
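The idea of a short-range penetration correction can be shown with a toy damping function. The functional form and parameter below are ours, not the paper's calibrated correction: the point-charge 1/r interaction is multiplied by a factor that reduces the energy at short range, where overlapping charge densities screen each other, and leaves the long range untouched.

```python
import math

# Toy short-range damping of the point-charge Coulomb energy
# (functional form and alpha are illustrative assumptions).

def coulomb_point(q1, q2, r):
    """Undamped point-charge interaction (atomic units)."""
    return q1 * q2 / r

def coulomb_damped(q1, q2, r, alpha=2.0):
    """Damped interaction: reduced at short range, unchanged at long range."""
    return q1 * q2 / r * (1.0 - math.exp(-alpha * r))

# The correction matters only below a few bohr.
for r in (0.5, 2.0, 8.0):
    print(r, coulomb_point(1, -1, r), coulomb_damped(1, -1, r))
```

Because the damping factor decays exponentially, the correction can be evaluated alongside the real-space part of SPME at negligible extra cost, which is the scalability point the abstract makes.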
Optimized scalable network switch
Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2007-12-04
In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.
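The bit-vector arbitration idea can be shown schematically. This is not the patented implementation; it only illustrates how combining compact per-direction status bits (link up, downstream buffer not full) with a packet's profitable directions selects an output port without consulting a routing table. Virtual-channel selection, omitted here, would examine further bit vectors the same way.

```python
# Schematic direction selection from compact bit vectors
# (illustrative only; not the patented multilevel arbitration).

# One status bit per direction in a 3-D torus: +x, -x, +y, -y, +z, -z.
DIRECTIONS = ["+x", "-x", "+y", "-y", "+z", "-z"]

def pick_direction(profitable_mask, link_up_bits, buffer_free_bits):
    """Return the first profitable direction whose downstream is usable.

    Each argument is an int used as a bit vector, bit i = DIRECTIONS[i].
    """
    usable = profitable_mask & link_up_bits & buffer_free_bits
    for i, name in enumerate(DIRECTIONS):
        if usable & (1 << i):
            return name
    return None  # stall: no usable direction this arbitration round

# Packet may move in +x or +y; the +x link is down, so +y is chosen.
print(pick_direction(0b000101, link_up_bits=0b111110,
                     buffer_free_bits=0b111111))  # "+y"
```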
NASA Technical Reports Server (NTRS)
Carr, Gregory A.; Iannello, Christopher J.; Chen, Yuan; Hunter, Don J.; DelCastillo, Linda; Bradley, Arthur T.; Stell, Christopher; Mojarradi, Mohammad M.
2013-01-01
This paper presents a concept for a modular and scalable High Temperature Boost (HTB) Power Processing Unit (PPU) capable of operating at temperatures beyond the standard military temperature range. The various extreme-environment technologies are also described as the fundamental technology path to this concept. The proposed HTB PPU is intended for power processing in the area of space solar electric propulsion, where the reduction of in-space mass and volume is desired, and sometimes even critical, to achieve the goals of future space flight missions. The concept of the HTB PPU can also be applied to other extreme environment applications, such as geothermal and petroleum deep-well drilling, where higher temperature operation is required.
High-power fiber-coupled 100W visible spectrum diode lasers for display applications
NASA Astrophysics Data System (ADS)
Unger, Andreas; Küster, Matthias; Köhler, Bernd; Biesenbach, Jens
2013-02-01
Diode lasers in the blue and red spectral range are the most promising light sources for upcoming high-brightness digital projectors in cinemas and large venue displays. They combine improved efficiency, longer lifetime and a greatly improved color space compared to traditional xenon light sources. In this paper we report on high-power visible diode laser sources to serve the demands of this emerging market. A unique electro-optical platform enables scalable fiber coupled sources at 638 nm with an output power of up to 100 W from a 400 μm NA0.22 fiber. For the blue diode laser we demonstrate scalable sources from 5 W to 100 W from a 400 μm NA0.22 fiber.
Kastner, Monika; Sayal, Radha; Oliver, Doug; Straus, Sharon E; Dolovich, Lisa
2017-08-01
Chronic diseases are a significant public health concern, particularly in older adults. To address the delivery of health care services to optimally meet the needs of older adults with multiple chronic diseases, Health TAPESTRY (Teams Advancing Patient Experience: Strengthening Quality) uses a novel approach that involves patient home visits by trained volunteers to collect and transmit relevant health information using e-health technology to inform appropriate care from an inter-professional healthcare team. Health TAPESTRY was implemented, pilot tested, and evaluated in a randomized controlled trial (analysis underway). Knowledge translation (KT) interventions such as Health TAPESTRY should involve an investigation of their sustainability and scalability determinants to inform further implementation. However, this is seldom considered in research, or considered early enough, so the objectives of this study were to assess the sustainability and scalability potential of Health TAPESTRY from the perspective of the team that developed and pilot-tested it. Our objectives were addressed using a sequential mixed-methods approach involving the administration of a validated sustainability survey developed by the National Health Service (NHS) to all members of the Health TAPESTRY team who were actively involved in the development, implementation and pilot evaluation of the intervention (Phase 1: n = 38). Mean sustainability scores were calculated to identify the best potential for improvement across sustainability factors. Phase 2 was a qualitative study of interviews with purposively selected Health TAPESTRY team members to gain a more in-depth understanding of the factors that influence the sustainability and scalability of Health TAPESTRY. Two independent reviewers coded transcribed interviews and completed a multi-step thematic analysis. Outcomes were participant perceptions of the determinants influencing the sustainability and scalability of Health TAPESTRY.
Twenty Health TAPESTRY team members (53% response rate) completed the NHS sustainability survey. The overall mean sustainability score was 64.6 (range 22.8-96.8). Important opportunities for improving sustainability were better staff involvement and training, clinical leadership engagement, and infrastructure for sustainability. Interviews with 25 participants (response rate 60%) showed that factors influencing the sustainability and scalability of Health TAPESTRY emerged across two dimensions: I) Health TAPESTRY operations (development and implementation activities undertaken by the central team); and II) the Health TAPESTRY intervention (factors specific to the intervention and its elements). Resource capacity appears to be an important factor to consider for Health TAPESTRY operations, as it was identified across both sustainability and scalability factors; the perceived lack of interprofessional team and volunteer resource capacity and the need for stakeholder buy-in are important considerations for the Health TAPESTRY intervention. We used these findings to create actionable recommendations to initiate dialogue among Health TAPESTRY team members to improve the intervention. Our study identified sustainability and scalability determinants of the Health TAPESTRY intervention that can be used to optimize its potential for impact. Next steps will involve using the findings to inform a guide that facilitates the sustainability and scalability of Health TAPESTRY in other jurisdictions considering its adoption. Our findings build on the limited current knowledge of sustainability and advance KT science related to the sustainability and scalability of KT interventions.
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-01-01
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated. PMID:24662407
Perspective: The future of quantum dot photonic integrated circuits
NASA Astrophysics Data System (ADS)
Norman, Justin C.; Jung, Daehwan; Wan, Yating; Bowers, John E.
2018-03-01
Direct epitaxial integration of III-V materials on Si offers substantial manufacturing cost and scalability advantages over heterogeneous integration. The challenge is that epitaxial growth introduces high densities of crystalline defects that limit device performance and lifetime. Quantum dot lasers, amplifiers, modulators, and photodetectors epitaxially grown on Si are showing promise for achieving low-cost, scalable integration with silicon photonics. The unique electrical confinement properties of quantum dots provide reduced sensitivity to the crystalline defects that result from III-V/Si growth, while their unique gain dynamics show promise for improved performance and new functionalities relative to their quantum well counterparts in many devices. Clear advantages for using quantum dot active layers for lasers and amplifiers on and off Si have already been demonstrated, and results for quantum dot based photodetectors and modulators look promising. Laser performance on Si is improving rapidly with continuous-wave threshold currents below 1 mA, injection efficiencies of 87%, and output powers of 175 mW at 20 °C. 1500-h reliability tests at 35 °C showed an extrapolated mean-time-to-failure of more than ten million hours. This represents a significant stride toward efficient, scalable, and reliable III-V lasers on on-axis Si substrates for photonic integrated circuits that are fully compatible with complementary metal-oxide-semiconductor (CMOS) foundries.
Shadid, J. N.; Pawlowski, R. P.; Cyr, E. C.; ...
2016-02-10
Here we discuss the computational solution of the governing balance equations for mass, momentum, heat transfer and magnetic induction for resistive magnetohydrodynamics (MHD) systems, which can be extremely challenging. These difficulties arise both from the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena and from the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully-implicit stabilized unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton–Krylov methods, which are preconditioned using fully-coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order of accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems that include MHD duct flows, an unstable hydromagnetic Kelvin–Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results that explore the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
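To make the Newton linearization concrete, here is a minimal toy sketch: Newton's method on a hypothetical 2-equation nonlinear system with a finite-difference Jacobian and a direct 2x2 solve standing in for the paper's preconditioned Krylov linear solver. The residual F and all names are illustrative assumptions, not the MHD equations.

```python
# Toy stand-in for a fully-implicit nonlinear solve: Newton's method with
# a finite-difference Jacobian and a direct 2x2 linear solve. The real
# capability replaces the direct solve with preconditioned Krylov methods.

def F(x):
    # hypothetical 2-equation nonlinear residual (root at x = [1, 1])
    return [x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0]

def fd_jacobian(F, x, eps=1e-7):
    """Approximate the Jacobian column-by-column with forward differences."""
    n, f0 = len(x), F(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x)
        xp[j] += eps
        fj = F(xp)
        for i in range(n):
            J[i][j] = (fj[i] - f0[i]) / eps
    return J

def solve2(J, b):
    """Direct solve of a 2x2 linear system by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(J[1][1] * b[0] - J[0][1] * b[1]) / det,
            (J[0][0] * b[1] - J[1][0] * b[0]) / det]

x = [2.0, 0.5]
for _ in range(20):                     # Newton iterations
    r = F(x)
    if max(abs(v) for v in r) < 1e-10:  # converged
        break
    dx = solve2(fd_jacobian(F, x), [-v for v in r])
    x = [a + d for a, d in zip(x, dx)]
print([round(v, 6) for v in x])  # → [1.0, 1.0]
```

At the scales in the abstract (1.8B unknowns), the linear solve at each Newton step is where the algebraic multilevel preconditioning does the heavy lifting.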
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
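The scalability analysis the abstract refers to can be illustrated with the classic fixed-size-workload form of Amdahl's law; the parallel fractions below are hypothetical, not measured values from the platform.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Fixed-size-workload speedup predicted by Amdahl's law."""
    f = parallel_fraction
    return 1.0 / ((1.0 - f) + f / cores)

# A hypothetical 95%-parallel 3D-MIP pipeline:
print(round(amdahl_speedup(0.95, 12), 2))   # → 7.74 on 12 cores
print(round(amdahl_speedup(0.95, 48), 2))   # → 14.33 on 48 cores
```

The serial fraction caps the achievable speedup (here at 1/0.05 = 20x no matter how many cores), which is why the analysis matters when projecting to larger core counts.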
LoRa Scalability: A Simulation Model Based on Interference Measurements
Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen
2017-01-01
LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act as a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability, in terms of the number of end devices per gateway, of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data. PMID:28545239
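The quoted ~90% pure-Aloha loss can be sanity-checked with the textbook pure-Aloha collision model; the node count, airtime, and message rate below are illustrative assumptions, not the paper's measured parameters.

```python
import math

# Textbook pure-Aloha model: a frame survives only if no other frame
# starts within a 2-frame-time vulnerability window.

def pure_aloha_delivery(G):
    """Delivery probability at offered load G (frames per frame-time)."""
    return math.exp(-2.0 * G)

def offered_load(n_nodes, frames_per_node_per_s, airtime_s):
    """Aggregate offered load, in frames per frame-time (illustrative)."""
    return n_nodes * frames_per_node_per_s * airtime_s

# Hypothetical: 1000 nodes, one 1.15 s-airtime frame per node every 1000 s
G = offered_load(1000, 1.0 / 1000.0, 1.15)
print(round(1.0 - pure_aloha_delivery(G), 2))  # loss fraction → 0.9
```

LoRaWAN's partial capture effect (one of two colliding packets can still be received, as the measurements show) is exactly why its simulated losses stay well below this pure-Aloha bound.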
Direct laser writing of micro-supercapacitors on hydrated graphite oxide films.
Gao, Wei; Singh, Neelam; Song, Li; Liu, Zheng; Reddy, Arava Leela Mohana; Ci, Lijie; Vajtai, Robert; Zhang, Qing; Wei, Bingqing; Ajayan, Pulickel M
2011-07-31
Microscale supercapacitors provide an important complement to batteries in a variety of applications, including portable electronics. Although they can be manufactured using a number of printing and lithography techniques, continued improvements in cost, scalability and form factor are required to realize their full potential. Here, we demonstrate the scalable fabrication of a new type of all-carbon, monolithic supercapacitor by laser reduction and patterning of graphite oxide films. We pattern both in-plane and conventional electrodes consisting of reduced graphite oxide with micrometre resolution, between which graphite oxide serves as a solid electrolyte. The substantial amount of trapped water in the graphite oxide makes it simultaneously a good ionic conductor and an electrical insulator, allowing it to serve as both an electrolyte and an electrode separator with ion transport characteristics similar to those observed for Nafion membranes. The resulting micro-supercapacitor devices show good cyclic stability, and energy storage capacities comparable to existing thin-film supercapacitors.
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
NASA Astrophysics Data System (ADS)
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
NASA Technical Reports Server (NTRS)
Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.
1994-01-01
A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control and that supports a modular payload for multiple applications. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open system architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute position autonomous navigation, and relative position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, like environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
Direct manufacturing of ultrathin graphite on three-dimensional nanoscale features
Pacios, Mercè; Hosseini, Peiman; Fan, Ye; He, Zhengyu; Krause, Oliver; Hutchison, John; Warner, Jamie H.; Bhaskaran, Harish
2016-01-01
There have been many successful attempts to grow high-quality large-area graphene on flat substrates. Doing so at the nanoscale has thus far been plagued by significant scalability problems, particularly because of the need for delicate transfer processes onto predefined features, which are necessarily low-yield processes and which can introduce undesirable residues. Herein we describe a highly scalable, clean and effective in-situ method that uses thin-film deposition techniques to directly and continuously grow ultrathin graphite (uG) on uneven nanoscale surfaces. We then demonstrate that this is possible on a model system of atomic force probe tips of various radii. Further, we characterize the growth characteristics of this technique as well as the film's superior conduction and lower adhesion at these scales. This sets the stage for such a process to allow the use of highly functional graphite in high-aspect-ratio nanoscale components. PMID:26939862
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault-tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
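A toy push-gossip dissemination model (not the paper's detection/consensus algorithms; the fanout, seed, and synchronous-round simplification are arbitrary assumptions) illustrates why the number of gossip cycles grows roughly logarithmically with system size:

```python
import random

# Toy push-gossip dissemination: each informed process pushes to one
# uniformly random peer per synchronous cycle.

def gossip_cycles(n, fanout=1, seed=0):
    """Cycles until every one of n processes is informed."""
    rng = random.Random(seed)
    informed = {0}          # process 0 starts with the information
    cycles = 0
    while len(informed) < n:
        newly = set()
        for _ in range(len(informed) * fanout):
            newly.add(rng.randrange(n))   # push to a random peer
        informed |= newly
        cycles += 1
    return cycles

for n in (64, 1024, 16384):
    print(n, gossip_cycles(n))   # cycle count grows roughly like log2(n)
```

Since the informed set can at most double each cycle, at least log2(n) cycles are needed; classic analyses show completion in O(log n) cycles with high probability, matching the logarithmic scaling reported in the abstract.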
Randomized, Controlled Trial of CBT Training for PTSD Providers
2015-10-01
design, implement and evaluate a cost-effective, web-based self-paced training program to provide skills-oriented continuing education for mental...but has received little systematic evaluation to date. Noting the urgency and high priority of this issue, Fairburn and Cooper (2011) have...evaluate scalable and cost-effective new methods for training of mental health clinicians providing treatment services to veterans with PTSD. The
Communication Avoiding and Overlapping for Numerical Linear Algebra
2012-05-08
future exascale systems, communication cost must be avoided or overlapped. Communication-avoiding 2.5D algorithms improve scalability by reducing...will continue to grow relative to the cost of computation. With exascale computing as the long-term goal, the community needs to develop techniques
WebPresent: a World Wide Web-based telepresentation tool for physicians
NASA Astrophysics Data System (ADS)
Sampath-Kumar, Srihari; Banerjea, Anindo; Moshfeghi, Mehran
1997-05-01
In this paper, we present the design architecture and the implementation status of WebPresent, a World Wide Web-based telepresentation tool. This tool allows a physician to use a conference server workstation and make a presentation of patient cases to a geographically distributed audience. The audience consists of other physicians collaborating on patients' health care management and physicians participating in continuing medical education. These physicians are at several locations, with networks of different bandwidth and capabilities connecting them. Audiences also receive the patient case information on different computers, ranging from high-end display workstations to laptops with low-resolution displays. WebPresent is a scalable networked multimedia tool which supports the presentation of hypertext, images, audio, video, and a whiteboard to remote physicians with hospital Intranet access. WebPresent allows the audience to receive customized information. The data received can differ in resolution and bandwidth, depending on the availability of resources such as display resolution and network bandwidth.
Controlling Energy Performance on the Big Stage - The New York Times Company
DOE Office of Scientific and Technical Information (OSTI.GOV)
Settlemyre, Kevin; Regnier, Cindy
2015-08-01
The Times partnered with the U.S. Department of Energy (DOE) as part of DOE's Commercial Building Partnerships (CBP) Program to develop a post-occupancy evaluation (POE) of three EEMs that were implemented during the construction of The Times building between 2004 and 2006. With aggressive goals to reduce energy use and carbon emissions at a national level, one strategy of the U.S. Department of Energy is to look to exemplary buildings that have already invested in new approaches to achieving the energy performance goals that are now needed at scale. The Times building incorporated a number of innovative technologies, systems and processes that make their project a model for widespread replication in new and existing buildings. The measured results from the post-occupancy evaluation study, the tools and processes developed, and continuous improvements in the performance and cost of the systems studied suggest that these savings are scalable and replicable in a wide range of commercial buildings nationwide.
Experimental demonstration of deep frequency modulation interferometry.
Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán
2016-01-25
Experiments for space- and ground-based gravitational wave detectors often require a large-dynamic-range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precisions are available, but they require complex optical set-ups, limiting their scalability for multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal-arm-length interferometers with a non-linear fit algorithm. We have tested the technique in Michelson and Mach-Zehnder interferometer topologies, demonstrated continuous phase tracking of a moving mirror, and achieved a performance equivalent to a displacement sensitivity of 250 pm/√Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time series fitting of the extracted interference signals, we measured that the linearity of the laser frequency modulation is on the order of 2% for the laser source used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Justin; Kennedy, M. J.; Selvidge, Jennifer
High performance III-V lasers at datacom and telecom wavelengths on on-axis (001) Si are needed for scalable datacenter interconnect technologies. We demonstrate electrically injected quantum dot lasers grown on on-axis (001) Si patterned with {111} v-grooves lying in the [110] direction. No additional Ge buffers or substrate miscut was used. The active region consists of five InAs/InGaAs dot-in-a-well layers. Here, we achieve continuous wave lasing with thresholds as low as 36 mA and operation up to 80°C.
Liebe, J D; Hübner, U
2013-01-01
Continuous improvements of IT performance in healthcare organisations require actionable performance indicators, regularly conducted, independent measurements, and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the afore-mentioned requirements. We chose a well-established, regularly conducted (inter)national IT survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) the opportunity to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT systems and functions, global user satisfaction, and the resources of the IT department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically, and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single-indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants, we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable.
Based on these encouraging results a new benchmarking round which includes process indicators is currently conducted.
Scalable alignment and transfer of nanowires in a Spinning Langmuir Film.
Zhu, Ren; Lai, Yicong; Nguyen, Vu; Yang, Rusen
2014-10-21
Many nanomaterial-based integrated nanosystems require the assembly of nanowires and nanotubes into ordered arrays. A generic alignment method should be simple and fast for proof-of-concept studies by a researcher, and low-cost and scalable for mass production in industry. Here we have developed a novel Spinning-Langmuir-Film technique to fulfill both requirements. We used surfactant-enhanced shear flow to align inorganic and organic nanowires, which could be easily transferred to other substrates and made ready for device fabrication in less than 20 minutes. The aligned nanowire areal density can be controlled over a wide range, from 16 mm⁻² to 258 mm⁻², through compression of the film. The surface surfactant layer significantly influences the quality of alignment and has been investigated in detail.
Engineering scalable biological systems
2010-01-01
Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204
A scalable SIMD digital signal processor for high-quality multifunctional printer systems
NASA Astrophysics Data System (ADS)
Kang, Hyeong-Ju; Choi, Yongwoo; Kim, Kimo; Park, In-Cheol; Kim, Jung-Wook; Lee, Eul-Hwan; Gahang, Goo-Soo
2005-01-01
This paper describes a high-performance, scalable SIMD digital signal processor (DSP) developed for multifunctional printer systems. The DSP supports a variable number of datapaths to cover a wide range of performance requirements while maintaining a RISC-like pipeline structure. Many special instructions suited to image processing algorithms are included in the DSP. Quad/dual instructions are introduced for 8-bit or 16-bit data, and bit-field extraction/insertion instructions are supported to process various data types. Conditional instructions are supported to deal with complex relative conditions efficiently. In addition, an intelligent DMA block is integrated to align data in the course of data reading. Experimental results show that the proposed DSP outperforms a high-end printer-system DSP by a factor of at least two.
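The quad-instruction idea above, packing four 8-bit samples into one datapath operation with saturation so image data cannot wrap around, can be sketched in NumPy. This is an illustration of the concept only, not the DSP's actual instruction set; the function name and lane width are assumptions.

```python
import numpy as np

def quad_add_sat(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Add four packed 8-bit lanes at once, saturating at 255.

    Widens to 16 bits first so the intermediate sum cannot wrap around,
    mimicking what a saturating SIMD add does in hardware.
    """
    wide = a.astype(np.int16) + b.astype(np.int16)
    return np.clip(wide, 0, 255).astype(np.uint8)

# One "quad" operation: four pixel samples processed together.
a = np.array([250, 10, 128, 0], dtype=np.uint8)
b = np.array([10, 10, 200, 0], dtype=np.uint8)
result = quad_add_sat(a, b)  # lanes 250+10 and 128+200 saturate at 255
```

The same widen-then-clip pattern extends to dual 16-bit lanes by widening to 32 bits instead.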
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-10-28
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. These systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support the design and deployment of the integration of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object systems (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and cooperation amongst the heterogeneous components of the system, assuring, by design, scalability, interoperability, and correctness of component cooperation.
TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and to be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than as an end graphic for print. TreeVector is fast and easy to use, and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database websites.
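The core idea of rendering a phylogeny as Scalable Vector Graphics, so that any standard browser displays it with no extra software, can be sketched in a few lines of Python. This is a toy cladogram layout, not TreeVector's implementation; the nested-tuple tree encoding and the coordinate choices are assumptions.

```python
def svg_tree(tree, dx=60, dy=30):
    """Render a nested-tuple tree (leaf = str) as a minimal SVG cladogram."""
    parts, n_leaves = [], [0]

    def place(node, depth):
        x = depth * dx + 10
        if isinstance(node, str):                      # leaf: stack vertically
            y = n_leaves[0] * dy + 15
            n_leaves[0] += 1
            parts.append(f'<text x="{x + 5}" y="{y + 4}">{node}</text>')
            return x, y
        kids = [place(child, depth + 1) for child in node]
        ys = [cy for _, cy in kids]
        # vertical connector spanning the children, plus a branch to each child
        parts.append(f'<line x1="{x}" y1="{min(ys)}" x2="{x}" y2="{max(ys)}" stroke="black"/>')
        for cx, cy in kids:
            parts.append(f'<line x1="{x}" y1="{cy}" x2="{cx}" y2="{cy}" stroke="black"/>')
        return x, sum(ys) / len(ys)

    place(tree, 0)
    height = n_leaves[0] * dy + 20
    body = "".join(parts)
    return f'<svg xmlns="http://www.w3.org/2000/svg" width="400" height="{height}">{body}</svg>'

svg = svg_tree((("A", "B"), "C"))  # ((A,B),C) topology
```

The returned string can be written to a `.svg` file or inlined in HTML; because the output is text-based vector markup, it can also be styled and hyperlinked with standard web technologies.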
NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions
NASA Technical Reports Server (NTRS)
He, Yutao; Shalom, Eddy; Chau, Savio N.; Some, Raphael R.; Bolotin, Gary S.
2011-01-01
A paper discusses NEXUS, a common, next-generation avionics interconnect that is transparently compatible with wired, fiber-optic, and RF physical layers; provides a flexible, scalable, packet-switched topology; is fault-tolerant with sub-microsecond detection/recovery latency; has scalable bandwidth from 1 Kbps to 10 Gbps; has guaranteed real-time determinism with sub-microsecond latency/jitter; has built-in testability; features low power consumption (< 100 mW per Gbps); is lightweight with about a 5,000-logic-gate footprint; and is implemented in a small Bus Interface Unit (BIU) with a reconfigurable back-end providing an interface to legacy subsystems. NEXUS enhances a commercial interconnect standard, Serial RapidIO, to meet avionics interconnect requirements without breaking the standard. This unified interconnect technology can be used to meet the performance, power, size, and reliability requirements of all ranges of equipment, sensors, and actuators at chip-to-chip, board-to-board, or box-to-box boundaries. Early results from an in-house modeling activity of Serial RapidIO using VisualSim indicate that the use of a switched, high-performance avionics network will provide a quantum leap in spacecraft onboard science and autonomy capability for science and exploration missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allman, M. S., E-mail: shane.allman@boulder.nist.gov; Verma, V. B.; Stevens, M.
We demonstrate a 64-pixel free-space-coupled array of superconducting nanowire single photon detectors optimized for high detection efficiency in the near-infrared range. An integrated, readily scalable, multiplexed readout scheme is employed to reduce the number of readout lines to 16. The cryogenic, optical, and electronic packaging to read out the array as well as characterization measurements are discussed.
Current and Future Development of a Non-hydrostatic Unified Atmospheric Model (NUMA)
2010-09-09
following capabilities: 1. Highly scalable on current and future computer architectures (exascale computing and beyond, and GPUs); 2. Flexibility... Exascale computing: 10 of the Top 500 systems are already in the petascale range; we should also keep our eyes on GPUs (e.g., Mare Nostrum). 2. Numerical...
Computer-Aided Assessment in Mechanics: Question Design and Test Evaluation
ERIC Educational Resources Information Center
Gill, M.; Greenhow, M.
2007-01-01
This article describes pedagogic issues in setting objective tests in mechanics using Question Mark Perception, coupled with MathML mathematics mark-up and the Scalable Vector Graphics (SVG) syntax for producing diagrams. The content of the questions (for a range of question types such as multi-choice, numerical input and variants such as…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xujun; Li, Jiyuan; Jiang, Xikai; ...
2017-06-29
An efficient parallel Stokes solver is developed towards the complete inclusion of hydrodynamic interactions of Brownian particles in any geometry. A Langevin description of the particle dynamics is adopted, where the long-range interactions are included using a Green's function formalism. We present a scalable parallel computational approach, where the general-geometry Stokeslet is calculated following a matrix-free algorithm using the general geometry Ewald-like method. Our approach employs a highly efficient iterative finite element Stokes solver for the accurate treatment of long-range hydrodynamic interactions within arbitrary confined geometries. A combination of mid-point time integration of the Brownian stochastic differential equation, the parallel Stokes solver, and a Chebyshev polynomial approximation for the fluctuation-dissipation theorem results in an O(N) parallel algorithm. We also illustrate the new algorithm in the context of the dynamics of confined polymer solutions in equilibrium and non-equilibrium conditions. Our method is extended to treat suspended finite-size particles of arbitrary shape in any geometry using an immersed boundary approach.
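The Chebyshev polynomial approximation mentioned for the fluctuation-dissipation theorem amounts to approximating a matrix square root B of the diffusion matrix D (so that BBᵀ = D) through a polynomial in D, avoiding an explicit diagonalization. A minimal dense NumPy sketch of that idea follows; the matrix size, degree, and the use of exact spectral bounds are assumptions for illustration (a production solver would estimate the bounds, e.g. by a few Lanczos iterations, and apply the recurrence matrix-free).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
D = M @ M.T + 6 * np.eye(6)          # stand-in SPD "diffusion" matrix

# Spectral bounds of D (a real solver would estimate these iteratively).
w = np.linalg.eigvalsh(D)
lmin, lmax = w[0], w[-1]

# Chebyshev coefficients of sqrt(x), with [lmin, lmax] mapped to [-1, 1].
deg = 30
coeffs = C.chebinterpolate(
    lambda t: np.sqrt((t * (lmax - lmin) + (lmax + lmin)) / 2), deg)

# Evaluate the series at the matrix argument via the three-term recurrence
# T_{k+1}(A) = 2 A T_k(A) - T_{k-1}(A), i.e. products with D only.
A = (2 * D - (lmax + lmin) * np.eye(6)) / (lmax - lmin)
T_prev, T_cur = np.eye(6), A
B = coeffs[0] * T_prev + coeffs[1] * T_cur
for k in range(2, deg + 1):
    T_prev, T_cur = T_cur, 2 * A @ T_cur - T_prev
    B += coeffs[k] * T_cur
# B is (approximately) the matrix square root: B @ B.T ≈ D
```

In a Brownian dynamics step the same recurrence is applied to a random vector rather than the identity, so only matrix-vector products with D are ever needed.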
Mapping of H.264 decoding on a multiprocessor architecture
NASA Astrophysics Data System (ADS)
van der Tol, Erik B.; Jaspers, Egbert G.; Gelderblom, Rob H.
2003-05-01
Due to the increasing significance of development costs in the competitive domain of high-volume consumer electronics, generic solutions are required to enable reuse of the design effort and to increase the potential market volume. As a result, Systems-on-Chip (SoCs) contain a growing amount of fully programmable media processing devices, as opposed to the application-specific systems that used to offer the most attractive solutions due to their high performance density. The following motivates this trend. First, SoCs are increasingly dominated by their communication infrastructure and embedded memory, thereby making the cost of the functional units less significant. Moreover, the continuously growing design costs require generic solutions that can be applied over a broad product range. Hence, powerful programmable SoCs are becoming increasingly attractive. However, to enable power-efficient designs that are also scalable over the advancing VLSI technology, parallelism should be fully exploited. Both task-level and instruction-level parallelism can be provided by means of, e.g., a VLIW multiprocessor architecture. To provide the above-mentioned scalability, we propose to partition the data over the processors, instead of the traditional functional partitioning. An advantage of this approach is the inherent locality of data, which is extremely important for communication-efficient software implementations. Consequently, a software implementation is discussed that enables, e.g., SD-resolution H.264 decoding with a two-processor architecture, whereas High-Definition (HD) decoding can be achieved with an eight-processor system executing the same software. Experimental results show that data communication is reduced by up to 65%, directly improving the overall performance. Apart from a considerable improvement in memory bandwidth, this novel concept of partitioning offers a natural approach for optimally balancing the load of all processors, thereby further improving the overall speedup.
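The data-partitioning approach described above, where each processor owns a contiguous slice of the frame rather than one pipeline stage, can be sketched as follows. The stripe count and the per-stripe kernel are assumptions for illustration; a real H.264 decoder would additionally exchange boundary data between neighbouring stripes for inter-block prediction.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def decode_stripe(stripe: np.ndarray) -> np.ndarray:
    """Stand-in for the per-stripe decoding work (purely local here)."""
    return stripe.astype(np.int32) * 2 + 1

# One frame, partitioned by rows across four "processors".
frame = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
stripes = np.array_split(frame, 4, axis=0)

with ThreadPoolExecutor(max_workers=4) as pool:
    out = np.vstack(list(pool.map(decode_stripe, stripes)))
# Data partitioning reproduces the single-processor result, stripe by stripe.
```

Because each worker touches only its own rows, the data it needs stays local, which is the communication-efficiency argument made in the abstract.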
A scalable moment-closure approximation for large-scale biochemical reaction networks
Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan
2017-01-01
Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
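The moment-equation idea underlying such approximations can be illustrated on the simplest case, a birth-death process with production rate k and degradation rate γ, whose mean and variance obey the closed ODEs dμ/dt = k − γμ and dσ²/dt = k + γμ − 2γσ². This toy forward-Euler sketch is not the sMA itself, which additionally sparsifies the covariance matrix using the reaction-network structure; the rate values are made up.

```python
# Birth-death process: dμ/dt = k − γμ,  dσ²/dt = k + γμ − 2γσ².
k, gamma = 10.0, 1.0            # production and degradation rates (assumed)
mu, var = 0.0, 0.0              # start with zero molecules
dt, steps = 1e-3, 20_000        # forward-Euler integration to t = 20

for _ in range(steps):
    dmu = k - gamma * mu
    dvar = k + gamma * mu - 2.0 * gamma * var
    mu += dt * dmu
    var += dt * dvar
# Both moments relax to k/γ = 10, matching the Poisson stationary distribution.
```

Two ODEs replace many stochastic simulation runs here; the quadratic blow-up the abstract mentions appears once covariances between many species must be tracked.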
Platform for efficient switching between multiple devices in the intensive care unit.
De Backere, F; Vanhove, T; Dejonghe, E; Feys, M; Herinckx, T; Vankelecom, J; Decruyenaere, J; De Turck, F
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them into their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and the provision of tailored and dynamic information at the point of care. The platform is designed on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After identification to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device. The visualization of the data is adapted to the type of device. A web-centric approach was used to enable extensibility and portability. A prototype of the platform was thoroughly evaluated in terms of scalability, performance and user experience. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. The platform provides a scalable and responsive solution to enable efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform were evaluated, and the response time and scalability were shown to be within an acceptable range.
2015-05-30
scalable application of cutting-edge technologies. 4. Responding to changing resources—With likely significant resource reductions the depot...deal with underutilized organic capability while continuing to increase outsourcing of depot workload. In addition the study states that a...the unique organic skills that TYAD could bring to the software sustainment mission could be valuable based on the specific type of software
Scalable clustering algorithms for continuous environmental flow cytometry.
Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill
2016-02-01
Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach to cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency data streams. We explore how available algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for the classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. Contact: hyrkas@cs.washington.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
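The Gaussian-mixture classification step can be sketched with scikit-learn on synthetic data; the Hadoop-scale distribution of the fit is omitted, and the two population means and spreads below are made up for illustration, not SeaFlow values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic, well-separated "populations" in a 2-D cytometric feature space.
pop_a = rng.normal([0.0, 0.0], 0.3, size=(500, 2))
pop_b = rng.normal([4.0, 4.0], 0.3, size=(500, 2))
X = np.vstack([pop_a, pop_b])

# Fit a two-component mixture and assign each particle to a component.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)
# With clean separation, each true population maps onto one component.
```

The partitioning idea from the abstract corresponds to running such fits on homogeneous sections of the data stream rather than on the whole cruise at once.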
Continuous Heterogeneous Photocatalysis in Serial Micro-Batch Reactors.
Pieber, Bartholomäus; Shalom, Menny; Antonietti, Markus; Seeberger, Peter H; Gilmore, Kerry
2018-01-29
Solid reagents, leaching catalysts, and heterogeneous photocatalysts are commonly employed in batch processes but are ill-suited for continuous-flow chemistry. Heterogeneous catalysts for thermal reactions are typically used in packed-bed reactors, which cannot be penetrated by light and thus are not suitable for photocatalytic reactions involving solids. We demonstrate that serial micro-batch reactors (SMBRs) allow for the continuous utilization of solid materials together with liquids and gases in flow. This technology was utilized to develop selective and efficient fluorination reactions using a modified graphitic carbon nitride heterogeneous catalyst instead of costly homogeneous metal polypyridyl complexes. The merger of this inexpensive, recyclable catalyst and the SMBR approach enables sustainable and scalable photocatalysis. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin
2013-01-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
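The block-volume idea, decomposing a 3D image into fixed-size sub-volumes that can be processed and distributed independently, can be sketched in NumPy. The block size and the pointwise per-block kernel are assumptions for illustration; real kernels with spatial support would also need halo voxels from neighbouring blocks.

```python
import numpy as np

def segment(block: np.ndarray) -> np.ndarray:
    """Stand-in pointwise per-block kernel (here, a simple threshold)."""
    return (block > 0.5).astype(np.float32)

vol = np.random.default_rng(1).random((32, 32, 32)).astype(np.float32)
out = np.empty_like(vol)
bs = 8  # block edge length; a size-adaptive choice is a scheduling concern

# Each 8x8x8 block is independent, so each loop body could run on any worker.
for i in range(0, vol.shape[0], bs):
    for j in range(0, vol.shape[1], bs):
        for k in range(0, vol.shape[2], bs):
            out[i:i + bs, j:j + bs, k:k + bs] = segment(
                vol[i:i + bs, j:j + bs, k:k + bs])
# The blockwise result matches processing the whole volume at once.
```

Task scheduling, point (2) in the abstract, then reduces to handing these independent blocks to whichever core, cluster node, or cloud worker is free.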
Newby, James A; Huck, Lena; Blaylock, D Wayne; Witt, Paul M; Ley, Steven V; Browne, Duncan L
2014-01-03
Conducting low-temperature organometallic reactions under continuous flow conditions offers the potential to more accurately control exotherms and thus provide more reproducible and scalable processes. Herein, progress towards this goal with regards to the lithium-halogen exchange/borylation reaction is reported. In addition to improving the scope of substrates available on a research scale, methods to improve reaction profiles and expedite purification of the products are also described. On moving to a continuous system, thermocouple measurements have been used to track exotherms and provide a level of safety for continuous processing of organometallic reagents. The use of an in-line continuous liquid-liquid separation device to circumvent labour intensive downstream off-line processing is also reported. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
EnerCage: A Smart Experimental Arena With Scalable Architecture for Behavioral Experiments
Uei-Ming Jow; Peter McMenamin; Mehdi Kiani; Manns, Joseph R.; Ghovanloo, Maysam
2014-01-01
Wireless power, when coupled with miniaturized implantable electronics, has the potential to provide a solution to several challenges facing neuroscientists during basic and preclinical studies with freely behaving animals. The EnerCage system is one such solution as it allows for uninterrupted electrophysiology experiments over extended periods of time and vast experimental arenas, while eliminating the need for bulky battery payloads or tethering. It has a scalable array of overlapping planar spiral coils (PSCs) and three-axis magnetic sensors for focused wireless power transmission to devices on freely moving subjects. In this paper, we present the first fully functional EnerCage system, in which the number of PSC drivers and magnetic sensors was reduced to one-third of the number used in our previous design via multicoil coupling. The power transfer efficiency (PTE) has been improved to 5.6% at a 120 mm coupling distance and a 48.5 mm lateral misalignment (worst case) between the transmitter (Tx) array and receiver (Rx) coils. The new EnerCage system is equipped with an Ethernet backbone, further supporting its modular/scalable architecture, which, in turn, allows experimental arenas with arbitrary shapes and dimensions. A set of experiments on a freely behaving rat was conducted by continuously delivering 20 mW to the electronics in the animal headstage for more than one hour in a powered 3538 cm² experimental area. PMID:23955695
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
Liu, Kui; Guo, Jun; Cai, Chunxiao; Zhang, Junxiang; Gao, Jiangrui
2016-11-15
Multipartite entanglement is used for quantum information applications, such as building multipartite quantum communications. Generally, the generation of multipartite entanglement is based on a complex beam-splitter network. Here, based on the spatial degrees of freedom of light, we experimentally demonstrated spatial quadripartite continuous-variable entanglement among first-order Hermite-Gaussian modes using a single type-II optical parametric oscillator operating below threshold with an HG02 pump beam oriented at 45°. The entanglement can be scaled to larger numbers of spatial modes by changing the spatial profile of the pump beam. In addition, spatial multipartite entanglement will be useful for future spatial multichannel quantum information applications.
NASA Astrophysics Data System (ADS)
Liu, Shuangyi; Huang, Limin; Li, Wanlu; Liu, Xiaohua; Jing, Shui; Li, Jackie; O'Brien, Stephen
2015-07-01
Colloidal perovskite oxide nanocrystals have attracted a great deal of interest owing to the ability to tune physical properties by virtue of the nanoscale, and to generate thin film structures under mild chemical conditions, relying on self-assembly or heterogeneous mixing. This is particularly true for ferroelectric/dielectric perovskite oxide materials, for which device applications cover piezoelectrics, MEMS, memory, gate dielectrics and energy storage. The synthesis of complex oxide nanocrystals, however, continues to present issues pertaining to quality, yield, % crystallinity, and purity, and may also suffer from tedious separation and purification processes, which are disadvantageous to scaling production. We report a simple, green and scalable "self-collection" growth method that produces uniform and aggregate-free colloidal perovskite oxide nanocrystals including BaTiO3 (BT), BaxSr1-xTiO3 (BST) and the quaternary oxide BaSrTiHfO3 (BSTH) in high crystallinity and high purity. The synthesis approach is solution processed, based on the sol-gel transformation of metal alkoxides in alcohol solvents with controlled or stoichiometric amounts of water and in the stark absence of surfactants and stabilizers, providing pure colloidal nanocrystals in a remarkably low temperature range (15 °C-55 °C). Under static conditions, the nanoscale hydrolysis of the metal alkoxides accomplishes a complete transformation to fully crystallized single-domain perovskite nanocrystals with a passivated surface layer of hydroxyl/alkyl groups, such that the as-synthesized nanocrystals can exist in the form of a super-stable and transparent sol, or self-accumulate to form a highly crystalline solid gel monolith in nearly 100% yield for easy separation/purification. The process produces high-purity, ligand-free nanocrystals with excellent dispersibility in polar solvents, with no impurity remaining in the mother solution other than trace alcohol byproducts (such as isopropanol). The afforded stable and transparent suspension/solution can be treated as an ink, suitable for printing or spin/spray coating, demonstrating great capabilities of this process for the fabrication of high-performance dielectric thin films. The simple "self-collection" strategy can be described as green and scalable due to the simplified procedure from synthesis to separation/purification, minimum waste generation, and near-room-temperature crystallization of nanocrystal products with tunable sizes in extremely high yield and high purity. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02351c
NASA Astrophysics Data System (ADS)
Stay, Justin L.; Carr, Dustin; Ferguson, Steve; Haber, Todd; Jenkins, Robert; Mock, Joel
2017-02-01
Optical coherence tomography (OCT) has become a useful and common diagnostic tool within the field of ophthalmology. Although presently a commercial technology, research continues in improving image quality and applying the imaging method to other tissue types. Swept-wavelength lasers based upon fiber ring cavities containing fiber Fabry-Pérot tunable filters (FFP-TF) as an intracavity element provide swept-source optical coherence tomography (SS-OCT) systems with a robust and scalable platform. The FFP-TF can be fabricated within a large range of operating wavelengths, free spectral ranges (FSR), and finesses. To date, FFP-TFs have been fabricated at operating wavelengths from 400 nm to 2.2 µm, FSRs as large as 45 THz, and finesses as high as 30 000. The results in this paper focus on presenting the capability of the FFP-TF as an intracavity element in producing swept-wavelength laser sources and quantifying the trade-off between coherence length and sweep range. We present results within a range of feasible operating conditions. Particular focus is given to the discovery of laser configurations that maximize sweep range and/or power. A novel approach to the electronic drive of the PZT-based FFP-TF is also presented, which eliminates the need for a mechanical resonance of the optical device. This approach substantially increases the range of drive frequencies with which the filter can be driven and benefits both the short all-fiber laser cavity (presented in this paper) and long-cavity FDML designs.
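The filter's spectral resolution follows directly from the two quantities quoted above: the passband width (FWHM) of a Fabry-Pérot filter is its free spectral range divided by its finesse. A quick arithmetic sketch; combining the extreme FSR and finesse values here is for illustration only, not a configuration reported in the paper:

```python
C = 299_792_458.0  # speed of light, m/s (used only for context)

def filter_linewidth_hz(fsr_hz: float, finesse: float) -> float:
    """Fabry-Perot passband FWHM = free spectral range / finesse."""
    return fsr_hz / finesse

# Largest quoted FSR (45 THz) with the highest quoted finesse (30 000):
print(filter_linewidth_hz(45e12, 30_000))  # 1.5e9 Hz, i.e. a 1.5 GHz passband
```

A narrower passband gives a longer instantaneous coherence length, which is exactly the sweep-range-versus-coherence trade-off the paper quantifies.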
ERIC Educational Resources Information Center
Panoutsopoulos, Hercules; Donert, Karl; Papoutsis, Panos; Kotsanis, Ioannis
2015-01-01
During the last few years, ongoing developments in the technological field of Cloud computing have initiated discourse on the potential of the Cloud to be systematically exploited in educational contexts. Research interest has been stimulated by a range of advantages of Cloud technologies (e.g. adaptability, flexibility, scalability,…
Dense, Efficient Chip-to-Chip Communication at the Extremes of Computing
ERIC Educational Resources Information Center
Loh, Matthew
2013-01-01
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of wireless link to a neural…
The Development of the Non-hydrostatic Unified Model of the Atmosphere (NUMA)
2011-09-19
capabilities: 1. Highly scalable on current and future computer architectures (exascale computing: this means CPUs and GPUs) 2. Flexibility to use a... From Terascale to Petascale/Exascale Computing: 10 of the Top 500 are already in the petascale range; 3 of the top 10 are GPU-based machines
MPI Runtime Error Detection with MUST: Advances in Deadlock Detection
Hilbrich, Tobias; Protze, Joachim; Schulz, Martin; ...
2013-01-01
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
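At its core, graph-based deadlock detection searches a wait-for graph for cycles. MUST's actual model (an AND⊕OR wait-for graph that handles non-deterministic MPI completions such as wildcard receives) is considerably richer; the following is only a minimal sketch of plain cycle detection over a process-to-blocked-on adjacency map:

```python
def has_deadlock(wait_for: dict) -> bool:
    """Detect a cycle in a simple wait-for graph.

    wait_for maps each process to the processes it is blocked on.
    A cycle means no process in it can ever proceed: a deadlock.
    (Simplification: MUST models AND/OR semantics, not shown here.)
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY                       # p is on the current DFS path
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK                      # fully explored, no cycle through p
        return False

    return any(visit(p) for p in wait_for if color[p] == WHITE)

# Two processes each blocked on a receive from the other: classic deadlock.
print(has_deadlock({"A": ["B"], "B": ["A"]}))  # True
print(has_deadlock({"A": ["B"], "B": []}))     # False
```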
Scalable and balanced dynamic hybrid data assimilation
NASA Astrophysics Data System (ADS)
Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa
2017-04-01
Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. 
However, this can be avoided by isolating the forecast model completely from the minimization process, implementing the latter as a wrapper code whose only link to the model is a call for many totally independent model runs, each of them implemented as a parallel model run itself. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs, which requires a very efficient and low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, which means their computational load is negligible compared with the fully parallel model runs. We present example results of the scalable VEnKF with the 4D lake and shallow sea model COHERENS, assimilating simultaneously continuous in situ measurements at a single point and infrequent satellite images that cover a whole lake.
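VEnKF itself propagates a Gaussian approximation and re-samples ensemble members, so it is not a plain ensemble filter; still, the shape of an ensemble Kalman analysis step, and why the expensive part is the independent model runs rather than the linear algebra, can be sketched with a standard stochastic EnKF update. Everything below is illustrative, not the authors' implementation:

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step (illustrative sketch, not VEnKF).

    ensemble : (n, N) array, N state vectors of dimension n
    y        : (m,) observation vector
    H        : (m, n) linear observation operator
    R        : (m, m) observation-error covariance
    """
    n, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Y = H @ X                                             # obs-space anomalies
    Pyy = (Y @ Y.T) / (N - 1) + R                         # innovation covariance
    Pxy = (X @ Y.T) / (N - 1)                             # cross covariance
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    # Perturbed observations, one per member:
    yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (yp - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, size=(2, 200))   # prior ensemble, mean ~ 0
y = np.array([3.0, -1.0])
H = np.eye(2)
R = 1e-4 * np.eye(2)                        # very accurate observations
post = enkf_analysis(ens, y, H, R, rng)     # posterior mean pulled to y
```

In the parallel layout the abstract describes, propagating each column of `ensemble` through the forecast model is an independent job; only the small state snapshots pass through this analysis step.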
AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 2, Issue 1
2010-01-01
Researchers in AHPCRC Technical Area 4 focus on improving processes for developing scalable, accurate parallel programs that are easily ported from one... Virtual levels in Sequoia represent an abstract memory hierarchy without specifying data transfer mechanisms, giving the
Perovskite Technology is Scalable, But Questions Remain about the Best Methods
NREL news release: The NREL researchers examined potential scalable deposition methods that could be used on a larger surface.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
Goh, Sherry Meow Peng; Swaminathan, Muthukaruppan; Lai, Julian U-Ming; Anwar, Azlinda; Chan, Soh Ha; Cheong, Ian
2017-01-01
High Epstein Barr Virus (EBV) titers detected by the indirect Immunofluorescence Assay (IFA) are a reliable predictor of Nasopharyngeal Carcinoma (NPC). Despite being the gold standard for serological detection of NPC, the IFA is limited by scaling bottlenecks. Specifically, 5 serial dilutions of each patient sample must be prepared and visually matched by an evaluator to one of 5 discrete titers. Here, we describe a simple method for inferring continuous EBV titers from IFA images acquired from NPC-positive patient sera using only a single sample dilution. In the first part of our study, 2 blinded evaluators used a set of reference titer standards to perform independent re-evaluations of historical samples with known titers. Besides exhibiting high inter-evaluator agreement, both evaluators were also in high concordance with historical titers, thus validating the accuracy of the reference titer standards. In the second part of the study, the reference titer standards were IFA-processed and assigned an 'EBV Score' using image analysis. A log-linear relationship between titers and EBV Score was observed. This relationship was preserved even when images were acquired and analyzed 3 days post-IFA. We conclude that image analysis of IFA-processed samples can be used to infer a continuous EBV titer with just a single dilution of NPC-positive patient sera. This work opens new possibilities for improving the accuracy and scalability of IFA in the context of clinical screening. Copyright © 2016. Published by Elsevier B.V.
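The reported log-linear relationship means a continuous titer can be read off a line fitted through the reference standards. A minimal sketch; the calibration scores below are made up for illustration (the paper's actual EBV Score values are not reproduced here):

```python
import numpy as np

# Hypothetical calibration data: five reference titer standards with
# illustrative (not published) image-analysis scores.
ref_titers = np.array([40.0, 160.0, 640.0, 2560.0, 10240.0])
ref_scores = np.array([1.1, 2.0, 3.1, 4.0, 5.2])

# Log-linear model reported in the abstract: log(titer) is linear in EBV Score.
slope, intercept = np.polyfit(ref_scores, np.log(ref_titers), 1)

def infer_titer(score: float) -> float:
    """Continuous EBV titer inferred from a single-dilution EBV Score."""
    return float(np.exp(slope * score + intercept))
```

With such a fit, one image-analysis score from a single dilution replaces the five-dilution visual matching step.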
Binary Interval Search: a scalable algorithm for counting interval intersections.
Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R
2013-01-01
The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
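The counting idea behind BITS reduces to two binary searches: against sorted arrays of the database intervals' starts and ends, the number of intervals overlapping a query equals the total, minus those that end before the query starts, minus those that start after the query ends. A minimal sketch (closed intervals assumed; the published implementation differs in detail and targets GPUs):

```python
import bisect

def count_intersections(db, query):
    """Count db intervals overlapping a query interval via two binary searches.

    db    : list of (start, end) closed intervals
    query : (start, end) closed interval
    """
    starts = sorted(s for s, _ in db)
    ends = sorted(e for _, e in db)
    qs, qe = query
    n = len(db)
    ending_before = bisect.bisect_left(ends, qs)          # end < query start
    starting_after = n - bisect.bisect_right(starts, qe)  # start > query end
    return n - ending_before - starting_after

db = [(1, 5), (4, 10), (12, 15)]
print(count_intersections(db, (5, 11)))  # (1,5) and (4,10) overlap -> 2
```

The sorted arrays are built once, so each of many queries costs only O(log n), which is what makes the approach scale and parallelize well.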
Scalable and responsive event processing in the cloud
Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul
2013-01-01
Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multi-class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments. PMID:23230164
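For a single query engine modelled as an M/G/1 queue, the mean response time is given by the Pollaczek-Khinchine formula: T = E[S] + λ·E[S²] / (2(1 − ρ)) with utilization ρ = λ·E[S]. The paper's multi-class model is more involved; this one-function sketch shows only the single-class prediction such an approach builds on:

```python
def mg1_response_time(lam: float, mean_s: float, second_moment_s: float) -> float:
    """Mean response time of an M/G/1 queue (Pollaczek-Khinchine).

    lam             : arrival rate (events/s)
    mean_s          : mean service time E[S] (s)
    second_moment_s : second moment of service time E[S^2] (s^2)
    """
    rho = lam * mean_s
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization must be < 1")
    wq = lam * second_moment_s / (2.0 * (1.0 - rho))  # mean waiting time
    return mean_s + wq

# Sanity check against M/M/1: exponential service with mean 0.1 s
# has E[S^2] = 2 * 0.1**2 = 0.02; at lam = 5/s, T = 1/(10 - 5) = 0.2 s.
print(mg1_response_time(5.0, 0.1, 0.02))  # 0.2
```

Inverting such a formula tells the controller how many queries a node can host while still meeting a response-time target, which is the basis for drawing only the necessary cloud resources.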
Single-photon imager based on a superconducting nanowire delay line
NASA Astrophysics Data System (ADS)
Zhao, Qing-Yuan; Zhu, Di; Calandri, Niccolò; Dane, Andrew E.; McCaughan, Adam N.; Bellei, Francesco; Wang, Hao-Zhu; Santavicca, Daniel F.; Berggren, Karl K.
2017-03-01
Detecting spatial and temporal information of individual photons is critical to applications in spectroscopy, communication, biological imaging, astronomical observation and quantum-information processing. Here we demonstrate a scalable single-photon imager using a single continuous superconducting nanowire that is not only a single-photon detector but also functions as an efficient microwave delay line. In this context, photon-detection pulses are guided in the nanowire and enable the readout of the position and time of photon-absorption events from the arrival times of the detection pulses at the nanowire's two ends. Experimentally, we slowed down the velocity of pulse propagation to ∼2% of the speed of light in free space. In a 19.7 mm long nanowire that meandered across an area of 286 × 193 μm2, we were able to resolve ∼590 effective pixels with a temporal resolution of 50 ps (full width at half maximum). The nanowire imager presents a scalable approach for high-resolution photon imaging in space and time.
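The position readout reduces to arithmetic on the two arrival times: with propagation speed v along a wire of length L, the difference of the arrival times at the two ends satisfies t1 − t2 = (2x − L)/v, so x = (L + v(t1 − t2))/2. A sketch using the paper's rough numbers (v ≈ 2% of c, L = 19.7 mm) as default parameters:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def photon_position(t1_s: float, t2_s: float,
                    L_m: float = 19.7e-3,
                    v_m_s: float = 0.02 * C) -> float:
    """Photon-absorption position (metres from end 1) of a nanowire delay line.

    From t1 - t2 = (2x - L)/v  =>  x = (L + v*(t1 - t2)) / 2.
    Defaults use the abstract's approximate values (v ~ 2% of c, L = 19.7 mm).
    """
    return (L_m + v_m_s * (t1_s - t2_s)) / 2.0

# Equal arrival times => absorption at the wire midpoint.
print(photon_position(0.0, 0.0))  # 0.00985 (= L/2, in metres)
```

A photon absorbed nearer end 1 reaches end 1 first (t1 < t2), pushing x below L/2, which matches the sign convention above.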
The Generation of Diazo Compounds in Continuous-Flow.
Hock, Katharina J; Koenigs, Rene M
2018-03-25
Toxic, carcinogenic and explosive: these attributes are typically associated with diazo compounds. Nonetheless, diazo compounds are nowadays a highly demanded class of reagents for organic synthesis, yet the concerns with regard to safe and scalable transformations of these compounds are still exceptionally high. Lately, the research area of the continuous-flow synthesis of diazo compounds has attracted significant interest, and a whole variety of protocols for their "on-demand" preparation have been realized to date. This concept article focuses on the recent developments using continuous-flow technologies to access diazo compounds, thus minimizing risks and hazards when working with this particular class of compounds. In this article we discuss these concepts and highlight different prerequisites to access diazo compounds and to perform downstream functionalization reactions. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Scalable manufacturing of biomimetic moldable hydrogels for industrial applications.
Yu, Anthony C; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M; Sevit, Alex M; Tibbitt, Mark W; Acosta, Jesse D; Zhang, Tony; Franzia, Paul W; Langer, Robert; Appel, Eric A
2016-12-13
Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.
Bae, Won-Gyu; Kim, Hong Nam; Kim, Doogon; Park, Suk-Hee; Jeong, Hoon Eui; Suh, Kahp-Yang
2014-02-01
Multiscale, hierarchically patterned surfaces, such as lotus leaves, butterfly wings, and the adhesion pads of gecko lizards, are abundant in nature, where microstructures typically strengthen mechanical stability while nanostructures provide the main functionality, i.e., wettability, structural color, or dry adhesion. To emulate such hierarchical structures, multiscale, multilevel patterning has been extensively utilized over the last few decades for applications ranging from wetting control and structural colors to tissue scaffolds. In this review, we highlight recent advances in scalable multiscale patterning that bring about improved functions, which can even surpass those found in nature, with particular focus on the analogy between natural and synthetic architectures in terms of the role of different length scales. This review is organized into four sections. First, the role and importance of multiscale, hierarchical structures is described with four representative examples. Second, recent achievements in multiscale patterning are introduced with their strengths and weaknesses. Third, four application areas (wetting control, dry adhesives, selectively filtrating membranes, and multiscale tissue scaffolds) are overviewed, stressing how and why multiscale structures need to be incorporated to achieve their functions. Finally, we present future directions and challenges for scalable, multiscale patterned surfaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Performance-scalable volumetric data classification for online industrial inspection
NASA Astrophysics Data System (ADS)
Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.
2002-03-01
Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
Sijbrandij, Marit; Acarturk, Ceren; Bird, Martha; Bryant, Richard A; Burchert, Sebastian; Carswell, Kenneth; de Jong, Joop; Dinesen, Cecilie; Dawson, Katie S; El Chammay, Rabih; van Ittersum, Linde; Jordans, Mark; Knaevelsrud, Christine; McDaid, David; Miller, Kenneth; Morina, Naser; Park, A-La; Roberts, Bayard; van Son, Yvette; Sondorp, Egbert; Pfaltz, Monique C; Ruttenberg, Leontien; Schick, Matthis; Schnyder, Ulrich; van Ommeren, Mark; Ventevogel, Peter; Weissbecker, Inka; Weitz, Erica; Wiedemann, Nana; Whitney, Claire; Cuijpers, Pim
2017-01-01
The crisis in Syria has resulted in vast numbers of refugees seeking asylum in Syria's neighbouring countries as well as in Europe. Refugees are at considerable risk of developing common mental disorders, including depression, anxiety, and posttraumatic stress disorder (PTSD). Most refugees do not have access to mental health services for these problems because of multiple barriers in national and refugee-specific health systems, including limited availability of mental health professionals. To counter some of the challenges arising from limited mental health system capacity, the World Health Organization (WHO) has developed a range of scalable psychological interventions aimed at reducing psychological distress and improving functioning in people living in communities affected by adversity. These interventions, including Problem Management Plus (PM+) and its variants, are intended to be delivered through individual or group face-to-face or smartphone formats by lay, non-professional people who have not received specialized mental health training. We provide an evidence-based rationale for the use of the scalable PM+ oriented programmes being adapted for Syrian refugees and provide information on the newly launched STRENGTHS programme for adapting, testing and scaling up of PM+ in various modalities in both neighbouring and European countries hosting Syrian refugees.
Scalable wide-field optical coherence tomography-based angiography for in vivo imaging applications
Xu, Jingjiang; Wei, Wei; Song, Shaozhen; Qi, Xiaoli; Wang, Ruikang K.
2016-01-01
Recent advances in optical coherence tomography (OCT)-based angiography have demonstrated a variety of biomedical applications in the diagnosis and therapeutic monitoring of diseases with vascular involvement. While promising, its imaging field of view (FOV) is still limited (typically less than 9 mm2), which slows down its clinical acceptance. In this paper, we report a high-speed spectral-domain OCT operating at 1310 nm to enable a wide FOV of up to 750 mm2. Using the optical microangiography (OMAG) algorithm, we are able to map vascular networks within living biological tissues. Thanks to a 2,048-pixel line-scan InGaAs camera operating at a 147 kHz scan rate, the system delivers a ranging depth of ~7.5 mm and provides wide-field OCT-based angiography in a single data acquisition. We implement two imaging modes (i.e., wide-field mode and high-resolution mode) in the OCT system, which gives a highly scalable FOV with flexible lateral resolution. We demonstrate scalable wide-field vascular imaging for multiple finger nail beds in human and whole brain in mice with skull left intact in a single 3D scan, promising new opportunities for wide-field OCT-based angiography for many clinical applications. PMID:27231630
Gene Delivery into Plant Cells for Recombinant Protein Production
Chen, Qiang
2015-01-01
Recombinant proteins are primarily produced from cultures of mammalian, insect, and bacteria cells. In recent years, the development of deconstructed virus-based vectors has allowed plants to become a viable platform for recombinant protein production, with advantages in versatility, speed, cost, scalability, and safety over the current production paradigms. In this paper, we review the recent progress in the methodology of agroinfiltration, a solution to overcome the challenge of transgene delivery into plant cells for large-scale manufacturing of recombinant proteins. General gene delivery methodologies in plants are first summarized, followed by extensive discussion on the application and scalability of each agroinfiltration method. New development of a spray-based agroinfiltration and its application on field-grown plants is highlighted. The discussion of agroinfiltration vectors focuses on their applications for producing complex and heteromultimeric proteins and is updated with the development of bridge vectors. Progress on agroinfiltration in Nicotiana and non-Nicotiana plant hosts is subsequently showcased in context of their applications for producing high-value human biologics and low-cost and high-volume industrial enzymes. These new advancements in agroinfiltration greatly enhance the robustness and scalability of transgene delivery in plants, facilitating the adoption of plant transient expression systems for manufacturing recombinant proteins with a broad range of applications. PMID:26075275
Orozco, Raquel; Godfrey, Scott; Coffman, Jon; Amarikwa, Linus; Parker, Stephanie; Hernandez, Lindsay; Wachuku, Chinenye; Mai, Ben; Song, Brian; Hoskatti, Shashidhar; Asong, Jinkeng; Shamlou, Parviz; Bardliving, Cameron; Fiadeiro, Marcus
2017-07-01
We designed, built or 3D printed, and screened tubular reactors that minimize axial dispersion to serve as incubation chambers for continuous virus inactivation of biological products. Empirical residence time distribution data were used to derive each tubular design's volume equivalent to a theoretical plate (VETP) at various process flow rates. One design, the Jig in a Box (JIB), yielded the lowest VETP, indicating optimal radial mixing and minimal axial dispersion. A minimum residence time (MRT) approach was employed, where the MRT is the minimum time the product spends in the tubular reactor. This incubation time is typically 60 minutes in a batch process. We provide recommendations for combinations of flow rates and device dimensions for operation of the JIB connected in series that will meet a 60-min MRT. The results show that under a wide range of flow rates and corresponding volumes, it takes 75 ± 3 min for 99% of the product to exit the reactor while meeting the 60-min MRT criterion and keeping the pressure drop under 5 psi. Under these conditions, the VETP increases slightly from 3 to 5 mL while the number of theoretical plates stays constant at 1326 ± 88. We also demonstrated that the final design volume was only 6% ± 1% larger than the ideal plug flow volume. Using such a device would enable continuous viral inactivation in a truly continuous process or in the effluent of a batch chromatography column. Viral inactivation studies would be required to validate such a design. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:954-965, 2017. © 2017 American Institute of Chemical Engineers.
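The plate-count bookkeeping behind VETP can be sketched from the moments of a measured residence-time distribution using the standard tanks-in-series model (N = t_mean²/σ², VETP = V/N). The reactor volume, plate count, and synthetic RTD below are illustrative assumptions, not the paper's data:

```python
import math

def tanks_in_series_rtd(t, n, t_mean):
    """Analytical RTD E(t) for n ideal stirred tanks in series
    (pulse tracer response), used here as a stand-in for measured data."""
    tau = t_mean / n
    return (t ** (n - 1)) * math.exp(-t / tau) / (math.factorial(n - 1) * tau ** n)

def vetp_from_rtd(times, e_vals, reactor_volume_ml):
    """Estimate plate count N = t_mean^2 / sigma^2 from the first two
    moments of the RTD, then VETP = V / N."""
    dt = times[1] - times[0]
    area = sum(e_vals) * dt  # ~1 for a normalized RTD
    t_mean = sum(t * e for t, e in zip(times, e_vals)) * dt / area
    var = sum((t - t_mean) ** 2 * e for t, e in zip(times, e_vals)) * dt / area
    n_plates = t_mean ** 2 / var
    return n_plates, reactor_volume_ml / n_plates

# Synthetic RTD: 50 tanks in series, 75-min mean residence time,
# 200 mL reactor volume (all illustrative numbers)
times = [i * 0.05 for i in range(1, 6000)]
e_vals = [tanks_in_series_rtd(t, 50, 75.0) for t in times]
n, vetp = vetp_from_rtd(times, e_vals, reactor_volume_ml=200.0)
```

With these assumed inputs the moment analysis recovers N ≈ 50 plates and a VETP of about 4 mL, in the same few-mL range the abstract reports.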
Northrop Grumman TR202 LOX/LH2 Deep Throttling Engine Technology Project Status
NASA Technical Reports Server (NTRS)
Gromski, Jason; Majamaki, Annik; Chianese, Silvio; Weinstock, Vladimir; Kim, Tony S.
2010-01-01
NASA's Propulsion and Cryogenic Advanced Development (PCAD) project is currently developing enabling propulsion technologies in support of future lander missions. To meet lander requirements, several technical challenges need to be overcome, one of which is the ability for the descent engine(s) to operate over a deep throttle range with cryogenic propellants. To address this need, PCAD has enlisted Northrop Grumman Aerospace Systems (NGAS) in a technology development effort associated with the TR202 engine. The TR202 is a LOX/LH2 expander cycle engine driven by independent turbopump assemblies and featuring a variable area pintle injector similar to the injector used on the TR200 Apollo Lunar Module Descent Engine (LMDE). Since the Apollo missions, NGAS has continued to mature deep throttling pintle injector technology. The TR202 program has completed two series of pintle injector testing. The first series of testing used ablative thrust chambers and demonstrated igniter operation as well as stable performance at discrete points throughout the designed 10:1 throttle range. The second series was conducted with calorimeter chambers and demonstrated injector performance at discrete points throughout the throttle range as well as chamber heat flow adequate to power an expander cycle design across the throttle range. This paper provides an overview of the TR202 program, describing the different phases and key milestones. It describes how test data was correlated to the engine conceptual design. The test data obtained has created a valuable database for deep throttling cryogenic pintle technology, a technology that is readily scalable in thrust level.
ERIC Educational Resources Information Center
Ngai, Grace; Chan, Stephen C. F.; Leong, Hong Va; Ng, Vincent T. Y.
2013-01-01
This article presents the design and development of i*CATch, a construction kit for physical and wearable computing that was designed to be scalable, plug-and-play, and to provide support for iterative and exploratory learning. It consists of a standardized construction interface that can be adapted for a wide range of soft textiles or electronic…
Chemoselective N-arylation of aminobenzamides via copper catalysed Chan-Evans-Lam reactions.
Liu, Shuai; Zu, Weisai; Zhang, Jinli; Xu, Liang
2017-11-15
Chemoselective N-arylation of unprotected aminobenzamides was achieved via Cu-catalysed Chan-Evans-Lam cross-coupling with aryl boronic acids for the first time. Simple copper catalysts enable the selective arylation of amino groups in ortho/meta/para-aminobenzamides under open-flask conditions. The reactions were scalable and compatible with a wide range of functional groups.
Bright Room-Temperature Single-Photon Emission from Defects in Gallium Nitride.
Berhane, Amanuel M; Jeong, Kwang-Yong; Bodrog, Zoltán; Fiedler, Saskia; Schröder, Tim; Triviño, Noelia Vico; Palacios, Tomás; Gali, Adam; Toth, Milos; Englund, Dirk; Aharonovich, Igor
2017-03-01
Room-temperature quantum emitters in gallium nitride (GaN) are reported. The emitters originate from cubic inclusions in a hexagonal lattice and exhibit narrowband luminescence in the red spectral range. The sources are found in different GaN substrates and are therefore promising for scalable quantum technologies. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Core Flight System (cFS) a Low Cost Solution for SmallSats
NASA Technical Reports Server (NTRS)
McComas, David; Strege, Susanne; Wilmot, Jonathan
2015-01-01
The cFS is a flight software (FSW) product line that uses a layered architecture and compile-time configuration parameters, which make it portable and scalable for a wide range of platforms. The software layers that define the application run-time environment are now under a NASA-wide configuration control board with the goal of sustaining an open-source application ecosystem.
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single processor and parallel performance up to 8 nodes—we have also tested the scalability on four different networks, namely Infiniband, GigaBit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
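As one concrete point of reference, enabling PME in GROMACS comes down to a handful of `.mdp` run-parameter settings; the values below are illustrative defaults, not the exact settings used in the benchmarks:

```
; Illustrative PME electrostatics block for a GROMACS .mdp file
coulombtype      = PME       ; particle-mesh Ewald for long-range electrostatics
rcoulomb         = 1.0       ; real-space Coulomb cutoff (nm)
fourierspacing   = 0.12      ; maximum FFT grid spacing (nm)
pme_order        = 4         ; cubic B-spline charge interpolation
ewald_rtol       = 1e-5      ; relative direct-space potential at rcoulomb
```

The FFT grid implied by `fourierspacing` is what makes PME's parallel scaling sensitive to network latency, which is consistent with the abstract's observation that PME remains competitive except on older PC-cluster interconnects.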
Mamlin, Burke W; Biondich, Paul G; Wolfe, Ben A; Fraser, Hamish; Jazayeri, Darius; Allen, Christian; Miranda, Justin; Tierney, William M
2006-01-01
Millions of people continue to die each year from HIV/AIDS. The majority of infected persons (>95%) live in the developing world. A worthy response to this pandemic will require coordinated, scalable, and flexible information systems. We describe the OpenMRS system, an open-source, collaborative effort that can serve as a foundation for EMR development in developing countries. We report our progress to date, lessons learned, and future directions.
2016 National Algal Biofuels Technology Review Fact Sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-06-01
Algae-based biofuels and bioproducts offer great promise in contributing to the U.S. Department of Energy (DOE) Bioenergy Technologies Office’s (BETO’s) vision of a thriving and sustainable bioeconomy fueled by innovative technologies. The state of technology for producing algal biofuels continues to mature with ongoing investment by DOE and the private sector, but additional research, development, and demonstration (RD&D) is needed to achieve widespread deployment of affordable, scalable, and sustainable algal biofuels.
Campbell, Karen J; Hesketh, Kylie D; McNaughton, Sarah A; Ball, Kylie; McCallum, Zoë; Lynch, John; Crawford, David A
2016-02-18
Understanding how we can prevent childhood obesity in scalable and sustainable ways is imperative. Early RCT interventions focused on the first two years of life have shown promise; however, differences in Body Mass Index between intervention and control groups diminish once the interventions cease. Innovative and cost-effective strategies seeking to continue to support parents to engender appropriate energy balance behaviours in young children need to be explored. The Infant Feeding Activity and Nutrition Trial (InFANT) Extend Program builds on the early outcomes of the Melbourne InFANT Program. This cluster randomized controlled trial will test the efficacy of an extended (33- versus 15-month) and enhanced (use of web-based materials and Facebook® engagement) version of the original Melbourne InFANT Program intervention in a new cohort. Outcomes at 36 months of age will be compared against the control group. This trial will provide important information regarding capacity and opportunities to maximize early childhood intervention effectiveness over the first three years of life. This study continues to build the evidence base regarding the design of cost-effective, scalable interventions to promote protective energy balance behaviours in early childhood and, in turn, promote improved child weight and health across the life course. ACTRN12611000386932. Registered 13 April 2011.
Complete quantum control of exciton qubits bound to isoelectronic centres.
Éthier-Majcher, G; St-Jean, P; Boso, G; Tosi, A; Klem, J F; Francoeur, S
2014-05-30
In recent years, impressive demonstrations related to quantum information processing have been realized. The scalability of quantum interactions between arbitrary qubits within an array remains however a significant hurdle to the practical realization of a quantum computer. Among the proposed ideas to achieve fully scalable quantum processing, the use of photons is appealing because they can mediate long-range quantum interactions and could serve as buses to build quantum networks. Quantum dots or nitrogen-vacancy centres in diamond can be coupled to light, but the former system lacks optical homogeneity while the latter suffers from a low dipole moment, rendering their large-scale interconnection challenging. Here, through the complete quantum control of exciton qubits, we demonstrate that nitrogen isoelectronic centres in GaAs combine both the uniformity and predictability of atomic defects and the dipole moment of semiconductor quantum dots. This establishes isoelectronic centres as a promising platform for quantum information processing.
Scalable lithography from Natural DNA Patterns via polyacrylamide gel
NASA Astrophysics Data System (ADS)
Qu, Jiehao; Hou, Xianliang; Fan, Wanchao; Xi, Guanghui; Diao, Hongyan; Liu, Xiangdon
2015-12-01
A facile strategy for fabricating scalable stamps has been developed using cross-linked polyacrylamide gel (PAMG) that controllably and precisely shrinks and swells with water content. Aligned patterns of natural DNA molecules were prepared by evaporative self-assembly on a PMMA substrate, and were transferred to unsaturated polyester resin (UPR) to form a negative replica. The negative was used to pattern the linear structures onto the surface of water-swollen PAMG, and the pattern sizes on the PAMG stamp were customized by adjusting the water content of the PAMG. As a result, consistent reproduction of DNA patterns could be achieved with feature sizes that can be controlled over the range of 40%-200% of the original pattern dimensions. This methodology is novel and may pave a new avenue for manufacturing stamp-based functional nanostructures in a simple and cost-effective manner on a large scale.
Optical nano-woodpiles: large-area metallic photonic crystals and metamaterials.
Ibbotson, Lindsey A; Demetriadou, Angela; Croxall, Stephen; Hess, Ortwin; Baumberg, Jeremy J
2015-02-09
Metallic woodpile photonic crystals and metamaterials operating across the visible spectrum are extremely difficult to construct over large areas, because of the intricate three-dimensional nanostructures and sub-50 nm features demanded. Previous routes use electron-beam lithography or direct laser writing but widespread application is restricted by their expense and low throughput. Scalable approaches including soft lithography, colloidal self-assembly, and interference holography, produce structures limited in feature size, material durability, or geometry. By multiply stacking gold nanowire flexible gratings, we demonstrate a scalable high-fidelity approach for fabricating flexible metallic woodpile photonic crystals, with features down to 10 nm produced in bulk and at low cost. Control of stacking sequence, asymmetry, and orientation elicits great control, with visible-wavelength band-gap reflections exceeding 60%, and with strong induced chirality. Such flexible and stretchable architectures can produce metamaterials with refractive index near zero, and are easily tuned across the IR and visible ranges.
Networking and AI systems: Requirements and benefits
NASA Technical Reports Server (NTRS)
1988-01-01
The price/performance benefits of network systems are well documented. The ability to share expensive resources drove timesharing on mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized as open-system requirements for hardware, software, applications, and tools. The ability to interconnect a variety of vendor products has led to the specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance, and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising, both for performance and scalability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tianbiao L.; Wei, Xiaoliang; Nie, Zimin
The worldwide increasing energy demands and rising CO2 emissions motivate a search for new technologies to take advantage of renewable energy sources such as solar and wind. Rechargeable redox flow batteries (RFBs), with their high power density, high energy efficiency, scalability (up to MW and MWh), and safety features, are one suitable option for integrating such energy sources and overcoming their intermittency. Limited material sources and prohibitively high system costs of current RFB technologies impede wide implementation. Here we report a total organic aqueous redox flow battery (OARFB), using low-cost and sustainable MV (anolyte) and 4-HO-TEMPO (catholyte), and a benign NaCl supporting electrolyte. The electrochemical properties of the organic redox-active materials were studied using cyclic voltammetry and rotating disk electrode voltammetry. The MV/4-HO-TEMPO ARFB has an exceptionally high cell voltage, 1.25 V. Prototypes of the organic ARFB can be operated at high current densities ranging from 20 to 100 mA/cm2, and deliver stable capacity for 100 cycles with nearly 100% coulombic efficiency. The overall technical characteristics of the MV/4-HO-TEMPO ARFB are very attractive for continued technical development.
Transparent heaters made by ultrasonic spray pyrolysis of SnO2 on soda-lime glass substrates
NASA Astrophysics Data System (ADS)
Ansari, Mohammad; Akbari-Saatloo, Mehdi; Gharesi, Mohsen
2017-12-01
Transparent heaters have become important owing to the increasing demand in automotive and display device manufacturing industries. Indium tin oxide (ITO) is the most commonly used material for production of transparent heaters, but the fabrication cost is high as the indium resources are diminishing fast. This has been the driving force behind the intense research for discovering more durable and cost-effective alternatives. Tin oxide, with its high temperature stability and coexisting high levels of conductivity and transparency, can replace expensive ITO in the fabrication of transparent heaters. Here, we propose tin oxide films deposited using ultrasonic spray pyrolysis as the raw material for the fabrication of transparent heaters. Silver contacts are paste printed on the deposited SnO2 layers, which provide the necessary connections to the external circuitry. Deposition of films having sheet resistance in the 150 Ω/□ range takes only ∼5 minutes and the utilized methods are fully scalable to mass production level. Durability tests, carried out for weeks of continuous operation at different elevated temperatures, demonstrated the long load life of the produced heaters.
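The heating performance implied by a given sheet resistance follows from simple square counting: the film resistance between two bus-bar contacts is R = Rs·(L/W), and the Joule power is P = V²/R. The heater geometry and drive voltage below are assumptions for illustration; only the ~150 Ω/□ sheet resistance comes from the abstract:

```python
def heater_power(sheet_resistance_ohm_sq, length_m, width_m, voltage_v):
    """Joule heating power of a thin-film heater.

    The film resistance between two parallel bus-bar contacts is
    R = R_s * (L / W), where L/W counts the number of 'squares'
    between the contacts; the dissipated power is then P = V^2 / R.
    """
    resistance = sheet_resistance_ohm_sq * (length_m / width_m)
    return voltage_v ** 2 / resistance

# Hypothetical 5 cm x 5 cm heater (one square) at the reported
# ~150 ohm/sq, driven at 12 V: R = 150 ohm, so P = 144 / 150 W
p = heater_power(150.0, 0.05, 0.05, 12.0)
```

For this assumed geometry the heater dissipates just under 1 W; a wider, shorter film (fewer squares) at the same voltage would dissipate proportionally more.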
Silk protein nanowires patterned using electron beam lithography.
Pal, Ramendra K; Yadavalli, Vamsi K
2018-08-17
Nanofabrication approaches to pattern proteins at the nanoscale are useful in applications ranging from organic bioelectronics to cellular engineering. Specifically, functional materials based on natural polymers offer sustainable and environment-friendly substitutes to synthetic polymers. Silk proteins (fibroin and sericin) have emerged as an important class of biomaterials for next-generation applications owing to excellent optical and mechanical properties, inherent biocompatibility, and biodegradability. However, the ability to precisely control their spatial positioning at the nanoscale via high-throughput tools continues to remain a challenge. In this study, electron beam lithography (EBL) is used to provide nanoscale patterning using methacrylate-conjugated silk proteins that act as photoreactive 'photoresist' materials. Very low energy electron beam radiation can be used to pattern silk proteins at the nanoscale and over large areas, whereby such nanostructure fabrication can be performed without specialized EBL tools. Significantly, using conducting polymers in conjunction with these silk proteins, the formation of protein nanowires down to 100 nm is shown. These wires can be easily degraded using enzymatic degradation. Thus, proteins can be precisely and scalably patterned and doped with conducting polymers and enzymes to form degradable, organic bioelectronic devices.
Improvements to the APBS biomolecular solvation software suite.
Jurrus, Elizabeth; Engel, Dave; Star, Keith; Monson, Kyle; Brandi, Juan; Felberg, Lisa E; Brookes, David H; Wilson, Leighton; Chen, Jiahui; Liles, Karina; Chun, Minju; Li, Peter; Gohara, David W; Dolinsky, Todd; Konecny, Robert; Koes, David R; Nielsen, Jens Erik; Head-Gordon, Teresa; Geng, Weihua; Krasny, Robert; Wei, Guo-Wei; Holst, Michael J; McCammon, J Andrew; Baker, Nathan A
2018-01-01
The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages that have provided impact in the study of a broad range of chemical, biological, and biomedical applications. APBS addresses the three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this article, we discuss the models and capabilities that have recently been implemented within the APBS software package including a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics. © 2017 The Protein Society.
Greensilica® vectors for smart textiles.
Matos, Joana C; Avelar, Inês; Martins, M Bárbara F; Gonçalves, M Clara
2017-01-20
The present work aims to develop a versatile Greensilica® vector/carrier, able to bind to a wide range of textile matrices of carbohydrate polymers and capable of being loaded with chemicals/drugs/therapeutic molecules, to create a green tailor-made (multi)functional high-tech textile. A green, eco-friendly, ammonia-free, easily scalable, time-saving sol-gel process was established for the production of these silica-based colloidal particles (SiO2, amine-SiO2, diamine-SiO2, and epoxy-SiO2). Two different textile matrices (cotton, polyester) were functionalized through the impregnation of Greensilica® particles. The impregnation was performed with and without cure. Diamine-SiO2 colloidal particles exhibited the highest bonding efficiency in cured textile matrices (both cotton and polyester), while without cure the best adherence to cotton and polyester textile matrices was achieved with diamine-SiO2 and amine-SiO2, respectively. Both single-use ('use once and throw away') and continued-use applications were envisaged and screened through washing tests. The efficiency of the textile impregnation was confirmed by SEM and quantified by ICP. Copyright © 2016 Elsevier Ltd. All rights reserved.
High frequency signal acquisition and control system based on DSP+FPGA
NASA Astrophysics Data System (ADS)
Liu, Xiao-qi; Zhang, Da-zhi; Yin, Ya-dong
2017-10-01
This paper introduces the design and implementation of a high-frequency signal acquisition and control system based on DSP + FPGA. The system supports internal/external clock and internal/external trigger sampling. It has a maximum sampling rate of 400 MBPS and a 1.4 GHz input bandwidth for the ADC. Data can be collected continuously or periodically and are stored in DDR2. The system also supports real-time acquisition, in which the collected data undergo digital frequency conversion and Cascaded Integrator-Comb (CIC) filtering, are sent to the CPCI bus through the high-speed DSP, and can then be assigned to the fiber board for subsequent processing. The system integrates signal acquisition and pre-processing functions, uses mixed high-speed A/D, high-speed DSP, and FPGA technology, and has a wide range of uses in data acquisition and recording. For signal processing, the system can be seamlessly connected to a dedicated processor board. The system offers flexible configuration options and good scalability, satisfying the requirements of different signals in different projects.
Benigni, Matthew C; Joseph, Kenneth; Carley, Kathleen M
2017-01-01
The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and "lone wolf" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS-supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contributes to the group's propaganda dissemination through retweets.
A scalable multi-DLP pico-projector system for virtual reality
NASA Astrophysics Data System (ADS)
Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.
2014-03-01
Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors, calibrated manually or with a camera, to reduce the cost of VR systems without a significant decrease in the visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, thus increasing costs. We propose a low-cost, scalable, flexible and mobile solution that allows building complex VR systems that project images onto a variety of arbitrary surfaces such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility purposes. Therefore, the proposed system is scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that blends all projectors into a uniform image that is presented to viewers. FastFusion uses a camera to automatically calibrate geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a small overlap of a few pixels between them. We present results with eight pico-projectors, each with 7 lumens (LED) and a DLP 0.17 HVGA chipset.
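The photometric side of multi-projector blending can be illustrated with a linear cross-fade across the shared overlap region. This is a generic sketch of the technique, not FastFusion's actual algorithm, and the overlap width is an assumption:

```python
def overlap_ramps(n):
    """Linear cross-fade weights across an n-pixel projector overlap.

    The left projector's contribution ramps down while the right's ramps
    up, so the combined light output stays constant across the seam and
    no bright double-projected band is visible.
    """
    left = [1.0 - i / (n - 1) for i in range(n)]
    right = [i / (n - 1) for i in range(n)]
    return left, right

# Weights for a hypothetical 8-pixel-wide overlap between two projectors
left, right = overlap_ramps(8)
```

Real systems typically apply a gamma correction to these ramps so the *perceived* brightness, not the linear drive value, sums to a constant, but the complementary-ramp idea is the same.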
A scalable healthcare information system based on a service-oriented architecture.
Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei
2011-06-01
Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7 Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.
A simulator tool set for evaluating HEVC/SHVC streaming
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James; Wang, Qi; Grecos, Christos; Kehtarnavaz, Nasser
2015-02-01
Video streaming and other multimedia applications account for an ever-increasing proportion of all network traffic. The recent adoption of High Efficiency Video Coding (HEVC) as the H.265 standard provides many opportunities for new and improved multimedia services and applications in the consumer domain. Since the delivery of version one of H.265, the Joint Collaborative Team on Video Coding has been working towards standardisation of a scalable extension (SHVC) to the H.265 standard and a series of range extensions and new profiles. As these enhancements are added to the standard, the range of potential applications and research opportunities will expand. For example, the use of video is also growing rapidly in other sectors such as safety, security, defence and health, with real-time high-quality video transmission playing an important role in areas like critical infrastructure monitoring and disaster management, each of which may benefit from the application of enhanced HEVC/H.265 and SHVC capabilities. The majority of existing research into HEVC/H.265 transmission has focussed on the consumer domain, addressing issues such as broadcast transmission and delivery to mobile devices, with the lack of freely available tools widely cited as an obstacle to conducting this type of research. In this paper we present a toolset which facilitates the transmission and evaluation of HEVC/H.265 and SHVC encoded video on the popular open-source NCTUns simulator. Our toolset provides researchers with a modular, easy-to-use platform for evaluating video transmission and adaptation proposals on large-scale wired, wireless and hybrid architectures. The toolset consists of pre-processing, transmission, SHVC adaptation and post-processing tools to gather and analyse statistics.
It has been implemented using HM15 and SHM5, the latest versions of the HEVC and SHVC reference software implementations, to ensure that currently adopted proposals for scalable and range extensions to the standard can be investigated. We demonstrate the effectiveness and usability of our toolset by evaluating SHVC streaming and adaptation to meet terminal constraints and network conditions in a range of wired, wireless, and large-scale wireless mesh network scenarios, each of which is designed to simulate a realistic environment. Our results are compared to those for H.264/SVC, the scalable extension to the existing H.264/AVC advanced video coding standard.
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie
2008-10-01
During the urbanization process, when facing complex requirements of city development, ever-growing urban data, rapid development of planning business and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system, designed to deal with these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, scalability of the DLL-based architecture, flexibility across GIS and database platforms, and scalability of data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability, meeting the requirements of all levels of administrative divisions and adapting to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.
Entanglement in a Quantum Annealing Processor
2016-09-07
that QA is a viable technology for large-scale quantum computing. DOI: 10.1103/PhysRevX.4.021041 Subject Areas: Quantum Physics, Quantum Information... Superconductivity I. INTRODUCTION The past decade has been exciting for the field of quantum computation. A wide range of physical implementations... measurements used in studying prototype universal quantum computers [9–14]. These constraints make it challenging to experimentally determine whether a scalable
Recent advances in superconducting nanowire single photon detectors for single-photon imaging
NASA Astrophysics Data System (ADS)
Verma, V. B.; Allman, M. S.; Stevens, M.; Gerrits, T.; Horansky, R. D.; Lita, A. E.; Marsili, F.; Beyer, A.; Shaw, M. D.; Stern, J. A.; Mirin, R. P.; Nam, S. W.
2016-05-01
We demonstrate a 64-pixel free-space-coupled array of superconducting nanowire single photon detectors optimized for high detection efficiency in the near-infrared range. An integrated, readily scalable, multiplexed readout scheme is employed to reduce the number of readout lines to 16. The cryogenic, optical, and electronic packaging used to read out the array, as well as characterization measurements, are discussed.
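The reduction from 64 pixels to 16 readout lines is consistent with a row-column multiplexing scheme on an 8 × 8 array, in which each pixel is addressed by one row line and one column line, so an n × n array needs only 2n lines. This is a hedged illustration of the line-count arithmetic; the paper's exact multiplexing circuit may differ.

```python
import math

def readout_lines(n_pixels: int) -> int:
    """Readout lines for a square row-column multiplexed array:
    an n x n array needs n row lines + n column lines = 2n."""
    n = math.isqrt(n_pixels)
    if n * n != n_pixels:
        raise ValueError("expected a square pixel count")
    return 2 * n

# 64-pixel (8 x 8) array -> 16 readout lines, as in the abstract
print(readout_lines(64))  # → 16
```

The same arithmetic shows why such schemes scale well: a 1024-pixel (32 × 32) array would need only 64 lines.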
Comparative-effectiveness research in distributed health data networks.
Toh, S; Platt, R; Steiner, J F; Brown, J S
2011-12-01
Comparative-effectiveness research (CER) can be conducted within a distributed health data network. Such networks allow secure access to separate data sets from different data partners and overcome many practical obstacles related to patient privacy, data security, and proprietary concerns. A scalable network architecture supports a wide range of CER activities and meets the data infrastructure needs envisioned by the Federal Coordinating Council for Comparative Effectiveness Research.
Cost Considerations in Cloud Computing
2014-01-01
investments. 2. Database Options The potential promise that “big data” analytics holds for many enterprise mission areas makes relevant the question of the... development of a range of new distributed file systems and databases that have better scalability properties than traditional SQL databases. Hadoop... data. Many systems exist that extend or supplement Hadoop, such as Apache Accumulo, which provides a highly granular mechanism for managing security
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded wide-spread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
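The O(N^2) cost the abstract refers to is that of the direct all-pairs evaluation of the Green's-function sum, which an FMM reduces to O(N). A minimal sketch of the direct evaluation for a Coulomb-like 1/r kernel (the kernel and unit charges are illustrative, not the paper's specific problem):

```python
import math

def direct_potential(points, charges):
    """Direct O(N^2) evaluation of phi_i = sum_{j != i} q_j / |r_i - r_j|.
    This all-pairs loop is the cost a kernel-independent FMM reduces to O(N)."""
    n = len(points)
    phi = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = math.dist(points[i], points[j])  # Euclidean distance
            phi[i] += charges[j] / r
    return phi

# Two unit charges separated by distance 2: each sees potential 1/2
print(direct_potential([(0, 0, 0), (2, 0, 0)], [1.0, 1.0]))  # → [0.5, 0.5]
```

Doubling N quadruples the work of this loop, which is exactly the scaling barrier that motivates the FMM approach described above.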
Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.
Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt
2009-05-30
The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
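The quoted 3 terabytes per day follows from the stated acquisition parameters (320 channels, 32 kHz sampling, 4 bytes per sample); a quick check of the arithmetic, which lands slightly above the round figure in the abstract:

```python
channels = 320          # hybrid micro/macro electrode channels
rate_hz = 32_000        # samples per second per channel
bytes_per_sample = 4    # A/D samples stored as 4-byte values
seconds_per_day = 86_400

bytes_per_day = channels * rate_hz * bytes_per_sample * seconds_per_day
print(f"{bytes_per_day / 1e12:.2f} TB/day")  # → 3.54 TB/day
```

At this rate even modest compression ratios translate into terabytes saved per week, which is why the real-time appendable compression format described above matters.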
Binary Interval Search: a scalable algorithm for counting interval intersections
Layer, Ryan M.; Skadron, Kevin; Robins, Gabriel; Hall, Ira M.; Quinlan, Aaron R.
2013-01-01
Motivation: The comparison of diverse genomic datasets is fundamental to understanding genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units, by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. Availability: https://github.com/arq5x/bits. Contact: arq5x@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23129298
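The core binary-search idea behind BITS can be sketched in a few lines (a simplified single-query version; the names and structure here are illustrative, not the authors' implementation): with the starts and ends of one interval set kept sorted, the number of intervals intersecting a query [s, e] is the total count minus those ending before s and those starting after e, each obtainable with one O(log n) binary search.

```python
from bisect import bisect_left, bisect_right

def count_intersections(intervals, query):
    """Count intervals [a, b] overlapping query [s, e] (inclusive).
    An interval misses the query iff b < s or a > e; the two miss-counts
    are disjoint and each comes from one binary search."""
    s, e = query
    starts = sorted(a for a, _ in intervals)
    ends = sorted(b for _, b in intervals)
    n = len(intervals)
    missed = (n - bisect_right(starts, e)) + bisect_left(ends, s)
    return n - missed

genes = [(1, 5), (3, 8), (10, 12)]
print(count_intersections(genes, (4, 11)))  # all three overlap → 3
```

Because each query is independent given the sorted arrays, many queries can be answered in parallel, which is the property that makes the approach well suited to GPUs as described above.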
Rapid, low cost prototyping of transdermal devices for personal healthcare monitoring.
Sharma, Sanjiv; Saeed, Anwer; Johnson, Christopher; Gadegaard, Nikolaj; Cass, Anthony Eg
2017-04-01
The next generation of devices for personal healthcare monitoring will comprise molecular sensors to monitor analytes of interest in the skin compartment. Transdermal devices based on microneedles offer an excellent opportunity to explore the dynamics of molecular markers in the interstitial fluid; however, good acceptability of these next generation devices will require several technical problems associated with current commercially available wearable sensors to be overcome, particularly reliability, comfort and cost. An essential pre-requisite for transdermal molecular sensing devices is that they can be fabricated using scalable, cost-effective technologies. We present here a minimally invasive microneedle array as a continuous monitoring platform technology, together with a method for scalable fabrication of these structures. The microneedle arrays were characterised mechanically and were shown to penetrate human skin under moderate thumb pressure. They were then functionalised and evaluated as glucose, lactate and theophylline biosensors. The results suggest that this technology can be employed in the measurement of metabolites, therapeutic drugs and biomarkers and could have an important role to play in the management of chronic diseases.
Open release of the DCA++ project
NASA Astrophysics Data System (ADS)
Haehner, Urs; Solca, Raffaele; Staar, Peter; Alvarez, Gonzalo; Maier, Thomas; Summers, Michael; Schulthess, Thomas
We present the first open release of the DCA++ project, a highly scalable and efficient research code to solve quantum many-body problems with cutting edge quantum cluster algorithms. The implemented dynamical cluster approximation (DCA) and its DCA+ extension with a continuous self-energy capture nonlocal correlations in strongly correlated electron systems, thereby allowing insight into high-Tc superconductivity. With the increasing heterogeneity of modern machines, DCA++ provides portable performance on conventional and emerging new architectures, such as hybrid CPU-GPU and Xeon Phi, sustaining multiple petaflops on ORNL's Titan and CSCS' Piz Daint. Moreover, we will describe how best practices in software engineering can be applied to make software development sustainable and scalable in a research group. Software testing and documentation not only prevent productivity collapse, but more importantly, they are necessary for correctness, credibility and reproducibility of scientific results. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Robust and compact entanglement generation from diode-laser-pumped four-wave mixing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrie, B. J.; Yang, Y.; Eaton, M.
Four-wave-mixing processes are now routinely used to demonstrate multi-spatial-mode Einstein-Podolsky-Rosen entanglement and intensity difference squeezing. Recently, diode-laser-pumped four-wave mixing processes have been shown to provide an affordable, compact, and stable source for intensity difference squeezing, but it was unknown if excess phase noise present in power amplifier pump configurations would be an impediment to achieving quadrature entanglement. Here, we demonstrate the operating regimes under which these systems are capable of producing entanglement and under which excess phase noise produced by the amplifier contaminates the output state. We show that Einstein-Podolsky-Rosen entanglement in two mode squeezed states can be generated by a four-wave-mixing source deriving both the pump field and the local oscillators from a tapered-amplifier diode-laser. In conclusion, this robust continuous variable entanglement source is highly scalable and amenable to miniaturization, making it a critical step toward the development of integrated quantum sensors and scalable quantum information processors, such as spatial comb cluster states.
Synthesis of millimeter-scale transition metal dichalcogenides single crystals
Gong, Yongji; Ye, Gonglan; Lei, Sidong; ...
2016-02-10
The emergence of semiconducting transition metal dichalcogenide (TMD) atomic layers has opened up unprecedented opportunities in atomically thin electronics. Yet the scalable growth of TMD layers with large grain sizes and uniformity has remained very challenging. Here, a simple, scalable chemical vapor deposition approach for the growth of MoSe2 layers is reported, in which the nucleation density can be reduced from 10^5 to 25 nuclei cm^-2, leading to millimeter-scale MoSe2 single crystals as well as continuous macrocrystalline films with millimeter size grains. The selective growth of monolayers and multilayered MoSe2 films with well-defined stacking orientation can also be controlled via tuning the growth temperature. In addition, periodic defects, such as nanoscale triangular holes, can be engineered into these layers by controlling the growth conditions. The low density of grain boundaries in the films results in high average mobilities, around ≈42 cm^2 V^-1 s^-1, for back-gated MoSe2 transistors. This generic synthesis approach is also demonstrated for other TMD layers such as millimeter-scale WSe2 single crystals.
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Distributed controller clustering in software defined networks.
Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.
Silver nanoparticles: Synthesis methods, bio-applications and properties.
Abbasi, Elham; Milani, Morteza; Fekri Aval, Sedigheh; Kouhi, Mohammad; Akbarzadeh, Abolfazl; Tayefi Nasrabadi, Hamid; Nikasa, Parisa; Joo, San Woo; Hanifehpour, Younes; Nejati-Koshki, Kazem; Samiei, Mohammad
2016-01-01
The size of silver nanoparticles opens up a wide range of new applications in various fields of industry. Synthesis of noble metal nanoparticles for applications such as catalysis, electronics, optics, environmental applications and biotechnology is an area of constant interest. The two main methods for producing silver nanoparticles are physical and chemical; the problem with these methods is the absorption of toxic substances onto the particles. Green synthesis approaches overcome this limitation. This article summarizes exclusively scalable techniques and focuses on their strengths and limitations with respect to biomedical applicability and regulatory requirements concerning silver nanoparticles.
Scalability problems of simple genetic algorithms.
Thierens, D
1999-01-01
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
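For concreteness, a minimal "simple genetic algorithm" of the kind the abstract analyzes can be sketched as follows (an illustrative toy on the one-max problem; the population size, rates, and objective are my assumptions, not from the paper). With elitism, the best fitness is monotonically non-decreasing across generations:

```python
import random

def one_max(bits):
    """Toy fitness: count of 1-bits; optimum is the all-ones string."""
    return sum(bits)

def simple_ga(length=20, pop_size=40, gens=60, p_mut=0.02, seed=1):
    """Generational simple GA: binary tournament selection, uniform
    crossover, bitwise mutation, and one elite copied unchanged."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if one_max(a) >= one_max(b) else b

    best = max(pop, key=one_max)
    for _ in range(gens):
        nxt = [best[:]]  # elitism: keep the best individual
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
        best = max(pop, key=one_max)
    return best

print(one_max(simple_ga()))
```

Uniform crossover mixes bits freely, which is fine for one-max (every bit is an independent building block) but, as the paper argues, becomes a scalability bottleneck when building blocks span multiple loosely linked loci.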
Systematic and Scalable Testing of Concurrent Programs
2013-12-16
The evaluation of CHESS [107] checked eight different programs ranging from process management libraries to a distributed execution engine to a research... tool (§3.1) targets systematic testing of scheduling nondeterminism in multithreaded components of the Omega cluster management system [129], while... tool for systematic testing of multithreaded components of the Omega cluster management system [129]. In particular, §3.1.1 defines a model for
2007-09-01
[Front-matter excerpt, table of figures: COASTS 2006 802.11 network topology field experiment at Mae Ngat Dam, Chiang Mai, Thailand.]
Augmented longitudinal acoustic trap for scalable microparticle enrichment.
Cui, M; Binkley, M M; Shekhani, H N; Berezin, M Y; Meacham, J M
2018-05-01
We introduce an acoustic microfluidic device architecture that locally augments the pressure field for separation and enrichment of targeted microparticles in a longitudinal acoustic trap. Pairs of pillar arrays comprise "pseudo walls" that are oriented perpendicular to the inflow direction. Though sample flow is unimpeded, pillar arrays support half-wave resonances that correspond to the array gap width. Positive acoustic contrast particles of supracritical diameter focus to nodal locations of the acoustic field and are held against drag from the bulk fluid motion. Thus, the longitudinal standing bulk acoustic wave (LSBAW) device achieves size-selective and material-specific separation and enrichment of microparticles from a continuous sample flow. A finite element analysis model is used to predict eigenfrequencies of LSBAW architectures with two pillar geometries, slanted and lamellar. Corresponding pressure fields are used to identify longitudinal resonances that are suitable for microparticle enrichment. Optimal operating conditions exhibit maxima in the ratio of acoustic energy density in the LSBAW trap to that in inlet and outlet regions of the microchannel. Model results guide fabrication and experimental evaluation of realized LSBAW assemblies regarding enrichment capability. We demonstrate separation and isolation of 20 μm polystyrene and ∼10 μm antibody-decorated glass beads within both pillar geometries. The results also establish several practical attributes of our approach. The LSBAW device is inherently scalable and enables continuous enrichment at a prescribed location. These features benefit separations applications while also allowing concurrent observation and analysis of trap contents.
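A half-wave resonance across a gap of width w satisfies λ = 2w, so the operating frequency scales as f = c / (2w). The numbers below are illustrative assumptions (sound speed of water at room temperature, a 300 µm gap), not values taken from the paper:

```python
def half_wave_resonance(c_m_per_s: float, gap_m: float) -> float:
    """Frequency of a half-wavelength standing wave spanning a gap:
    lambda = 2 * gap, hence f = c / (2 * gap)."""
    return c_m_per_s / (2.0 * gap_m)

c_water = 1480.0  # m/s, speed of sound in water (assumed medium)
gap = 300e-6      # m, illustrative pillar-array gap width
f = half_wave_resonance(c_water, gap)
print(f"{f / 1e6:.2f} MHz")  # → 2.47 MHz
```

The inverse relationship means narrower gaps push the trap toward higher drive frequencies, which is one design trade-off when sizing pillar arrays for a target particle diameter.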
Photoreactive elastin-like proteins for use as versatile bioactive materials and surface coatings
Raphel, Jordan; Parisi-Amon, Andreina; Heilshorn, Sarah
2012-01-01
Photocrosslinkable, protein-engineered biomaterials combine a rapid, controllable, cytocompatible crosslinking method with a modular design strategy to create a new family of bioactive materials. These materials have a wide range of biomedical applications, including the development of bioactive implant coatings, drug delivery vehicles, and tissue engineering scaffolds. We present the successful functionalization of a bioactive elastin-like protein with photoreactive diazirine moieties. Scalable synthesis is achieved using a standard recombinant protein expression host followed by site-specific modification of lysine residues with a heterobifunctional N-hydroxysuccinimide ester-diazirine crosslinker. The resulting biomaterial is demonstrated to be processable by spin coating, drop casting, soft lithographic patterning, and mold casting to fabricate a variety of two- and three-dimensional photocrosslinked biomaterials with length scales spanning the nanometer to millimeter range. Protein thin films proved to be highly stable over a three-week period. Cell-adhesive functional domains incorporated into the engineered protein materials were shown to remain active post-photo-processing. Human adipose-derived stem cells achieved faster rates of cell adhesion and larger spread areas on thin films of the engineered protein compared to control substrates. The ease and scalability of material production, processing versatility, and modular bioactive functionality make this recombinantly engineered protein an ideal candidate for the development of novel biomaterial coatings, films, and scaffolds. PMID:23015764
NASA Astrophysics Data System (ADS)
Grant, K. D.; Johnson, B. R.; Miller, S. W.; Jamilkowski, M. L.
2014-12-01
The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). The Joint Polar Satellite System will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological and geophysical observations of the Earth. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS). Developed and maintained by Raytheon Intelligence, Information and Services (IIS), the CGS is a multi-mission enterprise system serving NOAA, NASA and their national and international partners. The CGS provides a wide range of support to a number of missions. Originally designed to support S-NPP and JPSS, the CGS has demonstrated its scalability and flexibility to incorporate all of these other important missions efficiently and with minimal cost, schedule and risk, while strengthening global partnerships in weather and environmental monitoring. The CGS architecture will be upgraded to Block 2.0 in 2015 to satisfy several key objectives, including: "operationalizing" S-NPP, which had originally been intended as a risk reduction mission; leveraging lessons learned to date in multi-mission support; taking advantage of newer, more reliable and efficient technologies; and satisfying new requirements and constraints due to the continually evolving budgetary environment. To ensure the CGS meets these needs, we have developed 48 Technical Performance Measures (TPMs) across 9 categories: Data Availability, Data Latency, Operational Availability, Margin, Scalability, Situational Awareness, Transition (between environments and sites), WAN Efficiency, and Data Recovery Processing. 
This paper will provide an overview of the CGS Block 2.0 architecture, with particular focus on the 9 TPM categories listed above. We will describe how we ensure the deployed architecture meets these TPMs to satisfy our multi-mission objectives with the deployment of Block 2.0 in 2015.
Multifunctional, supramolecular, continuous artificial nacre fibres
NASA Astrophysics Data System (ADS)
Hu, Xiaozhen; Xu, Zhen; Gao, Chao
2012-10-01
Nature has created amazing materials during the process of evolution, inspiring scientists to mimic them closely. Nacre is of particular interest: it has been studied for more than half a century for the strong, stiff, and tough attributes that result from its well-known ``brick-and-mortar'' (B&M) layered structure of inorganic aragonite platelets and biomacromolecules. The past two decades have witnessed great advances in nacre-mimetic composites, but these have been limited to films of finite (centimetre-scale) size. Realizing continuous nacre mimics with perfect structures remains a great, unresolved challenge. Here, we present a simple and scalable strategy to produce biomimetic continuous fibres with B&M structures of alternating graphene sheets and hyperbranched polyglycerol (HPG) binders via wet-spinning assembly technology. The resulting macroscopic supramolecular fibres exhibit excellent mechanical properties, comparable or even superior to nacre and bone, and possess fine electrical conductivity and outstanding corrosion resistance.
QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.
O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong
2016-05-04
Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utility and potential of gene expression connectivity mapping in biomedicine. We describe QUADrATiC ( http://go.qub.ac.uk/QUADrATiC ), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds with therapeutic repurposing potential. The software is designed to cope with the increased volume of data over existing tools by taking advantage of multicore computing architectures to provide a scalable solution, which may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by use of the modern concurrent programming paradigm of the Akka framework. The QUADrATiC Graphical User Interface (GUI) has been developed using advanced Javascript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It gives biological researchers the ability to analyze transcriptional data and generate potential therapeutics for focussed study in the lab.
QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically-relevant results than previous alternative solutions.
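QUADrATiC implements its concurrent scoring with Scala/Akka actors; purely as an illustration of the fan-out pattern the abstract describes, the following Python sketch scores a query gene signature against many reference rankings in parallel. The scoring formula, gene names, and `map_query` helper are hypothetical simplifications, not the QUADrATiC API.

```python
from concurrent.futures import ThreadPoolExecutor

def connection_score(query_up, query_down, ranked_genes):
    # Toy connectivity score in [-1, 1]: up-regulated query genes score
    # highest near the top of the compound's ranked list, down-regulated
    # genes near the bottom. (Hypothetical; not the QUADrATiC formula.)
    n = len(ranked_genes)
    pos = {g: i for i, g in enumerate(ranked_genes)}
    score = 0.0
    for g in query_up:
        if g in pos:
            score += 1.0 - 2.0 * pos[g] / (n - 1)
    for g in query_down:
        if g in pos:
            score -= 1.0 - 2.0 * pos[g] / (n - 1)
    return score / (len(query_up) + len(query_down))

def map_query(query_up, query_down, reference_profiles, workers=4):
    # Fan the per-compound scoring out across a worker pool, the same
    # pattern QUADrATiC realises with Akka actors on multicore hardware.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {name: pool.submit(connection_score, query_up, query_down, ranking)
                   for name, ranking in reference_profiles.items()}
        return {name: f.result() for name, f in futures.items()}
```

Because each compound's score is independent of the others, the problem is embarrassingly parallel, which is what makes the actor-based scale-out in QUADrATiC effective.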
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
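The IRM measure mentioned above matches regions between two images by a "most similar, highest priority" greedy rule. Below is a minimal sketch of that idea, assuming scalar region features and area weights that sum to one; both are simplifications (real IRM uses multi-dimensional colour/texture/shape features), and the function name is ours.

```python
def irm_similarity(regions_a, regions_b):
    # Each image is a list of (weight, feature) regions, weights summing
    # to 1. Region pairs are visited in order of feature distance; each
    # match transfers as much weight as both regions still carry, and the
    # overall distance is the weight-averaged distance of the matches.
    wa = [w for w, _ in regions_a]
    wb = [w for w, _ in regions_b]
    pairs = sorted(
        (abs(fa - fb), i, j)
        for i, (_, fa) in enumerate(regions_a)
        for j, (_, fb) in enumerate(regions_b))
    total = 0.0
    for d, i, j in pairs:
        credit = min(wa[i], wb[j])
        if credit > 0:
            total += credit * d
            wa[i] -= credit
            wb[j] -= credit
    return total  # 0.0 means the two region sets match exactly
```

The soft, weighted matching is what makes IRM robust to imperfect segmentation: every region contributes, rather than a single best-match region dominating the score.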
Temporally Scalable Visual SLAM using a Reduced Pose Graph
2012-05-25
MIT-CSAIL-TR-2012-013, May 25, 2012 (MIT CSAIL, Cambridge, MA, USA; www.csail.mit.edu). We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that use…
Wu, Fan; Stark, Eran; Ku, Pei-Cheng; Wise, Kensall D.; Buzsáki, György; Yoon, Euisik
2015-01-01
We report a scalable method to monolithically integrate microscopic light emitting diodes (μLEDs) and recording sites onto silicon neural probes for optogenetic applications in neuroscience. Each μLED and recording site has dimensions similar to a pyramidal neuron soma, providing confined emission and electrophysiological recording of action potentials and local field activity. We fabricated and implanted the four-shank probes, each integrated with 12 μLEDs and 32 recording sites, into the CA1 pyramidal layer of anesthetized and freely moving mice. Spikes were robustly induced by 60 nW light power, and fast population oscillations were induced at the microwatt range. To demonstrate the spatiotemporal precision of parallel stimulation and recording, we achieved independent control of distinct cells ~50 μm apart and of differential somatodendritic compartments of single neurons. The scalability and spatiotemporal resolution of this monolithic optogenetic tool provides versatility and precision for cellular-level circuit analysis in deep structures of intact, freely moving animals. PMID:26627311
Using S3 cloud storage with ROOT and CvmFS
NASA Astrophysics Data System (ADS)
Arsuaga-Ríos, María; Heikkilä, Seppo S.; Duellmann, Dirk; Meusel, René; Blomer, Jakob; Couturier, Ben
2015-12-01
Amazon S3 is a widely adopted web API for scalable cloud storage that could also fulfill the storage requirements of the high-energy physics community. CERN has been evaluating this option using key HEP applications such as ROOT and the CernVM filesystem (CvmFS) with S3 back-ends. In this contribution we present an evaluation of two versions of the Huawei UDS storage system stressed with a large number of clients executing HEP software applications. The performance of concurrently storing individual objects is presented alongside more complex data access patterns as produced by the ROOT data analysis framework. Both Huawei UDS generations scale successfully and support multiple byte-range requests, in contrast with Amazon S3 or Ceph, which do not support these commonly used HEP operations. We further report on the S3 integration with recent CvmFS versions and summarize the experience with CvmFS/S3 for publishing daily releases of the full LHCb experiment software stack.
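The byte-range operations at issue are plain HTTP/1.1 range requests: a ROOT vector read becomes a `Range: bytes=a-b,c-d` header, and a compliant endpoint answers 206 Partial Content with a `Content-Range` header. A small illustrative sketch of building and parsing those headers (the helper names are ours, not from ROOT or CvmFS):

```python
def range_header(offsets):
    # Build the multi-range header a ROOT-style vector read translates to,
    # e.g. [(0, 99), (500, 599)] -> "bytes=0-99,500-599". Storage systems
    # that reject multi-range requests force one round trip per segment.
    return "bytes=" + ",".join(f"{a}-{b}" for a, b in offsets)

def parse_content_range(value):
    # Parse a 206 response header such as "bytes 0-99/1234" into
    # (start, end, total_size), per the HTTP range-requests syntax.
    unit, _, rest = value.partition(" ")
    span, _, total = rest.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)
```

Support for the multi-range form is exactly what distinguished the Huawei UDS back-ends in the evaluation above.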
High-Speed and Scalable Whole-Brain Imaging in Rodents and Primates.
Seiriki, Kaoru; Kasai, Atsushi; Hashimoto, Takeshi; Schulze, Wiebke; Niu, Misaki; Yamaguchi, Shun; Nakazawa, Takanobu; Inoue, Ken-Ichi; Uezono, Shiori; Takada, Masahiko; Naka, Yuichiro; Igarashi, Hisato; Tanuma, Masato; Waschek, James A; Ago, Yukio; Tanaka, Kenji F; Hayata-Takano, Atsuko; Nagayasu, Kazuki; Shintani, Norihito; Hashimoto, Ryota; Kunii, Yasuto; Hino, Mizuki; Matsumoto, Junya; Yabe, Hirooki; Nagai, Takeharu; Fujita, Katsumasa; Matsuda, Toshio; Takuma, Kazuhiro; Baba, Akemichi; Hashimoto, Hitoshi
2017-06-21
Subcellular resolution imaging of the whole brain and subsequent image analysis are prerequisites for understanding anatomical and functional brain networks. Here, we have developed a very high-speed serial-sectioning imaging system named FAST (block-face serial microscopy tomography), which acquires high-resolution images of a whole mouse brain in a speed range comparable to that of light-sheet fluorescence microscopy. FAST enables complete visualization of the brain at a resolution sufficient to resolve all cells and their subcellular structures. FAST renders unbiased quantitative group comparisons of normal and disease model brain cells for the whole brain at a high spatial resolution. Furthermore, FAST is highly scalable to non-human primate brains and human postmortem brain tissues, and can visualize neuronal projections in a whole adult marmoset brain. Thus, FAST provides new opportunities for global approaches that will allow for a better understanding of brain systems in multiple animal models and in human diseases. Copyright © 2017 Elsevier Inc. All rights reserved.
Wang, Jing; Xuan, Yi; Qi, Minghao; Huang, Haiyang; Li, You; Li, Ming; Chen, Xin; Sheng, Zhen; Wu, Aimin; Li, Wei; Wang, Xi; Zou, Shichang; Gan, Fuwan
2015-05-01
A broadband and fabrication-tolerant on-chip scalable mode-division multiplexing (MDM) scheme based on mode-evolution counter-tapered couplers is designed and experimentally demonstrated on a silicon-on-insulator (SOI) platform. Owing to the broadband advantage offered by mode evolution, the two-mode MDM link exhibits a very large −1 dB bandwidth of >180 nm, considerably larger than most previously reported MDM links, whether based on mode interference or mode evolution. In addition, the performance metrics remain stable for large device-width deviations from the designed value (−60 nm to +40 nm) and for temperature variations from −25°C to 75°C. This MDM scheme can be readily extended to higher-order mode multiplexing, and a three-mode MDM link is measured with less than −10 dB crosstalk over the 1.46 to 1.64 μm wavelength range.
Large-scale, Exhaustive Lattice-based Structural Auditing of SNOMED CT.
Zhang, Guo-Qiang; Bodenreider, Olivier
2010-11-13
One criterion for the well-formedness of ontologies is that their hierarchical structure forms a lattice. Formal Concept Analysis (FCA) has been used as a technique for assessing the quality of ontologies, but is not scalable to large ontologies such as SNOMED CT (> 300k concepts). We developed a methodology called Lattice-based Structural Auditing (LaSA), for auditing biomedical ontologies, implemented through automated SPARQL queries, in order to exhaustively identify all non-lattice pairs in SNOMED CT. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. Preliminary manual inspection of a limited portion of the over 544k non-lattice pairs, among over 356 million candidate pairs, revealed inconsistent use of precoordination in SNOMED CT, but also a number of false positives. Our results are consistent with those based on FCA, with the advantage that the LaSA pipeline is scalable and applicable to ontological systems consisting mostly of taxonomic links.
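The non-lattice test LaSA applies can be stated concretely: a pair of concepts violates the lattice property when its set of common ancestors has more than one most-specific element, i.e. no unique least common subsumer exists. A brute-force sketch on a toy is-a graph follows; the real pipeline does this exhaustively with SPARQL queries over 300k+ concepts, not in-memory Python, and the function names here are illustrative.

```python
from itertools import combinations

def ancestors(parents, node):
    # Transitive closure of the is-a relation for one concept (excluding
    # the concept itself); `parents` maps child -> list of direct parents.
    seen, stack = set(), list(parents.get(node, ()))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, ()))
    return seen

def non_lattice_pairs(parents):
    # A pair is "non-lattice" when its common ancestors contain more than
    # one most-specific element (no unique least common subsumer).
    nodes = set(parents) | {p for ps in parents.values() for p in ps}
    anc = {n: ancestors(parents, n) for n in nodes}
    bad = []
    for a, b in combinations(sorted(nodes), 2):
        common = anc[a] & anc[b]
        most_specific = {c for c in common
                         if not any(c in anc[d] for d in common)}
        if len(most_specific) > 1:
            bad.append((a, b))
    return bad
```

In the classic violation, two siblings share two incomparable parents, so both parents are most-specific common ancestors and the pair is flagged.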
Kim, Haegyeom; Lim, Hee-Dae; Kim, Sung-Wook; Hong, Jihyun; Seo, Dong-Hwa; Kim, Dae-chul; Jeon, Seokwoo; Park, Sungjin; Kang, Kisuk
2013-01-01
High-performance and cost-effective rechargeable batteries are key to the success of electric vehicles and large-scale energy storage systems. Extensive research has focused on the development of (i) new high-energy electrodes that can store more lithium or (ii) high-power nano-structured electrodes hybridized with carbonaceous materials. However, the current status of lithium batteries based on redox reactions of heavy transition metals still remains far below the demands required for the proposed applications. Herein, we present a novel approach using tunable functional groups on graphene nano-platelets as redox centers. The electrode can deliver high capacity of ~250 mAh/g, power of ~20 kW/kg in an acceptable cathode voltage range, and provide excellent cyclability up to thousands of repeated charge/discharge cycles. The simple, mass-scalable synthetic route for the functionalized graphene nano-platelets proposed in this work suggests that the graphene cathode can be a promising new class of electrode. PMID:23514953
A Simple, Scalable, Script-based Science Processor
NASA Technical Reports Server (NTRS)
Lynnes, Christopher
2004-01-01
The production of Earth Science data from orbiting spacecraft is an activity that takes place 24 hours a day, 7 days a week. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), this results in as many as 16,000 program executions each day, far too many to be run by human operators. In fact, when the Moderate Resolution Imaging Spectroradiometer (MODIS) was launched aboard the Terra spacecraft in 1999, the automated commercial system for running science processing was able to manage no more than 4,000 executions per day. Consequently, the GES DAAC developed a lightweight system based on the popular Perl scripting language, named the Simple, Scalable, Script-based Science Processor (S4P). S4P automates science processing, allowing operators to focus on the rare problems arising from anomalies in data or algorithms. S4P has been reused in several systems, ranging from routine processing of MODIS data to data mining, and is publicly available from NASA.
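S4P itself is written in Perl; as an illustration of the station model it automates, here is a toy Python station that consumes work-order files from a directory, runs a processing callback, and forwards the output downstream. The `DO.*` file prefix loosely echoes S4P's work-order naming, but `run_station` and the callback signature are hypothetical.

```python
import shutil
from pathlib import Path

def run_station(station_dir, process, downstream_dir):
    # Minimal sketch of a script-based processing "station": each file
    # dropped into the station directory is a work order; process()
    # produces an output file, which is forwarded to the downstream
    # station, and the consumed work order is deleted.
    station, downstream = Path(station_dir), Path(downstream_dir)
    downstream.mkdir(parents=True, exist_ok=True)
    handled = []
    for order in sorted(station.glob("DO.*")):
        output = process(order)
        shutil.move(str(output), downstream / output.name)
        order.unlink()  # work order consumed
        handled.append(order.name)
    return handled
```

Chaining such stations with directories as queues is what lets a small scripted system run thousands of executions a day with no central scheduler.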
Myosin concentration underlies cell size–dependent scalability of actomyosin ring constriction
Wright, Graham D.; Leong, Fong Yew; Chiam, Keng-Hwee; Chen, Yinxiao; Jedd, Gregory; Balasubramanian, Mohan K.
2011-01-01
In eukaryotes, cytokinesis is accomplished by an actomyosin-based contractile ring. Although in Caenorhabditis elegans embryos larger cells divide at a faster rate than smaller cells, it remains unknown whether a similar mode of scalability operates in other cells. We investigated cytokinesis in the filamentous fungus Neurospora crassa, which exhibits a wide range of hyphal circumferences. We found that N. crassa cells divide using an actomyosin ring and larger rings constricted faster than smaller rings. However, unlike in C. elegans, the total amount of myosin remained constant throughout constriction, and there was a size-dependent increase in the starting concentration of myosin in the ring. We predict that the increased number of ring-associated myosin motors in larger rings leads to the increased constriction rate. Accordingly, reduction or inhibition of ring-associated myosin slows down the rate of constriction. Because the mechanical characteristics of contractile rings are conserved, we predict that these findings will be relevant to actomyosin ring constriction in other cell types. PMID:22123864
Scalable, efficient ASICS for the square kilometre array: From A/D conversion to central correlation
NASA Astrophysics Data System (ADS)
Schmatz, M. L.; Jongerius, R.; Dittmann, G.; Anghel, A.; Engbersen, T.; van Lunteren, J.; Buchmann, P.
2014-05-01
The Square Kilometre Array (SKA) is a future radio telescope, currently being designed by the worldwide radio-astronomy community. During the first of two construction phases, more than 250,000 antennas will be deployed, clustered in aperture-array stations. The antennas will generate 2.5 Pb/s of data, which needs to be processed in real time. For the processing stages from A/D conversion to central correlation, we propose an ASIC solution using only three chip architectures. The architecture is scalable (additional chips support additional antennas or beams) and versatile (its receiver band can be relocated anywhere within a range of a few MHz up to 4 GHz). This flexibility makes it applicable to both SKA phases 1 and 2. The proposed chips implement an antenna and station processor for 289 antennas with a power consumption on the order of 600 W, and a correlator, including corner turn, for 911 stations on the order of 90 kW.
Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.
Frommholz, Ingo; Roelleke, Thomas
2016-01-01
Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a conceptually elegant way to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules and the semantics associated with evaluating the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague matches of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed; therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view of the methods the HySpirit system uses to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical of data (information) management and analysis.
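To make the PDatalog setting concrete, the sketch below evaluates a single conjunctive rule over probabilistic facts, multiplying body probabilities under an independence assumption. This is only the naive semantics; HySpirit layers probability estimation, vague predicates, indexes, and distributed evaluation on top of it. The relation names are invented for illustration.

```python
def evaluate_rule(head, body, facts):
    # One Probabilistic Datalog derivation step for a rule of the form
    #   head(X) :- body[0](X), body[1](X), ...
    # `facts` maps predicate name -> {constant: probability}. Body facts
    # are assumed independent, so the derived fact's probability is the
    # product of the body probabilities (the naive semantics only).
    derived = {}
    for x, p in facts.get(body[0], {}).items():
        prob = p
        for pred in body[1:]:
            prob *= facts.get(pred, {}).get(x, 0.0)
        if prob > 0.0:
            derived[x] = prob
    return {head: derived}
```

For example, with `relevant(X) :- about(X), cheap(X)`, a document that is `about` with 0.8 and `cheap` with 0.9 is derived as `relevant` with probability 0.72 under independence.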
Optical nano-woodpiles: large-area metallic photonic crystals and metamaterials
Ibbotson, Lindsey A.; Demetriadou, Angela; Croxall, Stephen; Hess, Ortwin; Baumberg, Jeremy J.
2015-01-01
Metallic woodpile photonic crystals and metamaterials operating across the visible spectrum are extremely difficult to construct over large areas, because of the intricate three-dimensional nanostructures and sub-50 nm features demanded. Previous routes use electron-beam lithography or direct laser writing but widespread application is restricted by their expense and low throughput. Scalable approaches including soft lithography, colloidal self-assembly, and interference holography, produce structures limited in feature size, material durability, or geometry. By multiply stacking gold nanowire flexible gratings, we demonstrate a scalable high-fidelity approach for fabricating flexible metallic woodpile photonic crystals, with features down to 10 nm produced in bulk and at low cost. Control of stacking sequence, asymmetry, and orientation elicits great control, with visible-wavelength band-gap reflections exceeding 60%, and with strong induced chirality. Such flexible and stretchable architectures can produce metamaterials with refractive index near zero, and are easily tuned across the IR and visible ranges. PMID:25660667
Tierney, Brian D.; Choi, Sukwon; DasGupta, Sandeepan; ...
2017-08-16
A distributed impedance “field cage” structure is proposed and evaluated for electric field control in GaN-based, lateral high electron mobility transistors (HEMTs) operating as kilovolt-range power devices. In this structure, a resistive voltage divider is used to control the electric field throughout the active region. The structure complements earlier proposals utilizing floating field plates that did not employ resistively connected elements. Transient results, not previously reported for field plate schemes using either floating or resistively connected field plates, are presented for ramps of dVds/dt = 100 V/ns. For both DC and transient results, the voltage between the gate and drain is laterally distributed, ensuring the electric field profile between the gate and drain remains below the critical breakdown field as the source-to-drain voltage is increased. Our scheme indicates promise for achieving breakdown voltage scalability to a few kV.
OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu
The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
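The modular design described above boils down to a Monte Carlo driver that treats the energy evaluator as a black box. Below is a minimal sketch under that assumption, with a toy 1-D Ising chain standing in for the pluggable back-end; the function names are ours, not the OWL API, and real runs would use Wang-Landau or parallel-tempering style algorithms rather than plain Metropolis.

```python
import math
import random

def metropolis(spins, energy, beta, steps, seed=0):
    # Single-spin-flip Metropolis sampling. The driver only ever calls
    # energy(spins), so the same loop serves a classical spin model or
    # an expensive ab initio evaluator, mirroring OWL's interface idea.
    rng = random.Random(seed)
    e = energy(spins)
    for _ in range(steps):
        i = rng.randrange(len(spins))
        spins[i] = -spins[i]  # propose a flip
        e_new = energy(spins)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new          # accept
        else:
            spins[i] = -spins[i]  # reject: undo the flip
    return spins, e

def ising_energy(spins):
    # Nearest-neighbour 1-D Ising chain, periodic boundaries, J = 1.
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
```

Swapping `ising_energy` for a call into a first-principles code changes nothing in the driver, which is exactly the separation that enables multi-level parallelism over energy evaluations.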
Wang, Min; Ma, Pengsha; Yin, Min; Lu, Linfeng; Lin, Yinyue; Chen, Xiaoyuan; Jia, Wei; Cao, Xinmin; Chang, Paichun; Li, Dongdong
2017-09-01
Antireflection (AR) at the interface between the air and incident window material is paramount to boost the performance of photovoltaic devices. 3D nanostructures have attracted tremendous interest to reduce reflection, while the structure is vulnerable to the harsh outdoor environment. Thus the AR film with improved mechanical property is desirable in an industrial application. Herein, a scalable production of flexible AR films is proposed with microsized structures by roll-to-roll imprinting process, which possesses hydrophobic property and much improved robustness. The AR films can be potentially used for a wide range of photovoltaic devices whether based on rigid or flexible substrates. As a demonstration, the AR films are integrated with commercial Si-based triple-junction thin film solar cells. The AR film works as an effective tool to control the light travel path and utilize the light inward more efficiently by exciting hybrid optical modes, which results in a broadband and omnidirectional enhanced performance.
Processing Approaches for DAS-Enabled Continuous Seismic Monitoring
NASA Astrophysics Data System (ADS)
Dou, S.; Wood, T.; Freifeld, B. M.; Robertson, M.; McDonald, S.; Pevzner, R.; Lindsey, N.; Gelvin, A.; Saari, S.; Morales, A.; Ekblaw, I.; Wagner, A. M.; Ulrich, C.; Daley, T. M.; Ajo Franklin, J. B.
2017-12-01
Distributed Acoustic Sensing (DAS) is creating a "field as laboratory" capability for seismic monitoring of subsurface changes. By providing unprecedented spatial and temporal sampling at a relatively low cost, DAS enables field-scale seismic monitoring to have durations and temporal resolutions that are comparable to those of laboratory experiments. Here we report on seismic processing approaches developed during data analyses of three case studies, all using DAS-enabled seismic monitoring, with applications ranging from shallow permafrost to deep reservoirs: (1) 10-hour downhole monitoring of cement curing at Otway, Australia; (2) 2-month surface monitoring of controlled permafrost thaw at Fairbanks, Alaska; (3) multi-month downhole and surface monitoring of carbon sequestration at Decatur, Illinois. We emphasize the data management and processing components relevant to DAS-based seismic monitoring, which include scalable approaches to data management, pre-processing, denoising, filtering, and wavefield decomposition. DAS has dramatically increased the data volume to the extent that terabyte-per-day data loads are now typical, straining conventional approaches to data storage and processing. To achieve more efficient use of disk space and network bandwidth, we explore improved file structures and data compression schemes. Because the noise floor of DAS measurements is higher than that of conventional sensors, optimal processing workflows involving advanced denoising, deconvolution (of the source signatures), and stacking are being established to maximize the signal content of DAS data. The resulting data management and processing workflow could accelerate the broader adoption of DAS for continuous monitoring of critical processes.
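As one concrete element of such a workflow, stacking repeated records is the simplest denoising step: averaging N aligned traces suppresses incoherent noise by roughly a factor of sqrt(N) while preserving the coherent signal. A minimal sketch follows (plain Python lists for illustration; at the terabyte-per-day volumes described above, production code would use array libraries and chunked I/O):

```python
def stack_traces(records):
    # Equal-weight stack of repeated, time-aligned DAS records: the
    # sample-wise mean across N records. Coherent signal is preserved
    # while incoherent noise is reduced by about sqrt(N).
    n = len(records)
    length = len(records[0])
    return [sum(rec[i] for rec in records) / n for i in range(length)]
```

Weighted or median stacks follow the same pattern and trade a little signal fidelity for robustness against outlier traces.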
Ultrafast chirped optical waveform recorder using a time microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, Corey Vincent
2015-04-21
A new technique for capturing both the amplitude and phase of an optical waveform is presented. It can capture signals with many THz of bandwidth in a single shot (temporal resolution of about 44 fs) or be operated repetitively at a high rate: each temporal window (or frame) is captured single-shot, in real time, and the process may be run once or repeatedly. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Scalable Loading of a Two-Dimensional Trapped-Ion Array
2015-11-25
…ion-trap array based on two crossed photo-ionization laser beams. With the use of a continuous flux of pre-cooled neutral… [apparatus schematic: push laser, atomic beam, differential pumping tube, 2D-MOT, 50 K shield, 4 K shield and 4 K stage, trap chip, MOT laser, ion pump; 5s2 1S0, 461 nm] …We conducted a series of Ramsey experiments on a single trapped ion in the presence and absence of neutral-atom flux as well as each of the PI lasers.
Implementing quantum Ricci curvature
NASA Astrophysics Data System (ADS)
Klitgaard, N.; Loll, R.
2018-05-01
Quantum Ricci curvature has been introduced recently as a new, geometric observable characterizing the curvature properties of metric spaces, without the need for a smooth structure. Besides coordinate invariance, its key features are scalability, computability, and robustness. We demonstrate that these properties continue to hold in the context of nonperturbative quantum gravity, by evaluating the quantum Ricci curvature numerically in two-dimensional Euclidean quantum gravity, defined in terms of dynamical triangulations. Despite the well-known, highly nonclassical properties of the underlying quantum geometry, its Ricci curvature can be matched well to that of a five-dimensional round sphere.
2017-06-13
…with homogeneous nonagglomerated nanoparticles, smudge- and stain-resistant coatings, antibody bonding to phosphor particles, and more. A series… ZrCl4 + 2 BBr3 + 11.0 Na (in benzene) → ZrB2 + 4 NaCl + 6 NaBr (1) In a typical experiment, the reactor is charged with 5 grams of anhydrous ZrCl4 (21.5 mmol), 0.471 grams of boron (43.5 mmol), 2.35 grams of sodium metal (102.3 mmol) and 100 ml of anhydrous benzene in a controlled atmosphere
2012-01-01
James F. Kelly, Francis X. Giraldo; Department of Applied Mathematics, Naval Postgraduate School, Monterey, CA, United States
Polak, Rani; Pober, David M; Budd, Maggi A; Silver, Julie K; Phillips, Edward M; Abrahamson, Martin J
2017-08-01
This case series describes and examines the outcomes of a remote culinary coaching program aimed at improving nutrition through home cooking. Participants (n = 4) improved attitudes about the perceived ease of home cooking (p < 0.01) and self-efficacy to perform various culinary skills (p = 0.02), and also improved in confidence to continue online learning of culinary skills and to consume healthier food. We believe this program might be a viable response to the need for effective and scalable health-related culinary interventions.
A multidimensional finite element method for CFD
NASA Technical Reports Server (NTRS)
Pepper, Darrell W.; Humphrey, Joseph W.
1991-01-01
A finite element method is used to solve the equations of motion for 2- and 3-D fluid flow. The time-dependent equations are solved explicitly using quadrilateral (2-D) and hexahedral (3-D) elements, mass lumping, and reduced integration. A Petrov-Galerkin technique is applied to the advection terms. The method requires a minimum of computational storage, executes quickly, and is scalable for execution on computer systems ranging from PCs to supercomputers.
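The mass lumping and upwind-biased Petrov-Galerkin weighting described above can be illustrated in one dimension. For linear advection on a uniform periodic mesh, a lumped mass matrix combined with fully upwinded test functions reduces the element equations to the classic first-order upwind update. This is a minimal 1-D sketch under those assumptions, not the paper's 2-D/3-D hexahedral code:

```python
# 1-D linear advection u_t + a*u_x = 0 on a periodic mesh.
# With mass lumping and full upwind Petrov-Galerkin weighting, the
# explicit update reduces to first-order upwind differencing.

def advect_upwind(u, a, dx, dt, steps):
    c = a * dt / dx          # Courant number; stable for 0 <= c <= 1
    for _ in range(steps):
        # new value at node i uses the old values at i and its upwind neighbor
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]  # periodic via u[-1]
    return u

u0 = [0.0] * 10
u0[2] = 1.0                  # unit pulse at node 2
u1 = advect_upwind(u0, a=1.0, dx=1.0, dt=1.0, steps=3)
print(u1)                    # at c = 1 the scheme shifts the pulse exactly: now at node 5
```

At Courant number 1 the update is exact; for c < 1 the scheme is stable but numerically diffusive, which is the usual trade-off of upwind-weighted Petrov-Galerkin advection.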
Manyscale Computing for Sensor Processing in Support of Space Situational Awareness
NASA Astrophysics Data System (ADS)
Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.
2014-09-01
Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). 
Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
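The mapping of work atoms to computational hardware described above can be sketched with a simple greedy scheduler. The longest-processing-time heuristic below is an assumed stand-in for the paper's primitive mapping operations, which are not specified here; the atom names and costs are invented for illustration:

```python
import heapq

def map_work_atoms(costs, n_units):
    """Greedy longest-processing-time mapping: assign work atoms
    (by descending cost) to the currently least-loaded processing unit.

    costs: dict atom_name -> estimated cost
    n_units: number of processing elements (e.g. threads or thread blocks)
    Returns dict atom_name -> unit index."""
    heap = [(0.0, u) for u in range(n_units)]   # (current load, unit id)
    heapq.heapify(heap)
    assignment = {}
    for atom, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, unit = heapq.heappop(heap)        # least-loaded unit
        assignment[atom] = unit
        heapq.heappush(heap, (load + cost, unit))
    return assignment

costs = {"fft": 8.0, "filter": 5.0, "threshold": 4.0, "io": 3.0}
print(map_work_atoms(costs, n_units=2))  # fft+io on one unit, filter+threshold on the other
```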
Energy-absorption capability and scalability of square cross section composite tube specimens
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
Static crushing tests were conducted on graphite/epoxy and Kevlar/epoxy square cross section tubes to study the influence of specimen geometry on the energy-absorption capability and scalability of composite materials. The tube inside width-to-wall thickness (W/t) ratio was determined to significantly affect the energy-absorption capability of composite materials. As the W/t ratio decreases, the energy-absorption capability increases nonlinearly. The energy-absorption capability of Kevlar/epoxy tubes was found to be geometrically scalable, but that of graphite/epoxy tubes was not.
Towards Scalable Strain Gauge-Based Joint Torque Sensors
D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred
2017-01-01
During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) confirmed by observed data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446
SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.
Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani
2017-04-01
Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient to sustain daily use by turning off the UWB transceiver, when a user's wrist is stationary.
A probabilistic approach to randomness in geometric configuration of scalable origami structures
NASA Astrophysics Data System (ADS)
Liu, Ke; Paulino, Glaucio; Gardoni, Paolo
2015-03-01
Origami, an ancient paper folding art, has inspired many solutions to modern engineering challenges. The demand for actual engineering applications motivates further investigation in this field. Although rooted in the historic art form, many applications of origami are based on newly designed origami patterns that match the specific requirements of an engineering problem. The application of origami to structural design problems ranges from the micro-structure of materials to large-scale deployable shells. For instance, some origami-inspired designs have unique properties such as a negative Poisson's ratio and flat foldability. However, origami structures are typically constrained by strict mathematical geometric relationships, which in reality can easily be violated due to, for example, random imperfections introduced during manufacturing, or non-uniform deformations under working conditions (e.g. due to non-uniform thermal effects). Therefore, the effects of uncertainties in origami-like structures need to be studied in further detail in order to provide a practical guide for scalable origami-inspired engineering designs. Through reliability and probabilistic analysis, we investigate the effect of randomness in origami structures on their mechanical properties. Dislocations of the vertices of an origami structure have different impacts on different mechanical properties, and different origami designs can have different sensitivities to imperfections. We thus aim to provide a preliminary understanding of the structural behavior of some common scalable origami structures subject to randomness in their geometric configurations, in order to help transition the technology toward practical applications of origami engineering.
Trinity Phase 2 Open Science: CTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggirello, Kevin Patrick; Vogler, Tracy
CTH is an Eulerian hydrocode developed by Sandia National Laboratories (SNL) to solve a wide range of shock wave propagation and material deformation problems. Adaptive mesh refinement is also used to improve efficiency for problems with a wide range of spatial scales. The code has a history of running on a variety of computing platforms ranging from desktops to massively parallel distributed-data systems. For the Trinity Phase 2 Open Science campaign, CTH was used to study mesoscale simulations of the hypervelocity penetration of granular SiC powders. The simulations were compared to experimental data. A scaling study of CTH up to 8192 KNL nodes was also performed, and several improvements were made to the code to improve the scalability.
Disparity: scalable anomaly detection for clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, N.; Bradshaw, R.; Lusk, E.
2008-01-01
In this paper, we describe disparity, a tool that does parallel, scalable anomaly detection for clusters. Disparity uses basic statistical methods and scalable reduction operations to perform data reduction on client nodes and uses these results to locate node anomalies. We discuss the implementation of disparity and present results of its use on a SiCortex SC5832 system.
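The reduce-then-flag idea behind disparity can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes each node reports a vector of metrics, a reduction computes per-metric mean and standard deviation across nodes, and nodes deviating by more than k standard deviations are flagged as anomalies.

```python
# Sketch of statistical anomaly detection across cluster nodes
# (assumption: simple z-score thresholding; disparity's actual
# statistics and reduction operations may differ).

def find_anomalies(node_metrics, k=2.0):
    """node_metrics: dict node -> list of metric values (same length per node).
    Returns the set of nodes anomalous in at least one metric."""
    nodes = list(node_metrics)
    n_metrics = len(next(iter(node_metrics.values())))
    anomalous = set()
    for m in range(n_metrics):
        vals = [node_metrics[n][m] for n in nodes]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        if std == 0:
            continue                      # identical values: nothing to flag
        for n in nodes:
            if abs(node_metrics[n][m] - mean) > k * std:
                anomalous.add(n)
    return anomalous

metrics = {f"node{i}": [1.0, 50.0] for i in range(31)}
metrics["node31"] = [9.0, 50.0]           # load outlier in the first metric
print(find_anomalies(metrics))            # flags node31
```

In a real deployment the per-metric sums would be computed with a parallel reduction (e.g. MPI_Allreduce) rather than on one node, which is what makes the approach scalable.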
Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph
NASA Astrophysics Data System (ADS)
Feng, Lv; Chunlin, Gao; Kaiyang, Ma
2017-05-01
With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role on the Internet. The number of P2P network users continues to increase, and the highly dynamic character of the system makes it difficult for any node to learn the load of other nodes. Therefore, a dynamic load balance strategy based on hypergraphs is proposed in this article. The scheme builds on the idea of multilevel hypergraph partitioning: it adopts an optimized multilevel partitioning algorithm to divide the P2P network into several small areas and assigns each area a supernode that manages the nodes in its area and their load transfers. When global scheduling is difficult to achieve, load balancing within a number of small areas can be ensured first; through node load balance in each small area, the whole network achieves relative load balance. Experiments indicate that the load distribution of network nodes under our scheme is clearly more compact. The scheme effectively mitigates load imbalance in P2P networks and improves the scalability and bandwidth utilization of the system.
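The per-area balancing role of a supernode can be sketched as follows. This is an assumed simplification (numeric loads, pairwise transfers that halve the largest gap), not the article's algorithm:

```python
# Sketch of supernode-managed load balancing within one partitioned area
# (assumption: the supernode knows all loads in its area and repeatedly
# moves load from the most- to the least-loaded node until the spread
# falls below a tolerance).

def balance_area(loads, tol=1.0, max_steps=100):
    """loads: dict node -> load, mutated in place.
    Returns the list of (src, dst, amount) transfers performed."""
    transfers = []
    for _ in range(max_steps):
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        diff = loads[hi] - loads[lo]
        if diff <= tol:
            break                         # area is balanced enough
        amount = diff / 2                 # halve the largest gap
        loads[hi] -= amount
        loads[lo] += amount
        transfers.append((hi, lo, amount))
    return transfers

area = {"n1": 10.0, "n2": 2.0, "n3": 6.0}
print(balance_area(area), area)           # one transfer equalizes this area
```

Balancing each area independently, as here, is what lets the scheme avoid global scheduling: only the supernodes need a wider view.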
NASA Astrophysics Data System (ADS)
Schaaf, Kjeld; Overeem, Ruud
2004-06-01
Moore’s law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore’s law predicts. Next to the cost benefits of Commercial-Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster-based processing. Typical Beowulf clusters of PCs are well known as supercomputers. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an InfiniBand host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer market prices.
Joseph, Kenneth; Carley, Kathleen M.
2017-01-01
The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS’ unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and “lone wolf” attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS-supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS’ sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contributes to the group’s propaganda dissemination through retweets. PMID:29194446
2-micron lasing in Tm:Lu2O3 ceramic: initial operation
NASA Astrophysics Data System (ADS)
Vetrovec, John; Filgas, David M.; Smith, Carey A.; Copeland, Drew A.; Litt, Amardeep S.; Briscoe, Eldridge; Schirmer, Ernestina
2018-03-01
We report on initial lasing of a Tm:Lu2O3 ceramic laser with tunable output in the vicinity of 2 μm. Tm:Lu2O3 ceramic gain materials offer a much lower saturation fluence than the traditionally used Tm:YLF and Tm:YAG materials. The gain element is pumped by 796 nm diodes via a "2-for-1" cross-relaxation energy transfer mechanism, which enables high efficiency. The high thermal conductivity of the Lu2O3 host (18% higher than YAG) in combination with a low quantum defect of 20% supports operation at high average power. Konoshima's ceramic fabrication process overcomes the scalability limits of single crystal sesquioxides. Tm:Lu2O3 offers wide-bandwidth amplification of ultrashort pulses in a chirped-pulse amplification (CPA) system. A laser oscillator was continuously tuned over a 230 nm range from 1890 to 2120 nm while delivering up to 43 W QCW output with up to 37% efficiency. This device is intended for initial testing and later seeding of a multi-pass edge-pumped disk amplifier now being developed by Aqwest, which uses composite Tm:Lu2O3 disk gain elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia
2014-06-10
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
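The sum-of-Lorentzians property mentioned above follows from the standard CARMA spectral density, S(f) = sigma^2 |beta(2 pi i f)|^2 / |alpha(2 pi i f)|^2, where alpha and beta are the autoregressive and moving-average polynomials. A minimal sketch of that formula (the coefficient values are illustrative, not from the paper):

```python
import math

def poly_eval(coeffs, z):
    """Evaluate sum_k coeffs[k] * z**k, with coeffs[0] the constant term."""
    return sum(c * z ** k for k, c in enumerate(coeffs))

def carma_psd(freq, ar_coeffs, ma_coeffs, sigma=1.0):
    """Power spectral density of a CARMA process at frequency freq:
    S(f) = sigma^2 * |beta(2*pi*i*f)|^2 / |alpha(2*pi*i*f)|^2."""
    z = 2j * math.pi * freq
    num = abs(poly_eval(ma_coeffs, z)) ** 2
    den = abs(poly_eval(ar_coeffs, z)) ** 2
    return sigma ** 2 * num / den

# CARMA(1,0) is the Ornstein-Uhlenbeck process: a single Lorentzian
# S(f) = sigma^2 / (a0^2 + (2*pi*f)^2) with break frequency a0 / (2*pi).
a0 = 1.0
low = carma_psd(0.0, [a0, 1.0], [1.0])
high = carma_psd(10.0, [a0, 1.0], [1.0])
print(low, high)   # flat below the break, falling as 1/f^2 above it
```

Higher-order CARMA(p,q) models factor the denominator over the p autoregressive roots, which is exactly what produces the sum of Lorentzians.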
Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project.
Buck, Harleah G; Kolanowski, Ann; Fick, Donna; Baronner, Lawrence
2016-07-01
HOW TO OBTAIN CONTACT HOURS BY READING THIS ISSUE Instructions: 1.2 contact hours will be awarded by Villanova University College of Nursing upon successful completion of this activity. A contact hour is a unit of measurement that denotes 60 minutes of an organized learning activity. This is a learner-based activity. Villanova University College of Nursing does not require submission of your answers to the quiz. A contact hour certificate will be awarded after you register, pay the registration fee, and complete the evaluation form online at http://goo.gl/gMfXaf. In order to obtain contact hours you must: 1. Read the article, "Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project," found on pages 306-313, carefully noting any tables and other illustrative materials that are included to enhance your knowledge and understanding of the content. Be sure to keep track of the amount of time (number of minutes) you spend reading the article and completing the quiz. 2. Read and answer each question on the quiz. After completing all of the questions, compare your answers to those provided within this issue. If you have incorrect answers, return to the article for further study. 3. Go to the Villanova website to register for contact hour credit. You will be asked to provide your name, contact information, and a VISA, MasterCard, or Discover card number for payment of the $20.00 fee. Once you complete the online evaluation, a certificate will be automatically generated. This activity is valid for continuing education credit until June 30, 2019. CONTACT HOURS This activity is co-provided by Villanova University College of Nursing and SLACK Incorporated. Villanova University College of Nursing is accredited as a provider of continuing nursing education by the American Nurses Credentialing Center's Commission on Accreditation. OBJECTIVES Describe the unique nursing challenges that occur in caring for older adults in rural areas. 
Discuss the Improving Rural Geriatric Care through Education (iRuGCE) project, including the facilitators and challenges to its implementation. DISCLOSURE STATEMENT Neither the planners nor the author have any conflicts of interest to disclose. Rural elders are the fastest growing segment of the U.S. population, with a projected increase of 32% in the next 20 years. Shortages in geriatric-prepared workers are particularly critical in rural areas. This article describes Improving Rural Geriatric Care through Education (iRuGCE), a feasible, scalable, and collaborative continuing education project. iRuGCE was designed to improve geriatric nursing practice. Project goals were to identify, mentor, and facilitate an RN geriatric site champion in critical access hospitals (CAHs) to complete national certification in gerontological nursing, and to design a continuing education program that met the specific needs of the CAHs via delivery of three continuing education sessions per year. Evaluation of the project is promising. Preliminary results suggest that iRuGCE has a positive effect on nurse-sensitive patient satisfaction scores, such as communication with nurses, responsiveness of hospital staff, pain management, communication about medicine, discharge information, and willingness to recommend the hospital. J Contin Educ Nurs. 2016;47(7):306-313. Copyright 2016, SLACK Incorporated.
Electron beam throughput from raster to imaging
NASA Astrophysics Data System (ADS)
Zywno, Marek
2016-12-01
Two architectures of electron beam tools are presented: single beam MEBES Exara designed and built by Etec Systems for mask writing, and the Reflected E-Beam Lithography tool (REBL), designed and built by KLA-Tencor under a DARPA Agreement No. HR0011-07-9-0007. Both tools have implemented technologies not used before to achieve their goals. The MEBES X, renamed Exara for marketing purposes, used an air bearing stage running in vacuum to achieve smooth continuous scanning. The REBL used 2 dimensional imaging to distribute charge to a 4k pixel swath to achieve writing times on the order of 1 wafer per hour, scalable to throughput approaching optical projection tools. Three stage architectures were designed for continuous scanning of wafers: linear maglev, rotary maglev, and dual linear maglev.
Li, Hui; Sheeran, Jillian W; Clausen, Andrew M; Fang, Yuan-Qing; Bio, Matthew M; Bader, Scott
2017-08-01
The development of a flow chemistry process for asymmetric propargylation using allene gas as a reagent is reported. The connected continuous process of allene dissolution, lithiation, Li-Zn transmetallation, and asymmetric propargylation provides homopropargyl β-amino alcohol 1 with high regio- and diastereoselectivity in high yield. This flow process enables practical use of an unstable allenyllithium intermediate. The process uses the commercially available and recyclable (1S,2R)-N-pyrrolidinyl norephedrine as a ligand to promote the highly diastereoselective (32:1) propargylation. Judicious selection of mixers based on the chemistry requirement and real-time monitoring of the process using process analytical technology (PAT) enabled stable and scalable flow chemistry runs. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
O'Connor, Marianne; Morgan, Katy E; Bailey-Straebler, Suzanne; Fairburn, Christopher G; Cooper, Zafra
2018-06-08
One of the major barriers to the dissemination and implementation of psychological treatments is the scarcity of suitably trained therapists. A highly scalable form of Web-centered therapist training, undertaken without external support, has recently been shown to have promise in promoting therapist competence. The aim of this study was to conduct an evaluation of the acceptability and effectiveness of a scalable independent form of Web-centered training in a multinational sample of therapists and investigate the characteristics of those most likely to benefit. A cohort of eligible therapists was recruited internationally and offered access to Web-centered training in enhanced cognitive behavioral therapy, a multicomponent, evidence-based, psychological treatment for any form of eating disorder. No external support was provided during training. Therapist competence was assessed using a validated competence measure before training and after 20 weeks. A total of 806 therapists from 33 different countries expressed interest in the study, and 765 (94.9%) completed a pretraining assessment. The median number of training modules completed was 15 out of a possible 18 (interquartile range, IQR: 4-18), and 87.9% (531/604) reported that they treated at least one patient during training as recommended. Median pretraining competence score was 7 (IQR: 5-10, range: 0-19; N=765), and following training, it was 12 (IQR: 9-15, range: 0-20; N=577). The expected change in competence scores from pretraining to posttraining was 3.5 (95% CI 3.1-3.8; P<.001). After training, 52% (300/574) of therapists with complete competence data met or exceeded the competence threshold, and 45% (95% CI 41-50) of those who had not met this threshold before training did so after training. Compliance with training predicted both an increase in competence scores and meeting or exceeding the competence threshold. 
Expected change in competence score increased for each extra training module completed (0.19, 95% CI 0.13-0.25), and those who treated a suitable patient during training had an expected change in competence score 1.2 (95% CI 0.4-2.1) points higher than those who did not. Similarly, there was an association between meeting the competence threshold after training and the number of modules completed (odds ratio, OR=1.11, 95% CI 1.07-1.15), and treating at least one patient during training was associated with competence after training (OR=2.2, 95% CI 1.2-4.1). Independent Web-centered training can successfully train large numbers of therapists dispersed across a wide geographical area. This finding is of importance because the availability of a highly scalable method of training potentially increases the number of people who might receive effective psychological treatments. ©Marianne O'Connor, Katy E Morgan, Suzanne Bailey-Straebler, Christopher G Fairburn, Zafra Cooper. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 08.06.2018.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity at reduced computation cost, a significant attribute of a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
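One plausible reading of the hottest-request-first policy is sketched below: requests are ranked by demand intensity times end-to-end hop distance and served in that order, with shortest-path routing and first-fit wavelength assignment under the wavelength-continuity constraint. This is an assumed reconstruction for illustration, not the authors' implementation:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path by hop count on an undirected graph
    (assumes src and dst are connected)."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def rwa(adj, requests, n_wavelengths):
    """requests: list of (src, dst, intensity).
    Returns {request: (path, wavelength)}; blocked requests are omitted."""
    # Hottest first: rank by intensity x end-to-end hop distance.
    ranked = sorted(requests,
                    key=lambda r: -r[2] * (len(shortest_path(adj, r[0], r[1])) - 1))
    used = {}                                   # (edge, wavelength) -> taken
    assignment = {}
    for src, dst, inten in ranked:
        path = shortest_path(adj, src, dst)
        edges = [frozenset(e) for e in zip(path, path[1:])]
        for wl in range(n_wavelengths):         # first-fit, same wl on all hops
            if all((e, wl) not in used for e in edges):
                for e in edges:
                    used[(e, wl)] = True
                assignment[(src, dst, inten)] = (path, wl)
                break
    return assignment

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
reqs = [(0, 2, 5), (1, 3, 1), (0, 1, 2)]
print(rwa(ring, reqs, n_wavelengths=2))  # the last request is blocked: both
                                         # wavelengths on edge {0,1} are taken
```

The example also shows why processing order matters: serving the hottest requests first reserves scarce wavelengths for the demands that contribute most to network throughput.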
Antireflective surface structures in glass by self-assembly of SiO2 nanoparticles and wet etching.
Maier, Thomas; Bach, David; Müllner, Paul; Hainberger, Rainer; Brückl, Hubert
2013-08-26
We describe the fabrication of an antireflective surface structure with sub-wavelength dimensions on a glass surface using scalable low-cost techniques involving sol-gel coating, thermal annealing, and wet chemical etching. The glass surface structure consists of sand-dune-like protrusions with 250 nm periodicity and a maximum peak-to-valley height of 120 nm. The antireflective structure increases the transmission of the glass by up to 0.9% at 700 nm, and the transmission remains enhanced over a wide spectral range and for a wide range of incident angles. Our measurements reveal a strong polarization dependence of the transmission change.
The control of Pt and Ru nanoparticle size on high surface area supports.
Liu, Qiuli; Joshi, Upendra A; Über, Kevin; Regalbuto, John R
2014-12-28
Supported Ru and Pt nanoparticles are synthesized by the method of strong electrostatic adsorption and subsequently treated under different steaming-reduction conditions to achieve a series of catalysts with controlled particle sizes, ranging from 1 to 8 nm. Under oxidation-reduction conditions, in contrast, only Pt yielded particles ranging from 2.5 to 8 nm in size, and a loss of Ru was observed. Both Ru and Pt sinter faster in air than in hydrogen. This methodology allows the control of particle size using a "production-scalable" catalyst synthesis method which can be applied to high surface area supports with common metal precursors.
Manufacturing Methods for Liposome Adjuvants.
Perrie, Yvonne; Kastner, Elisabeth; Khadke, Swapnil; Roces, Carla B; Stone, Peter
2017-01-01
A wide range of studies have shown that liposomes can act as suitable adjuvants for a range of vaccine antigens. Properties such as their amphiphilic character and biphasic nature allow them to incorporate antigens within the lipid bilayer, on the surface, or encapsulated within the inner core. However, appropriate methods for the manufacture of liposomes are limited, and this has resulted in issues with cost, supply, and wider-scale application of these systems. Within this chapter we explore manufacturing processes that can be used for the production of liposomal adjuvants, and we outline new manufacturing methods that can offer fast, scalable, and cost-effective production of liposomal adjuvants.
Architecture Knowledge for Evaluating Scalable Databases
2015-01-16
Architects face new problems arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. No longer are relational databases the de facto standard for building data repositories; highly distributed, scalable "NoSQL" databases [11] have emerged. This is especially challenging at the data storage layer, where the multitude of competing NoSQL database technologies creates a complex and rapidly evolving landscape.
Scalable and Manageable Storage Systems
2000-12-01
This dissertation presents techniques that enable storage systems to be more cost-effectively scalable. Furthermore, it proposes an approach to ensure automatic load balancing, and addresses three key technical challenges to making storage systems more cost-effectively scalable and manageable.
Scalable Quantum Networks for Distributed Computing and Sensing
2016-04-01
Quantum networking protocols rely on probabilistic measurement, so we developed quantum memories and guided-wave implementations of the same, demonstrating controlled delay of a heralded single photon. Second, fundamental scalability requires a method to synchronize protocols based on quantum measurements, which are inherently probabilistic. (AFRL-AFOSR-UK-TR-2016-0007, Ian Walmsley, The University of Oxford, Final Report.)
Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels
NASA Astrophysics Data System (ADS)
Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching
2006-12-01
This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of the channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over burst-error channels have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with smaller and smoother quality degradation.
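The packet-size side of this trade-off can be illustrated with a toy model. The sketch below uses a memoryless (i.i.d.) bit-error channel rather than the paper's burst-error model, and the 320-bit header and candidate payload sizes are invented for illustration: large packets amortize header overhead at low BER, small packets survive better at high BER.

```python
def goodput(ber, payload_bits, header_bits=320):
    """Useful bits delivered per transmitted bit over an i.i.d. bit-error
    channel: payload fraction times whole-packet survival probability."""
    n = payload_bits + header_bits
    return (payload_bits / n) * (1 - ber) ** n

def best_payload(ber, candidates=(256, 512, 1024, 2048, 4096, 8192)):
    """Pick the candidate payload size that maximizes goodput for a
    given channel bit error rate."""
    return max(candidates, key=lambda b: goodput(ber, b))
```

An adaptive scheme re-evaluates this choice (and the per-layer FEC strength) as the channel estimate changes, protecting the base layer more heavily than enhancement layers.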
Scalability enhancement of AODV using local link repairing
NASA Astrophysics Data System (ADS)
Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.
2014-09-01
Dynamic changes in the topology of an ad hoc network make it difficult to design an efficient routing protocol. Scalability of ad hoc networks is also an important research criterion in this field. Most research on ad hoc networks focuses on routing and medium access protocols and produces simulation results only for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node is proposed for the case of link failure. These protocols are beacon-less, meaning the periodic hello message is removed from basic AODV to improve scalability, and a few control packet formats have been changed to accommodate the suggested modifications. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. The simulation results make clear that the scalability of the routing protocol improves because of the link repairing method. We have tested the protocols on different terrain areas with approximately constant node density and different traffic loads.
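The next-to-next-node repair idea can be sketched as a route-splicing step. This is an illustrative abstraction, not the protocol's actual packet exchange: the `local_repair` helper and the adjacency-map view of the network are assumptions, and a real implementation discovers the detour via control messages rather than a global map.

```python
def local_repair(route, failed_index, adjacency):
    """On failure of link route[failed_index] -> route[failed_index + 1],
    splice in a one-hop detour to the next-to-next node instead of
    triggering a full AODV route rediscovery from the source."""
    if failed_index + 2 >= len(route):
        return None  # broken link was the last hop; full rediscovery needed
    node, target = route[failed_index], route[failed_index + 2]
    for neighbour in adjacency[node]:
        if neighbour not in route and target in adjacency[neighbour]:
            return route[:failed_index + 1] + [neighbour] + route[failed_index + 2:]
    return None  # no local detour found; fall back to route discovery
```

Because the repair is resolved near the break, the source is spared a network-wide route request flood, which is the main reason local repair improves scalability.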
Wanted: Scalable Tracers for Diffusion Measurements
2015-01-01
Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
Local and global responses in complex gene regulation networks
NASA Astrophysics Data System (ADS)
Tsuchiya, Masa; Selvarajoo, Kumar; Piras, Vincent; Tomita, Masaru; Giuliani, Alessandro
2009-04-01
An exacerbated sensitivity to apparently minor stimuli and a general resilience of the entire system coexist side-by-side in biological systems. This apparent paradox can be explained by considering biological systems as very strongly interconnected networks. Some nodes of these networks, thanks to their peculiar location in the network architecture, are responsible for the sensitivity aspects, while the large degree of interconnection underlies the resilience properties of the system. One relevant feature of the high connectivity of gene regulation networks is the emergence of collective ordered phenomena influencing the entire genome and not only a specific portion of transcripts. The great majority of existing gene regulation models give the impression of purely local 'hard-wired' mechanisms, disregarding the emergence of global ordered behavior encompassing thousands of genes, while the general, genome-wide aspects are less well known. Here we address, from a data analysis perspective, the discrimination between local- and global-scale regulation; this goal was achieved by examining two biological systems: the innate immune response in macrophages and oscillating growth dynamics in yeast. Our aim was to reconcile the 'hard-wired' local view of gene regulation with a global, continuous and scalable one borrowed from statistical physics. This reconciliation is based on the network paradigm, in which the local 'hard-wired' activities correspond to the activation of specific crucial nodes in the regulation network, while the scalable continuous responses can be equated to the collective oscillations of the network after a perturbation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn
2015-12-20
Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach, where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.
Blind Seer: A Scalable Private DBMS
2014-05-01
Blind Seer supports a large number of searchable index terms per DB row, in time comparable to (insecure) MySQL: many practical queries can be privately executed with work 1.2-3 times slower than MySQL, although some queries are costlier. We support a rich query set, including searching on arbitrary boolean formulas on keywords and ranges.
Multi-scale Functional and Molecular Photoacoustic Tomography
Yao, Junjie; Xia, Jun; Wang, Lihong V.
2015-01-01
Photoacoustic tomography (PAT) combines rich optical absorption contrast with the high spatial resolution of ultrasound at depths in tissue. The high scalability of PAT has enabled anatomical imaging of biological structures ranging from organelles to organs. The inherent functional and molecular imaging capabilities of PAT have further allowed it to measure important physiological parameters and track critical cellular activities. Integration of PAT with other imaging technologies provides complementary capabilities and can potentially accelerate the clinical translation of PAT. PMID:25933617
Energy challenges in optical access and aggregation networks.
Kilper, Daniel C; Rastegarfar, Houman
2016-03-06
Scalability is a critical issue for access and aggregation networks as they must support the growth in both the size of data capacity demands and the multiplicity of access points. The number of connected devices, the Internet of Things, is growing to the tens of billions. Prevailing communication paradigms are reaching physical limitations that make continued growth problematic. Challenges are emerging in electronic and optical systems, and energy increasingly plays a central role. With the spectral efficiency of optical systems approaching the Shannon limit, increasing parallelism is required to support higher capacities. For electronic systems, as the density and speed increase, the total system energy, thermal density and energy per bit are moving into regimes that become impractical to support; for example, they would require single-chip processor powers above the 100 W limit common today. We examine communication network scaling and energy use from the Internet core down to the computer processor core and consider implications for optical networks. Optical switching in data centres is identified as a potential model from which scalable access and aggregation networks for the future Internet, with the application of integrated photonic devices and intelligent hybrid networking, will emerge. © 2016 The Author(s).
Biometric identification: a holistic perspective
NASA Astrophysics Data System (ADS)
Nadel, Lawrence D.
2007-04-01
Significant advances continue to be made in biometric technology. However, the global war on terrorism and our increasingly electronic society have created the societal need for large-scale, interoperable biometric capabilities that challenge the capabilities of current off-the-shelf technology. At the same time, there are concerns that large-scale implementation of biometrics will infringe our civil liberties and offer increased opportunities for identity theft. This paper looks beyond the basic science and engineering of biometric sensors and fundamental matching algorithms and offers approaches for achieving greater performance and acceptability of applications enabled with currently available biometric technologies. The discussion focuses on three primary biometric system aspects: performance and scalability, interoperability, and cost benefit. Significant improvements in system performance and scalability can be achieved through careful consideration of the following elements: biometric data quality, human factors, operational environment, workflow, multibiometric fusion, and integrated performance modeling. Application interoperability hinges upon some of the factors noted above as well as adherence to interface, data, and performance standards. However, there are times when the price of conforming to such standards is decreased local system performance. The development of biometric performance-based cost-benefit models can help determine realistic requirements and acceptable designs.
From EGEE Operations Portal towards EGI Operations Portal
NASA Astrophysics Data System (ADS)
Cordier, Hélène; L'Orphelin, Cyril; Reynaud, Sylvain; Lequeux, Olivier; Loikkanen, Sinikka; Veyre, Pierre
Grid operators in EGEE have been using a dedicated dashboard as their central operational tool, which has remained stable and scalable for the last 5 years despite continuous upgrades driven by specifications from users, monitoring tools, and data providers. In EGEE-III, the recent regionalisation of operations led the Operations Portal developers to conceive a standalone instance of this tool. We will see how the dashboard reorganization paved the way for the re-engineering of the portal itself. The outcome is an easily deployable package customized with relevant information sources and specific decentralized operational requirements. This package is composed of a generic and scalable data access mechanism, Lavoisier; a renowned PHP framework for configuration flexibility, Symfony; and a MySQL database. VO life cycle and operational information, EGEE broadcast, and downtime notifications are next in the major reorganization, until all other key features of the Operations Portal are migrated to the framework. Feature specifications will be sketched at the same time to adapt to EGI requirements and to upgrade. Future work on feature regionalisation, on new advanced features, and on strategy planning will be tracked in EGI-InSPIRE through the Operations Tools Advisory Group (OTAG), where all users, customers, and third parties of the Operations Portal are represented from January 2010.
Distributed controller clustering in software defined networks
Gani, Abdullah; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution toward addressing the issues of reliability, scalability, fault tolerance, and interoperability. PMID:28384312
Conceptual Architecture for Obtaining Cyber Situational Awareness
2014-06-01
Understanding Command and Control. Washington, D.C.: CCRP Publication Series, 2006. 255 p. ISBN 1-893723-17-8. [10] SKYBOX SECURITY. Skybox View Developer's Guide. Version 11. 2010. [11] SCALABLE Network. EXata communications simulation platform.
Scalable Power-Component Models for Concept Testing
2011-08-17
Scalable Power-Component Models for Concept Testing, Mazzola, et al. UNCLASSIFIED: Dist A, approved for public release. 2011 NDIA Ground Vehicle Systems Engineering and Technology Symposium (GVSETS).
TriG: Next Generation Scalable Spaceborne GNSS Receiver
NASA Technical Reports Server (NTRS)
Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.
2012-01-01
TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.
Microresonator-based solitons for massively parallel coherent optical communications
NASA Astrophysics Data System (ADS)
Marin-Palomo, Pablo; Kemal, Juned N.; Karpov, Maxim; Kordts, Arne; Pfeifle, Joerg; Pfeiffer, Martin H. P.; Trocha, Philipp; Wolf, Stefan; Brasch, Victor; Anderson, Miles H.; Rosenberger, Ralf; Vijayan, Kovendhan; Freude, Wolfgang; Kippenberg, Tobias J.; Koos, Christian
2017-06-01
Solitons are waveforms that preserve their shape while propagating, as a result of a balance of dispersion and nonlinearity. Soliton-based data transmission schemes were investigated in the 1980s and showed promise as a way of overcoming the limitations imposed by dispersion of optical fibres. However, these approaches were later abandoned in favour of wavelength-division multiplexing schemes, which are easier to implement and offer improved scalability to higher data rates. Here we show that solitons could make a comeback in optical communications, not as a competitor but as a key element of massively parallel wavelength-division multiplexing. Instead of encoding data on the soliton pulse train itself, we use continuous-wave tones of the associated frequency comb as carriers for communication. Dissipative Kerr solitons (DKSs) (solitons that rely on a double balance of parametric gain and cavity loss, as well as dispersion and nonlinearity) are generated as continuously circulating pulses in an integrated silicon nitride microresonator via four-photon interactions mediated by the Kerr nonlinearity, leading to low-noise, spectrally smooth, broadband optical frequency combs. We use two interleaved DKS frequency combs to transmit a data stream of more than 50 terabits per second on 179 individual optical carriers that span the entire telecommunication C and L bands (centred around infrared telecommunication wavelengths of 1.55 micrometres). We also demonstrate coherent detection of a wavelength-division multiplexing data stream by using a pair of DKS frequency combs—one as a multi-wavelength light source at the transmitter and the other as the corresponding local oscillator at the receiver. This approach exploits the scalability of microresonator-based DKS frequency comb sources for massively parallel optical communications at both the transmitter and the receiver. 
Our results demonstrate the potential of these sources to replace the arrays of continuous-wave lasers that are currently used in high-speed communications. In combination with advanced spatial multiplexing schemes and highly integrated silicon photonic circuits, DKS frequency combs could bring chip-scale petabit-per-second transceivers into reach.
NASA Technical Reports Server (NTRS)
Raible, Daniel E.; Dinca, Dragos; Nayfeh, Taysir H.
2012-01-01
An effective form of wireless power transmission (WPT) has been developed to enable extended mission durations, increased coverage and added capabilities for both space and terrestrial applications that may benefit from optically delivered electrical energy. The high intensity laser power beaming (HILPB) system enables long range optical "refueling" of electric platforms such as micro unmanned aerial vehicles (MUAV), airships, robotic exploration missions and spacecraft platforms. To further advance the HILPB technology, the focus of this investigation is to determine the optimal laser wavelength to be used with the HILPB receiver, which utilizes vertical multi-junction (VMJ) photovoltaic cells. Frequency optimization of the laser system is necessary in order to maximize the conversion efficiency at continuous high intensities, and thus increase the delivered power density of the HILPB system. Initial spectral characterizations of the device performed at the NASA Glenn Research Center (GRC) indicate the approximate range of peak optical-to-electrical conversion efficiencies, but these data sets represent transient conditions under lower levels of illumination. Extending these results to high levels of steady state illumination, with attention given to the compatibility of available commercial off-the-shelf semiconductor laser sources and atmospheric transmission constraints, is the primary focus of this paper. Experimental hardware results utilizing high power continuous wave (CW) semiconductor lasers at four different operational frequencies near the indicated band gap of the photovoltaic VMJ cells are presented and discussed. In addition, the highest receiver power density achieved to date is demonstrated using a single photovoltaic VMJ cell, which provided an exceptionally high electrical output of 13.6 W/sq cm at an optical-to-electrical conversion efficiency of 24 percent.
These results are very promising and scalable, as a potential 1.0 sq m HILPB receiver of similar construction would be able to generate 136 kW of electrical power under similar conditions.
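The scaling claim is simple area arithmetic and is easy to check; the sketch below only restates the reported numbers (the function names are illustrative, and linear scaling with receiver area is the stated assumption):

```python
def receiver_power_w(power_density_w_per_cm2, area_m2):
    """Scale the per-cell electrical power density to a full receiver,
    assuming linear scaling with area (1 m^2 = 10^4 cm^2)."""
    return power_density_w_per_cm2 * area_m2 * 1.0e4

def implied_optical_input_w_per_cm2(electrical_w_per_cm2, efficiency):
    """Optical intensity on the cell implied by its electrical output
    and optical-to-electrical conversion efficiency."""
    return electrical_w_per_cm2 / efficiency
```

At 13.6 W/cm² electrical over 1 m², this reproduces the quoted 136 kW; at 24 percent efficiency, the implied optical intensity on the cell is roughly 57 W/cm².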
Performances of the PIPER scalable child human body model in accident reconstruction
Giordano, Chiara; Kleiven, Svein
2017-01-01
Human body models (HBMs) have the potential to provide significant insights into the pediatric response to impact. This study describes a scalable/posable approach to perform child accident reconstructions using the Position and Personalize Advanced Human Body Models for Injury Prediction (PIPER) scalable child HBM of different ages and in different positions obtained by the PIPER tool. Overall, the PIPER scalable child HBM managed reasonably well to predict the injury severity and location of the children involved in real-life crash scenarios documented in the medical records. The developed methodology and workflow is essential for future work to determine child injury tolerances based on the full Child Advanced Safety Project for European Roads (CASPER) accident reconstruction database. With the workflow presented in this study, the open-source PIPER scalable HBM combined with the PIPER tool is also foreseen to have implications for improved safety designs for a better protection of children in traffic accidents. PMID:29135997
Grieger, Joshua C; Soltys, Stephen M; Samulski, Richard Jude
2016-01-01
Adeno-associated virus (AAV) has shown great promise as a gene therapy vector in multiple aspects of preclinical and clinical applications. Many developments including new serotypes as well as self-complementary vectors are now entering the clinic. With these ongoing vector developments, continued effort has been focused on scalable manufacturing processes that can efficiently generate high-titer, highly pure, and potent quantities of rAAV vectors. Utilizing the relatively simple and efficient transfection system of HEK293 cells as a starting point, we have successfully adapted an adherent HEK293 cell line from a qualified clinical master cell bank to grow in animal component-free suspension conditions in shaker flasks and WAVE bioreactors that allows for rapid and scalable rAAV production. Using the triple transfection method, the suspension HEK293 cell line generates greater than 1 × 105 vector genome containing particles (vg)/cell or greater than 1 × 1014 vg/l of cell culture when harvested 48 hours post-transfection. To achieve these yields, a number of variables were optimized such as selection of a compatible serum-free suspension media that supports both growth and transfection, selection of a transfection reagent, transfection conditions and cell density. A universal purification strategy, based on ion exchange chromatography methods, was also developed that results in high-purity vector preps of AAV serotypes 1–6, 8, 9 and various chimeric capsids tested. This user-friendly process can be completed within 1 week, results in high full to empty particle ratios (>90% full particles), provides postpurification yields (>1 × 1013 vg/l) and purity suitable for clinical applications and is universal with respect to all serotypes and chimeric particles. 
To date, this scalable manufacturing technology has been utilized to manufacture GMP phase 1 clinical AAV vectors for retinal neovascularization (AAV2), Hemophilia B (scAAV8), giant axonal neuropathy (scAAV9), and retinitis pigmentosa (AAV2), which have been administered into patients. In addition, we report a minimum of a fivefold increase in overall vector production by implementing a perfusion method that entails harvesting rAAV from the culture media at numerous time-points post-transfection. PMID:26437810
Power-Scalable Blue-Green Bessel Beams
2016-02-23
Siddharth Ramachandran, Photonics Center, Boston University, 8 Saint Mary's Street, Boston, MA 02215; phone: (617) 353-9811. Final Technical Report, JAN 2011 - DEC 2013. Keywords: fiber lasers, non-traditional emission wavelengths, high-power blue-green tunable lasers.
Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing
2012-12-14
Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion…
Current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of…
Modular Universal Scalable Ion-trap Quantum Computer
2016-06-02
The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally-expandable ion-trap quantum computer. Final Report, 1-Aug-2010 to 31-Jan-2016; Distribution Unlimited. Keywords: ion trap quantum computation, scalable modular architectures.
Scalable L-infinite coding of meshes.
Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter
2010-01-01
The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
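The difference between the two target metrics is easy to make concrete. The sketch below is illustrative only (vertex lists and function names are hypothetical, and the real codec bounds the error in the wavelet domain rather than computing it after the fact):

```python
def l2_distortion(orig, decoded):
    """Mean squared vertex-position error: the traditional (global)
    target distortion metric for mesh coding."""
    return sum((a - b) ** 2
               for v, w in zip(orig, decoded)
               for a, b in zip(v, w)) / len(orig)

def l_inf_distortion(orig, decoded):
    """Maximum per-coordinate vertex displacement: the local-error bound
    an L-infinite-constrained codec guarantees at every decoding."""
    return max(abs(a - b)
               for v, w in zip(orig, decoded)
               for a, b in zip(v, w))
```

A mesh can have a small mean-square error while one vertex is badly displaced; constraining the L-infinite value instead caps the worst-case local error, which is what the proposed codec guarantees at every rate.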
Pirotte, Geert; Kesters, Jurgen; Verstappen, Pieter; Govaerts, Sanne; Manca, Jean; Lutsen, Laurence; Vanderzande, Dirk; Maes, Wouter
2015-10-12
Organic photovoltaics (OPV) have attracted great interest as a solar cell technology with appealing mechanical, aesthetical, and economies-of-scale features. To drive OPV toward economic viability, low-cost, large-scale module production has to be realized in combination with increased top-quality material availability and minimal batch-to-batch variation. To this extent, continuous flow chemistry can serve as a powerful tool. In this contribution, a flow protocol is optimized for the high performance benzodithiophene-thienopyrroledione copolymer PBDTTPD and the material quality is probed through systematic solar-cell evaluation. A stepwise approach is adopted to turn the batch process into a reproducible and scalable continuous flow procedure. Solar cell devices fabricated using the obtained polymer batches deliver an average power conversion efficiency of 7.2 %. Upon incorporation of an ionic polythiophene-based cathodic interlayer, the photovoltaic performance could be enhanced to a maximum efficiency of 9.1 %. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Xue, Weiqi; Sales, Salvador; Capmany, José; Mørk, Jesper
2010-03-15
In this work we demonstrate for the first time, to the best of our knowledge, a continuously tunable 360 degrees microwave phase shifter spanning a microwave bandwidth of several tens of GHz (up to 40 GHz). The proposed device exploits the phenomenon of coherent population oscillations, enhanced by optical filtering, in combination with a regeneration stage realized by four-wave mixing effects. This combination provides scalability: three hybrid stages are demonstrated but the technology allows an all-integrated device. The microwave operation frequency limitations of the suggested technique, dictated by the underlying physics, are also analyzed.
Radiologic image communication and archive service: a secure, scalable, shared approach
NASA Astrophysics Data System (ADS)
Fellingham, Linda L.; Kohli, Jagdish C.
1995-11-01
The Radiologic Image Communication and Archive (RICA) service is designed to provide a shared archive for medical images to the widest possible audience of customers. Images are acquired from a number of different modalities, each available from many different vendors. Images are acquired digitally from those modalities which support direct digital output and by digitizing films for projection x-ray exams. The RICA Central Archive receives standard DICOM 3.0 messages and data streams from the medical imaging devices at customer institutions over the public telecommunication network. RICA represents a completely scalable resource. The user pays only for what he is using today with the full assurance that as the volume of image data that he wishes to send to the archive increases, the capacity will be there to accept it. To provide this seamless scalability imposes several requirements on the RICA architecture: (1) RICA must support the full array of transport services. (2) The Archive Interface must scale cost-effectively to support local networks that range from the very small (one x-ray digitizer in a medical clinic) to the very large and complex (a large hospital with several CTs, MRs, Nuclear medicine devices, ultrasound machines, CRs, and x-ray digitizers). (3) The Archive Server must scale cost-effectively to support rapidly increasing demands for service providing storage for and access to millions of patients and hundreds of millions of images. The architecture must support the incorporation of improved technology as it becomes available to maintain performance and remain cost-effective as demand rises.
ILP-based maximum likelihood genome scaffolding
2014-01-01
Background Interest in de novo genome assembly has been renewed in the past decade due to rapid advances in high-throughput sequencing (HTS) technologies which generate relatively short reads resulting in highly fragmented assemblies consisting of contigs. Additional long-range linkage information is typically used to orient, order, and link contigs into larger structures referred to as scaffolds. Due to library preparation artifacts and erroneous mapping of reads originating from repeats, scaffolding remains a challenging problem. In this paper, we provide a scalable scaffolding algorithm (SILP2) employing a maximum likelihood model capturing read mapping uncertainty and/or non-uniformity of contig coverage which is solved using integer linear programming. A Non-Serial Dynamic Programming (NSDP) paradigm is applied to render our algorithm useful in the processing of larger mammalian genomes. To compare scaffolding tools, we employ novel quantitative metrics in addition to the extant metrics in the field. We have also expanded the set of experiments to include scaffolding of low-complexity metagenomic samples. Results SILP2 achieves better scalability through a more efficient NSDP algorithm than the previous release of SILP. The results show that SILP2 compares favorably to the previous methods OPERA and MIP in both scalability and accuracy for scaffolding single genomes of up to human size, and significantly outperforms them on scaffolding low-complexity metagenomic samples. Conclusions Equipped with NSDP, SILP2 is able to scaffold large mammalian genomes, resulting in the longest and most accurate scaffolds. The ILP formulation for the maximum likelihood model is shown to be flexible enough to handle metagenomic samples. PMID:25253180
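The maximum-likelihood orientation component of scaffolding can be illustrated with a toy brute-force sketch in Python; SILP2 itself solves a much richer model (orientation, order, and spacing) with integer linear programming and NSDP, and the link data below is hypothetical:

```python
from itertools import product

# Toy maximum-likelihood contig orientation: each paired-read link prefers
# a relative orientation of its two contigs; pick the orientation assignment
# agreeing with the largest total link weight. (SILP2 handles this, plus
# contig ordering and spacing, as an ILP; this brute force is illustrative.)
# Each link: (contig_a, contig_b, same_orientation_expected, weight)
links = [
    ("c1", "c2", True, 5),
    ("c2", "c3", False, 3),
    ("c1", "c3", False, 1),
]
contigs = ["c1", "c2", "c3"]

def best_orientation(contigs, links):
    """Exhaustively search orientation assignments, maximizing link agreement."""
    best, best_score = None, -1.0
    for bits in product([False, True], repeat=len(contigs)):
        orient = dict(zip(contigs, bits))
        score = sum(
            w for a, b, same, w in links
            if (orient[a] == orient[b]) == same
        )
        if score > best_score:
            best, best_score = orient, score
    return best, best_score

orient, score = best_orientation(contigs, links)
# c1 and c2 end up co-oriented, c3 flipped, satisfying the two heaviest links.
```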
NASA Astrophysics Data System (ADS)
Jin, Sung Hun; Dunham, Simon; Xie, Xu; Rogers, John A.
2015-09-01
Among the remarkable variety of semiconducting nanomaterials that have been discovered over the past two decades, single-walled carbon nanotubes remain uniquely well suited for applications in high-performance electronics, sensors and other technologies. The most advanced opportunities demand the ability to form perfectly aligned, horizontal arrays of purely semiconducting, chemically pristine carbon nanotubes. Here, we present strategies that offer this capability. Nanoscale thermocapillary flows in thin-film organic coatings followed by reactive ion etching serve as highly efficient means for selectively removing metallic carbon nanotubes from electronically heterogeneous aligned arrays grown on quartz substrates. The low temperatures and unusual physics associated with this process enable robust, scalable operation, with clear potential for practical use. For the purpose of selective Joule heating of only the metallic nanotubes, two representative platforms are proposed and confirmed. One is achieved by selective Joule heating associated with thin-film transistors with a partial gate structure. The other is based on a simple, scalable, large-area scheme of microwave irradiation using micro-strip dipole antennas of low-work-function metals. In this study, we demonstrated field-effect transistors based on the purified semiconducting SWNTs, with mobilities above 1,000 cm2/(V·s), on/off switching ratios of ~10,000, and current outputs in the milliamp range. Furthermore, as a demonstration of large-area scalability and simplicity, implementing the microwave-based purification on large arrays consisting of ~20,000 SWNTs completely removes all of the metallic SWNTs (~7,000) to yield a purity of semiconducting SWNTs of at least 99.9925%, and likely significantly higher.
Wang, Sibo; Ren, Zheng; Guo, Yanbing; ...
2016-03-21
We report that the scalable three-dimensional (3-D) integration of functional nanostructures into applicable platforms represents a promising technology to meet the ever-increasing demands of fabricating high-performance devices featuring cost-effectiveness, structural sophistication and multi-functionality. Such an integration process generally involves a diverse array of nanostructural entities (nano-entities) consisting of dissimilar nanoscale building blocks such as nanoparticles, nanowires, and nanofilms made of metals, ceramics, or polymers. Various synthetic strategies and integration methods have enabled the successful assembly of both structurally and functionally tailored nano-arrays into a unique class of monolithic devices. The performance of nano-array based monolithic devices is dictated by a few important factors such as substrate material selection, nanostructure composition and nano-architecture geometry. Therefore, the rational material selection and nano-entity manipulation during the nano-array integration process, aiming to exploit the advantageous characteristics of nanostructures and their ensembles, are critical steps towards bridging the design of nanostructure-integrated monolithic devices with various practical applications. In this article, we highlight the latest research progress of two-dimensional (2-D) and 3-D metal and metal oxide based nanostructural integrations into prototype devices with ultrahigh efficiency, good robustness and improved functionality. Lastly, selective examples of nano-array integration, scalable nanomanufacturing and representative monolithic devices such as catalytic converters, sensors and batteries will be utilized as the connecting dots to display a roadmap from hierarchical nanostructural assembly to practical nanotechnology implications ranging from energy and environmental to chemical and biotechnology areas.
Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications.
Wei, Yiqiao; Chen, Jingjun; Hwang, Seung-Hoon
2018-03-02
For vehicle-to-vehicle (V2V) communication, such issues as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer due to the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling. However, it has been neglected with regard to the physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme which adjusts the data rate between vehicles according to the number of connectable vehicles nearby. Also, we investigate the performance of the proposed method using computer simulations and compare it with the conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications. PMID:29498646
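The neighbor-count-triggered rate adaptation described in this abstract can be sketched as follows; the thresholds and rate values are illustrative assumptions, not figures from the paper:

```python
# Illustrative sketch of adaptive V2V transmission: the data rate is
# lowered as the number of connectable neighbor vehicles grows, trading
# throughput for link reliability. All thresholds and rates (Mbit/s)
# are hypothetical placeholders.
RATE_TABLE = [
    (5, 27.0),   # <= 5 neighbors: sparse topology, highest rate
    (15, 12.0),  # <= 15 neighbors: moderate density, mid rate
    (30, 6.0),   # <= 30 neighbors: dense traffic, robust low rate
]
FALLBACK_RATE = 3.0  # very dense topology: most robust modulation

def select_rate(num_neighbors: int) -> float:
    """Pick a transmission rate from the number of adjacent vehicles."""
    for threshold, rate in RATE_TABLE:
        if num_neighbors <= threshold:
            return rate
    return FALLBACK_RATE
```

For example, select_rate(3) returns the highest rate, while select_rate(100) falls back to the most robust one.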
Hu, Ya; Peng, Kui-Qing; Liu, Lin; Qiao, Zhen; Huang, Xing; Wu, Xiao-Ling; Meng, Xiang-Min; Lee, Shuit-Tong
2014-01-13
Silicon nanowires (SiNWs) are attracting growing interest due to their unique properties and promising applications in photovoltaic devices, thermoelectric devices, lithium-ion batteries, and biotechnology. Low-cost mass production of SiNWs is essential for SiNWs-based nanotechnology commercialization. However, economic, controlled large-scale production of SiNWs remains challenging and rarely attainable. Here, we demonstrate a facile strategy capable of low-cost, continuous-flow mass production of SiNWs on an industrial scale. The strategy relies on substrate-enhanced metal-catalyzed electroless etching (MCEE) of silicon using dissolved oxygen in aqueous hydrofluoric acid (HF) solution as an oxidant. The distinct advantages of this novel MCEE approach, such as simplicity, scalability and flexibility, make it an attractive alternative to conventional MCEE methods.
Wrapping with a splash: High-speed encapsulation with ultrathin sheets
NASA Astrophysics Data System (ADS)
Kumar, Deepak; Paulsen, Joseph D.; Russell, Thomas P.; Menon, Narayanan
2018-02-01
Many complex fluids rely on surfactants to contain, protect, or isolate liquid drops in an immiscible continuous phase. Thin elastic sheets can wrap liquid drops in a spontaneous process driven by capillary forces. For encapsulation by sheets to be practically viable, a rapid, continuous, and scalable process is essential. We exploit the fast dynamics of droplet impact to achieve wrapping of oil droplets by ultrathin polymer films in a water phase. Despite the violence of splashing events, the process robustly yields wrappings that are optimally shaped to maximize the enclosed fluid volume and have near-perfect seams. We achieve wrappings of targeted three-dimensional (3D) shapes by tailoring the 2D boundary of the films and show the generality of the technique by producing both oil-in-water and water-in-oil wrappings.
An Event-Based Approach to Distributed Diagnosis of Continuous Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhurry, Indranil; Biswas, Gautam; Koutsoukos, Xenofon
2010-01-01
Distributed fault diagnosis solutions are becoming necessary due to the complexity of modern engineering systems, and the advent of smart sensors and computing elements. This paper presents a novel event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, based on a qualitative abstraction of measurement deviations from the nominal behavior. We systematically derive dynamic fault signatures expressed as event-based fault models. We develop a distributed diagnoser design algorithm that uses these models for designing local event-based diagnosers based on global diagnosability analysis. The local diagnosers each generate globally correct diagnosis results locally, without a centralized coordinator, and by communicating a minimal number of measurements between themselves. The proposed approach is applied to a multi-tank system, and results demonstrate a marked improvement in scalability compared to a centralized approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the necessary understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools.
This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and the MRNet scalability infrastructure.
The BACnet Campus Challenge - Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masica, Ken; Tom, Steve
2015-12-01
Here, the BACnet protocol was designed to achieve interoperability among building automation vendors and evolve over time to include new functionality as well as support new communication technologies such as the Ethernet and IP protocols as they became prevalent and economical in the market place. For large multi-building, multi-vendor campus environments, standardizing on the BACnet protocol as an implementation strategy can be a key component in meeting the challenge of an interoperable, flexible, and scalable building automation system. The interoperability of BACnet is especially important when large campuses with legacy equipment have DDC upgrades to facilities performed over different time frames and use different contractors that install equipment from different vendors under the guidance of different campus HVAC project managers. In these circumstances, BACnet can serve as a common foundation for interoperability when potential variability exists in approaches to the design-build process by numerous parties over time. Likewise, BACnet support for a range of networking protocols and technologies can be a key strategy for achieving flexible and scalable automation systems as campuses and enterprises expand networking infrastructures using standard interoperable protocols like IP and Ethernet.
Favors, Zachary; Bay, Hamed Hosseini; Mutlu, Zafer; Ahmed, Kazi; Ionescu, Robert; Ye, Rachel; Ozkan, Mihrimah; Ozkan, Cengiz S
2015-02-06
The need for more energy dense and scalable Li-ion battery electrodes has become increasingly pressing with the ushering in of more powerful portable electronics and electric vehicles (EVs) requiring substantially longer range capabilities. Herein, we report on the first synthesis of nano-silicon paper electrodes synthesized via magnesiothermic reduction of electrospun SiO2 nanofiber paper produced by an in situ acid catalyzed polymerization of tetraethyl orthosilicate (TEOS) in-flight. Free-standing carbon-coated Si nanofiber binderless electrodes produce a capacity of 802 mAh g(-1) after 659 cycles with a Coulombic efficiency of 99.9%, which outperforms conventionally used slurry-prepared graphite anodes by over two times on an active material basis. Silicon nanofiber paper anodes offer a completely binder-free and Cu current collector-free approach to electrode fabrication with a silicon weight percent in excess of 80%. The absence of conductive powder additives, metallic current collectors, and polymer binders in addition to the high weight percent silicon all contribute to significantly increasing capacity at the cell level.
Yuan, Dajun; Lin, Wei; Guo, Rui; Wong, C P; Das, Suman
2012-06-01
Scalable fabrication of carbon nanotube (CNT) bundles is essential to future advances in several applications. Here, we report on the development of a simple, two-step method for fabricating vertically aligned and periodically distributed CNT bundles and periodically porous CNT films at the sub-micron scale. The method involves laser interference ablation (LIA) of an iron film followed by CNT growth via iron-catalyzed chemical vapor deposition. CNT bundles with square widths of 0.5-1.5 µm and lengths of 50-200 µm are grown atop the patterned catalyst over areas spanning 8 cm(2). The CNT bundles exhibit a high degree of control over square width, orientation, uniformity, and periodicity. This simple scalable method of producing well-placed and oriented CNT bundles demonstrates a high application potential for wafer-scale integration of CNT structures into various device applications, including IC interconnects, field emitters, sensors, batteries, and optoelectronics.
Large-Scale, Exhaustive Lattice-Based Structural Auditing of SNOMED CT
NASA Astrophysics Data System (ADS)
Zhang, Guo-Qiang
One criterion for the well-formedness of ontologies is that their hierarchical structure form a lattice. Formal Concept Analysis (FCA) has been used as a technique for assessing the quality of ontologies, but is not scalable to large ontologies such as SNOMED CT. We developed a methodology called Lattice-based Structural Auditing (LaSA), for auditing biomedical ontologies, implemented through automated SPARQL queries, in order to exhaustively identify all non-lattice pairs in SNOMED CT. The percentage of non-lattice pairs ranges from 0% to 1.66% among the 19 SNOMED CT hierarchies. Preliminary manual inspection of a limited portion of the 518K non-lattice pairs, among over 34 million candidate pairs, revealed inconsistent use of precoordination in SNOMED CT, but also a number of false positives. Our results are consistent with those based on FCA, with the advantage that the LaSA computational pipeline is scalable and applicable to ontological systems consisting mostly of taxonomic links. This work is based on collaboration with Olivier Bodenreider from the National Library of Medicine, Bethesda, USA.
Gigwa-Genotype investigator for genome-wide analyses.
Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre
2016-06-06
Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
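Gigwa's central idea of filtering variants on both annotation features and genotype patterns can be illustrated with a small in-memory sketch; the record layout and field names are assumptions for illustration, not Gigwa's actual MongoDB schema:

```python
# Toy sketch of variant filtering on both a functional annotation and a
# genotype pattern. The record structure is hypothetical; Gigwa itself
# stores and queries such data in MongoDB.
variants = [
    {"id": "v1", "effect": "missense", "genotypes": {"s1": "0/1", "s2": "1/1"}},
    {"id": "v2", "effect": "synonymous", "genotypes": {"s1": "0/0", "s2": "0/1"}},
    {"id": "v3", "effect": "missense", "genotypes": {"s1": "0/0", "s2": "0/0"}},
]

def filter_variants(records, effect, carrier_sample):
    """Keep variants with a given functional effect where the chosen
    sample carries at least one alternate allele."""
    return [
        r["id"]
        for r in records
        if r["effect"] == effect and "1" in r["genotypes"][carrier_sample]
    ]

print(filter_variants(variants, "missense", "s2"))  # ['v1']
```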
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Han, Joong Tark; Kim, Byung Kuk; Woo, Jong Seok; Jang, Jeong In; Cho, Joon Young; Jeong, Hee Jin; Jeong, Seung Yol; Seo, Seon Hee; Lee, Geon-Woong
2017-03-01
Directly printed superhydrophobic surfaces containing conducting nanomaterials can be used for a wide range of applications in terms of nonwetting, anisotropic wetting, and electrical conductivity. Here, we demonstrated that direct-printable and flexible superhydrophobic surfaces were fabricated on flexible substrates via with an ultrafacile and scalable screen printing with carbon nanotube (CNT)-based conducting pastes. A polydimethylsiloxane (PDMS)-polyethylene glycol (PEG) copolymer was used as an additive for conducting pastes to realize the printability of the conducting paste as well as the hydrophobicity of the printed surface. The screen-printed conducting surfaces showed a high water contact angle (WCA) (>150°) and low contact angle hysteresis (WCA < 5°) at 25 wt % PDMS-PEG copolymer in the paste, and they have an electrical conductivity of over 1000 S m -1 . Patterned superhydrophobic surfaces also showed sticky superhydrophobic characteristics and were used to transport water droplets. Moreover, fabricated films on metal meshes were used for an oil/water separation filter, and liquid evaporation behavior was investigated on the superhydrophobic and conductive thin-film heaters by applying direct current voltage to the film.
Al-Ashmouny, Khaled M; Chang, Sun-Il; Yoon, Euisik
2012-10-01
We report an analog front-end prototype designed in 0.25 μm CMOS process for hybrid integration into 3-D neural recording microsystems. For scaling towards massive parallel neural recording, the prototype has investigated some critical circuit challenges in power, area, interface, and modularity. We achieved extremely low power consumption of 4 μW/channel, optimized energy efficiency using moderate inversion in low-noise amplifiers (K of 5.98 × 10⁸ or NEF of 2.9), and minimized asynchronous interface (only 2 per 16 channels) for command and data capturing. We also implemented adaptable operations including programmable-gain amplification, power-scalable sampling (up to 50 kS/s/channel), wide configuration range (9-bit) for programmable gain and bandwidth, and 5-bit site selection capability (selecting 16 out of 128 sites). The implemented front-end module has achieved a reduction in noise-energy-area product by a factor of 5-25 times as compared to the state-of-the-art analog front-end approaches reported to date.
A scalable population code for time in the striatum.
Mello, Gustavo B M; Soares, Sofia; Paton, Joseph J
2015-05-04
To guide behavior and learn from its consequences, the brain must represent time over many scales. Yet, the neural signals used to encode time in the seconds-to-minute range are not known. The striatum is a major input area of the basal ganglia associated with learning and motor function. Previous studies have also shown that the striatum is necessary for normal timing behavior. To address how striatal signals might be involved in timing, we recorded from striatal neurons in rats performing an interval timing task. We found that neurons fired at delays spanning tens of seconds and that this pattern of responding reflected the interaction between time and the animals' ongoing sensorimotor state. Surprisingly, cells rescaled responses in time when intervals changed, indicating that striatal populations encoded relative time. Moreover, time estimates decoded from activity predicted timing behavior as animals adjusted to new intervals, and disrupting striatal function led to a decrease in timing performance. These results suggest that striatal activity forms a scalable population code for time, providing timing signals that animals use to guide their actions. Copyright © 2015 Elsevier Ltd. All rights reserved.
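The notion of a scalable (relative) population code, in which firing times rescale with the timed interval, can be caricatured numerically; the model below is a deliberate simplification for illustration, not the authors' analysis:

```python
# Caricature of a scalable (relative) time code: each cell's preferred
# firing time is a fixed fraction of the current interval, so normalizing
# by interval length recovers the same relative-time code at any scale.
# The fractions and intervals are arbitrary illustrative values.
def preferred_times(interval_s, fractions):
    """Firing times (seconds) that rescale with the timed interval."""
    return [f * interval_s for f in fractions]

fractions = [0.25, 0.5, 0.75]              # each cell's fixed relative phase
short = preferred_times(12.0, fractions)   # 12 s interval
long_ = preferred_times(60.0, fractions)   # 60 s interval

# Normalized by interval length, the two codes coincide.
assert [t / 12.0 for t in short] == [t / 60.0 for t in long_] == fractions
```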
Population-based structural variation discovery with Hydra-Multi.
Lindberg, Michael R; Hall, Ira M; Quinlan, Aaron R
2015-04-15
Current strategies for SNP and INDEL discovery incorporate sequence alignments from multiple individuals to maximize sensitivity and specificity. It is widely accepted that this approach also improves structural variant (SV) detection. However, multisample SV analysis has been stymied by the fundamental difficulties of SV calling, e.g. library insert size variability, SV alignment signal integration and detecting long-range genomic rearrangements involving disjoint loci. Extant tools suffer from poor scalability, which limits the number of genomes that can be co-analyzed and complicates analysis workflows. We have developed an approach that enables multisample SV analysis in hundreds to thousands of human genomes using commodity hardware. Here, we describe Hydra-Multi and measure its accuracy, speed and scalability using publicly available datasets provided by The 1000 Genomes Project and by The Cancer Genome Atlas (TCGA). Hydra-Multi is written in C++ and is freely available at https://github.com/arq5x/Hydra. Contact: aaronquinlan@gmail.com or ihall@genome.wustl.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
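Paired-end SV callers of this kind start from "discordant" read pairs, i.e. pairs whose alignment geometry contradicts the library's insert-size distribution. A minimal Python sketch of that first filtering step (an illustration only, not Hydra-Multi's actual clustering algorithm; the record fields are hypothetical):

```python
def discordant_pairs(pairs, mean_insert, sd_insert, z=4.0):
    # A read pair is flagged as discordant when its apparent insert size
    # deviates from the library mean by more than z standard deviations,
    # or when its two ends map to different chromosomes (a long-range
    # rearrangement signal involving disjoint loci).
    flagged = []
    for p in pairs:
        if p["chrom1"] != p["chrom2"]:
            flagged.append(p)
        elif abs(p["insert"] - mean_insert) > z * sd_insert:
            flagged.append(p)
    return flagged

pairs = [
    {"id": "p1", "chrom1": "chr1", "chrom2": "chr1", "insert": 310},
    {"id": "p2", "chrom1": "chr1", "chrom2": "chr1", "insert": 5200},
    {"id": "p3", "chrom1": "chr2", "chrom2": "chr7", "insert": 0},
]
print([p["id"] for p in discordant_pairs(pairs, 300, 30)])  # ['p2', 'p3']
```

Real callers then cluster such flagged pairs across samples to produce joint SV calls; the scalability challenge Hydra-Multi addresses lies in doing that clustering over hundreds of genomes at once.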
NASA Astrophysics Data System (ADS)
Grasso, J. R.; Bachèlery, P.
Self-organized systems are often used to describe natural phenomena where power laws and scale invariant geometry are observed. The Piton de la Fournaise volcano shows power-law behavior in many aspects. These include the temporal distribution of eruptions, the frequency-size distributions of induced earthquakes, dikes, fissures, lava flows and interflow periods, all evidence of self-similarity over a finite scale range. We show that the bounds to scale-invariance can be used to derive geomechanical constraints on both the volcano structure and the volcano mechanics. We ascertain that the present magma bodies are multi-lens reservoirs in a quasi-eruptive condition, i.e. a marginally critical state. The scaling organization of dynamic fluid-induced observables on the volcano, such as fluid induced earthquakes, dikes and surface fissures, appears to be controlled by underlying static hierarchical structure (geology) similar to that proposed for fluid circulations in human physiology. The emergence of saturation lengths for the scalable volcanic observables argues for the finite scalability of complex naturally self-organized critical systems, including volcano dynamics.
NASA Astrophysics Data System (ADS)
Boott, Charlotte E.; Gwyther, Jessica; Harniman, Robert L.; Hayward, Dominic W.; Manners, Ian
2017-08-01
The preparation of well-defined nanoparticles based on soft matter, using solution-processing techniques on a commercially viable scale, is a major challenge of widespread importance. Self-assembly of block copolymers in solvents that selectively solvate one of the segments provides a promising route to core-corona nanoparticles (micelles) with a wide range of potential uses. Nevertheless, significant limitations to this approach also exist. For example, the solution processing of block copolymers generally follows a separate synthesis step and is normally performed at high dilution. Moreover, non-spherical micelles—which are promising for many applications—are generally difficult to access, samples are polydisperse and precise dimensional control is not possible. Here we demonstrate the formation of platelet and cylindrical micelles at concentrations up to 25% solids via a one-pot approach—starting from monomers—that combines polymerization-induced and crystallization-driven self-assembly. We also show that performing the procedure in the presence of small seed micelles allows the scalable formation of low dispersity samples of cylindrical micelles of controlled length up to three micrometres.
Askar, Khalid; Leo, Sin-Yen; Xu, Can; Liu, Danielle; Jiang, Peng
2016-11-15
Here we report a rapid and scalable bottom-up technique for layer-by-layer (LBL) assembling near-infrared-active colloidal photonic crystals consisting of large (≥1 μm) silica microspheres. By combining a new electrostatics-assisted colloidal transferring approach with spontaneous colloidal crystallization at an air/water interface, we have demonstrated that the crystal transfer speed of traditional Langmuir-Blodgett-based colloidal assembly technologies can be enhanced by nearly 2 orders of magnitude. Importantly, the crystalline quality of the resultant photonic crystals is not compromised by this rapid colloidal assembly approach. They exhibit thickness-dependent near-infrared stop bands and well-defined Fabry-Perot fringes in the specular transmission and reflection spectra, which match well with the theoretical calculations using a scalar-wave approximation model and Fabry-Perot analysis. This simple yet scalable bottom-up technology can significantly improve the throughput in assembling large-area, multilayer colloidal crystals, which are of great technological importance in a variety of optical and non-optical applications ranging from all-optical integrated circuits to tissue engineering. Copyright © 2016 Elsevier Inc. All rights reserved.
Participatory monitoring to connect local and global priorities for forest restoration.
Evans, Kristen; Guariguata, Manuel R; Brancalion, Pedro H S
2018-06-01
New global initiatives to restore forest landscapes present an unparalleled opportunity to reverse deforestation and forest degradation. Participatory monitoring could play a crucial role in providing accountability, generating local buy-in, and catalyzing learning in monitoring systems that need scalability and adaptability to a range of local sites. We synthesized current knowledge from literature searches and interviews to provide lessons for the development of a scalable, multisite participatory monitoring system. Studies show that local people can collect accurate data on forest change, drivers of change, threats to reforestation, and biophysical and socioeconomic impacts that remote sensing cannot. They can do this at one-third the cost of professionals. Successful participatory monitoring systems collect information on a few simple indicators, respond to local priorities, provide appropriate incentives for participation, and catalyze learning and decision making based on frequent analyses and multilevel interactions with other stakeholders. Participatory monitoring could provide a framework for linking global, national, and local needs, aspirations, and capacities for forest restoration. © 2018 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenz, Daniel; Wolf, Felix
2016-02-17
The PRIMA-X (Performance Retargeting of Instrumentation, Measurement, and Analysis Technologies for Exascale Computing) project is the successor of the DOE PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing) project, which addressed the challenge of creating a core measurement infrastructure that would serve as a common platform for both integrating leading parallel performance systems (notably TAU and Scalasca) and developing next-generation scalable performance tools. The PRIMA-X project shifts the focus away from refactorization of robust performance tools towards a re-targeting of the parallel performance measurement and analysis architecture for extreme scales. The massive concurrency, asynchronous execution dynamics, hardware heterogeneity, and multi-objective prerequisites (performance, power, resilience) that identify exascale systems introduce fundamental constraints on the ability to carry forward existing performance methodologies. In particular, there must be a deemphasis of per-thread observation techniques to significantly reduce the otherwise unsustainable flood of redundant performance data. Instead, it will be necessary to assimilate multi-level resource observations into macroscopic performance views, from which resilient performance metrics can be attributed to the computational features of the application. This requires a scalable framework for node-level and system-wide monitoring and runtime analyses of dynamic performance information. Also, the interest in optimizing parallelism parameters with respect to performance and energy drives the integration of tool capabilities in the exascale environment further. Initially, PRIMA-X was a collaborative project between the University of Oregon (lead institution) and the German Research School for Simulation Sciences (GRS). Because Prof. 
Wolf, the PI at GRS, accepted a position as full professor at Technische Universität Darmstadt (TU Darmstadt) starting February 1st, 2015, the project ended at GRS on January 31st, 2015. This report reflects the work accomplished at GRS until then. The work of GRS is expected to be continued at TU Darmstadt. The first main accomplishment of GRS is the design of different thread-level aggregation techniques. We created a prototype capable of aggregating the thread-level information in performance profiles using these techniques. The next step will be the integration of the most promising techniques into the Score-P measurement system and their evaluation. The second main accomplishment is a substantial increase of Score-P’s scalability, achieved by improving the design of the system-tree representation in Score-P’s profile format. We developed a new representation and a distributed algorithm to create the scalable system tree representation. Finally, we developed a lightweight approach to MPI wait-state profiling. Former algorithms either needed piggy-backing, which can cause significant runtime overhead, or tracing, which comes with its own set of scaling challenges. Our approach works with local data only and, thus, is scalable and has very little overhead.
Long-Range Big Quantum-Data Transmission.
Zwerger, M; Pirker, A; Dunjko, V; Briegel, H J; Dür, W
2018-01-19
We introduce an alternative type of quantum repeater for long-range quantum communication with improved scaling with the distance. We show that by employing hashing, a deterministic entanglement distillation protocol with one-way communication, one obtains a scalable scheme that allows one to reach arbitrary distances, with constant overhead in resources per repeater station, and ultrahigh rates. In practical terms, we show that, also with moderate resources of a few hundred qubits at each repeater station, one can reach intercontinental distances. At the same time, a measurement-based implementation allows one to tolerate high loss but also operational and memory errors of the order of several percent per qubit. This opens the way for long-distance communication of big quantum data.
Demonstration of the advanced photovoltaic solar array
NASA Technical Reports Server (NTRS)
Kurland, R. M.; Stella, P. M.
1991-01-01
The Advanced Photovoltaic Solar Array (APSA) design is reviewed. The testing results and performance estimates are summarized. The APSA design represents a critical intermediate milestone for the NASA Office of Aeronautics, Exploration, and Technology (OAET) goal of 300 W/kg at Beginning Of Life (BOL), with specific performance characteristics of 130 W/kg (BOL) and 100 W/kg at End Of Life (EOL) for a 10-year geosynchronous (GEO) 10 kW (BOL) space power system. The APSA wing design is scalable over a power range of 1 to 15 kW and is suitable for a full range of missions including Low Earth Orbit (LEO), orbital transfer from LEO to GEO, and interplanetary missions out to 5 AU.
Multi-Purpose, Application-Centric, Scalable I/O Proxy Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M. C.
2015-06-15
MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop.
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig's scalability over many computing nodes and illustrate its use with example scripts. Available under the open source MIT license at http://sourceforge.net/projects/seqpig/
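SeqPig scripts compile down to Hadoop MapReduce jobs. As a rough illustration of the pattern being parallelized (plain Python standing in for Pig Latin; this is not SeqPig's actual API), a "group reads by chromosome and count" query reduces to a map step followed by a reduce step, with Hadoop handling the shuffle between them:

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) for each aligned read's reference name, mirroring
    # a GROUP BY ... COUNT statement that Pig would parallelize.
    for read_id, chrom in records:
        yield chrom, 1

def reduce_phase(pairs):
    # Sum counts per key; in Hadoop this runs after the shuffle/sort,
    # distributed across reducer nodes.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

reads = [("r1", "chr1"), ("r2", "chr2"), ("r3", "chr1")]
print(reduce_phase(map_phase(reads)))  # {'chr1': 2, 'chr2': 1}
```

The point of a system like SeqPig is that users write only the declarative query; partitioning, data movement, and fault tolerance are handled by the Pig/Hadoop layer underneath.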
Scalable Implementation of Finite Elements by NASA - Implicit (ScIFEi)
NASA Technical Reports Server (NTRS)
Warner, James E.; Bomarito, Geoffrey F.; Heber, Gerd; Hochhalter, Jacob D.
2016-01-01
Scalable Implementation of Finite Elements by NASA (ScIFEN) is a parallel finite element analysis code written in C++. ScIFEN is designed to provide scalable solutions to computational mechanics problems. It supports a variety of finite element types, nonlinear material models, and boundary conditions. This report provides an overview of ScIFEi ("Sci-Fi"), the implicit solid mechanics driver within ScIFEN. A description of ScIFEi's capabilities is provided, including an overview of the tools and features that accompany the software as well as a description of the input and output file formats. Results from several problems are included, demonstrating the efficiency and scalability of ScIFEi by comparison with finite element analysis using a commercial code.
Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications
NASA Astrophysics Data System (ADS)
Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei
2007-04-01
In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.
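For a concrete picture of what the processing elements compute, block-matching motion estimation minimizes a sum-of-absolute-differences (SAD) cost over candidate displacements. A scalar Python sketch follows (illustrative only; the paper's PE-ring architecture evaluates these same SAD costs in parallel, one candidate per PE):

```python
def sad(block_a, block_b):
    # Sum of absolute differences: the cost metric each processing
    # element (PE) evaluates in block-matching motion estimation.
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, bx, by, n, search):
    # Exhaustive search over all candidate displacements of an n x n
    # block at (bx, by); a PE array evaluates these costs concurrently.
    cur_blk = [row[bx:bx + n] for row in cur[by:by + n]]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue
            ref_blk = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(cur_blk, ref_blk)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best  # (minimum SAD, dx, dy)

# Synthetic frames: the current frame is the reference shifted left by
# one pixel, so the true motion vector of any interior block is (1, 0).
def v(x, y):
    return (3 * x + 5 * y) % 17

ref = [[v(x, y) for x in range(12)] for y in range(12)]
cur = [[v(x + 1, y) for x in range(12)] for y in range(12)]
print(full_search(ref, cur, 4, 4, 4, 2))  # (0, 1, 0)
```

The number of candidate displacements, and hence the number of PEs that can work in parallel, grows quadratically with the search range, which is why a scalable PE count matters for different application profiles.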
An MPI-based MoSST core dynamics model
NASA Astrophysics Data System (ADS)
Jiang, Weiyuan; Kuang, Weijia
2008-09-01
Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
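The scalability gap between the two communication schemes can be seen from a simple count of message rounds on the critical path (an illustrative model, not the paper's implementation): with p nodes, a master that exchanges data with each slave in turn is busy for p - 1 rounds, while recursive pairwise exchange needs only ceil(log2 p).

```python
import math

def master_slave_rounds(p):
    # The master exchanges data with each of the p - 1 slaves in turn,
    # so its communication time grows linearly with the node count.
    return p - 1

def divide_and_conquer_rounds(p):
    # Recursive pairwise exchange halves the remaining work each round,
    # so the critical path grows only logarithmically with p.
    return math.ceil(math.log2(p))

for p in (8, 32, 128):
    print(p, master_slave_rounds(p), divide_and_conquer_rounds(p))
# 8 -> 7 vs 3 rounds, 32 -> 31 vs 5, 128 -> 127 vs 7
```

At the 128-node scale tested in the paper, the linear scheme's root-bound communication is roughly an order of magnitude longer than the tree-structured one, which matches the qualitative claim that only the divide-and-conquer variant is scalable in communication.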
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath
The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
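For intuition about why tree-structured broadcasts scale, here is a minimal simulation of a classic binomial-tree broadcast schedule (a textbook baseline, not the Cheetah algorithms themselves, which add topology awareness and nonblocking progress):

```python
def binomial_broadcast_schedule(p):
    # Each round doubles the set of ranks holding the message, so all
    # p ranks are covered in ceil(log2(p)) rounds instead of the p - 1
    # sequential sends a naive root-only broadcast would need.
    rounds, have, step = [], {0}, 1
    while step < p:
        sends = [(src, src + step) for src in sorted(have) if src + step < p]
        have.update(dst for _, dst in sends)
        rounds.append(sends)
        step *= 2
    return rounds

sched = binomial_broadcast_schedule(8)
print(len(sched), sched[0])  # 3 rounds for 8 ranks; round 0 is [(0, 1)]
```

At the paper's scale of 24,576 processes a tree of this shape needs only 15 rounds, and concurrency within each round is what keeps the root from becoming a bottleneck.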
Architecture for Improving Terrestrial Logistics Based on the Web of Things
Castro, Miguel; Jara, Antonio J.; Skarmeta, Antonio
2012-01-01
Technological advances for improving supply chain efficiency present three key challenges for managing goods: tracking, tracing and monitoring (TTM), in order to satisfy the requirements for products such as perishable goods, where European legislation requires them to be shipped within a prescribed temperature range to ensure freshness and suitability for consumption. The proposed system integrates RFID for tracking and tracing through a distributed architecture developed for heavy goods vehicles, and the sensors embedded in the SunSPOT platform for monitoring the goods transported, based on the concept of the Internet of Things. This paper presents how the Internet of Things is integrated to improve terrestrial logistics, offering a comprehensive and flexible architecture with high scalability, according to the specific needs for reaching an item-level continuous monitoring solution. The major contribution of this work is the optimization of Embedded Web Services based on RESTful principles (the Web of Things) for access to TTM services at any time during the transportation of goods. Specifically, the monitoring patterns, such as observe and blockwise transfer, have been extended to meet the requirements of continuous conditional monitoring and of transferring full and partial inventories based on conditional queries. In summary, this work presents an evolution of previous TTM solutions, which were limited to trailer identification and environment monitoring, to a solution that provides the exhaustive item-level monitoring required for several use cases. This exhaustive monitoring has required new communication capabilities through the Web of Things, which have been optimized with the use and improvement of a set of communication patterns. PMID:22778657
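The "conditional observe" pattern described above can be caricatured in a few lines: instead of streaming every sample from a sensor node, notify only when a monitored value leaves its allowed range, as with the temperature limits for perishable goods (a hypothetical sketch, not the paper's actual CoAP/SunSPOT API):

```python
def conditional_observe(readings, low, high):
    # Push a notification only for samples outside the prescribed
    # range; in-range samples generate no network traffic, which is
    # what makes continuous item-level monitoring scale.
    notifications = []
    for t, value in readings:
        if not (low <= value <= high):
            notifications.append((t, value))
    return notifications

# Temperature samples (time, degrees C) against an allowed 2-8 C range.
samples = [(0, 4.0), (1, 4.5), (2, 9.2), (3, 4.1)]
print(conditional_observe(samples, 2.0, 8.0))  # [(2, 9.2)]
```

Blockwise transfer plays the complementary role for the bulky case: when a full or partial inventory does need to move, it is split into bounded-size blocks that constrained devices can handle.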
Compact multispectral photodiode arrays using micropatterned dichroic filters
NASA Astrophysics Data System (ADS)
Chandler, Eric V.; Fish, David E.
2014-05-01
The next generation of multispectral instruments requires significant improvements in both spectral band customization and portability to support the widespread deployment of application-specific optical sensors. The benefits of spectroscopy are well established for numerous applications including biomedical instrumentation, industrial sorting and sensing, chemical detection, and environmental monitoring. In this paper, spectroscopic (and by extension hyperspectral) and multispectral measurements are considered. The technology, tradeoffs, and application fits of each are evaluated. In the majority of applications, monitoring 4-8 targeted spectral bands of optimized wavelength and bandwidth provides the necessary spectral contrast and correlation. An innovative approach integrates precision spectral filters at the photodetector level to enable smaller sensors, simplify optical designs, and reduce device integration costs. This method supports user-defined spectral bands to create application-specific sensors in a small footprint with scalable cost efficiencies. A range of design configurations, filter options and combinations are presented together with typical applications ranging from basic multi-band detection to stringent multi-channel fluorescence measurement. An example implementation packages 8 narrowband silicon photodiodes into a 9 x 9 mm ceramic LCC (leadless chip carrier) footprint. This package is designed for multispectral applications ranging from portable color monitors to purpose-built OEM industrial and scientific instruments. Use of an eight-channel multispectral photodiode array typically eliminates 10-20 components from a device bill-of-materials (BOM), streamlining the optical path and shrinking the footprint by 50% or more. A stepwise design approach for multispectral sensors is discussed - including spectral band definition, optical design tradeoffs and constraints, and device integration from prototype through scalable volume production. 
Additional customization options are explored for application-specific OEM sensors integrated into portable devices using multispectral photodiode arrays.
Scalable graphene production: perspectives and challenges of plasma applications
NASA Astrophysics Data System (ADS)
Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth
2016-05-01
Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. 
Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.
Control and design of multiple unmanned air vehicles for persistent surveillance
NASA Astrophysics Data System (ADS)
Nigam, Nikhil
Control of multiple autonomous aircraft for search and exploration is a topic of current research interest for applications such as weather monitoring, geographical surveys, search and rescue, tactical reconnaissance, and extra-terrestrial exploration, and the need to distribute sensing is driven by considerations of efficiency, reliability, cost and scalability. Hence, this problem has been extensively studied in the fields of controls and artificial intelligence. The task of persistent surveillance is different from a coverage/exploration problem, in that all areas need to be continuously searched, minimizing the time between visitations to each region in the target space. This distinction does not allow a straightforward application of most exploration techniques to the problem, although ideas from these methods can still be used. The use of aerial vehicles is motivated by their ability to cover larger spaces and their relative insensitivity to terrain. However, the dynamics of Unmanned Air Vehicles (UAVs) adds complexity to the control problem. Most of the work in the literature decouples the vehicle dynamics and control policies, but their interaction is particularly interesting for a surveillance mission. Stochastic environments and UAV failures further enrich the problem by requiring the control policies to be robust, and this aspect is particularly important for hardware implementations. For a persistent mission, it becomes imperative to consider the range/endurance constraints of the vehicles. The coupling of the control policy with the endurance constraints of the vehicles is an aspect that has not been sufficiently explored. Design of UAVs for desirable mission performance is also an issue of considerable significance. The use of a single monolithic optimization for such a problem has practical limitations, and decomposition-based design is a potential alternative. 
In this research high-level control policies are devised, that are scalable, reliable, efficient, and robust to changes in the environment. Most of the existing techniques that carry performance guarantees are not scalable or robust to changes. The scalable techniques are often heuristic in nature, resulting in lack of reliability and performance. Our policies are tested in a multi-UAV simulation environment developed for this problem, and shown to be near-optimal in spite of being completely reactive in nature. We explicitly account for the coupling between aircraft dynamics and control policies as well, and suggest modifications to improve performance under dynamic constraints. A smart refueling policy is also developed to account for limited endurance, and large performance benefits are observed. The method is based on the solution of a linear program that can be efficiently solved online in a distributed setting, unlike previous work. The Vehicle Swarm Technology Laboratory (VSTL), a hardware testbed developed at Boeing Research and Technology for evaluating swarm of UAVs, is described next and used to test the control strategy in a real-world scenario. The simplicity and robustness of the strategy allows easy implementation and near replication of the performance observed in simulation. Finally, an architecture for system-of-systems design based on Collaborative Optimization (CO) is presented. Earlier work coupling operations and design has used frameworks that make certain assumptions not valid for this problem. The efficacy of our approach is illustrated through preliminary design results, and extension to more realistic settings is also demonstrated.
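A reactive persistent-surveillance policy of the kind described can be sketched in a few lines: always revisit the region that has gone unwatched the longest, and track the worst revisit gap incurred (a hypothetical single-vehicle caricature for intuition, not the dissertation's actual multi-UAV controller, which also handles dynamics, failures, and refueling):

```python
def greedy_patrol(n_cells, steps):
    # Reactive policy: at each time step, visit the cell with the
    # largest time since its last visit; record the worst gap seen.
    last_visit = [0] * n_cells
    worst_gap = 0
    for t in range(1, steps + 1):
        target = max(range(n_cells), key=lambda c: t - last_visit[c])
        worst_gap = max(worst_gap, t - last_visit[target])
        last_visit[target] = t
    return worst_gap

# With one vehicle the policy settles into a cycle, so the worst
# revisit gap equals the number of cells being patrolled.
print(greedy_patrol(5, 50))  # 5
```

Even this toy shows why persistent surveillance differs from one-shot coverage: the objective is a steady-state revisit interval, not a single completion time, and adding vehicles (or losing one to failure) directly rescales that interval.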
Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...
2015-01-01
We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
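The ILU preconditioning compared above can be sketched in miniature with SciPy. This is not the Albany/FELIX solver stack (which builds on Trilinos), and SciPy ships no AMG; it is only a minimal illustration of ILU-preconditioned GMRES on a 1-D model problem, where the ILU factorization of a tridiagonal matrix happens to be exact:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# 1-D Poisson model problem (tridiagonal, increasingly ill-conditioned
# as n grows), standing in for the sparse systems of the ice flow solver.
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

x_plain, info_plain = gmres(A, b)          # no preconditioner

ilu = spilu(A)                             # incomplete LU factors of A
M = LinearOperator((n, n), ilu.solve)      # apply them as M ~ A^{-1}
x_ilu, info_ilu = gmres(A, b, M=M)         # preconditioned solve
# For a tridiagonal matrix the ILU factorization is exact, so the
# preconditioned iteration converges essentially immediately.
```

In realistic 3-D ice sheet problems neither factorization is exact; the paper's point is how the two preconditioners degrade differently as conditioning worsens and core counts grow.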
Page, Trevor; Dubina, Henry; Fillipi, Gabriele; Guidat, Roland; Patnaik, Saroj; Poechlauer, Peter; Shering, Phil; Guinn, Martin; Mcdonnell, Peter; Johnston, Craig
2015-03-01
This white paper focuses on equipment and analytical manufacturers' perspectives regarding the challenges of continuous pharmaceutical manufacturing across five prompt questions. In addition to valued input from several vendors, commentary was provided from experienced pharmaceutical representatives, who have installed various continuous platforms. Additionally, a small and medium enterprise (SME) perspective was obtained through interviews. A range of technical challenges is outlined, including: the presence of particles, equipment scalability, fouling (and cleaning), technology derisking, specific analytical challenges, and the general requirement of improved technical training. Equipment and analytical companies can make a significant contribution to help the introduction of continuous technology. A key point is that many of these challenges exist in batch processing and are not specific to continuous processing. Backward compatibility of software is not a continuous issue per se. In many cases, there is available learning from other industries. Business models and opportunities through outsourced development partners are also highlighted. Agile smaller companies and academic groups have a key role to play in developing skills, working collaboratively in partnerships, and focusing on solving relevant industry challenges. The precompetitive space differs for vendor companies compared with large pharmaceuticals. Currently, there is no strong consensus around a dominant continuous design, partly because of business dynamics and commercial interests. A more structured common approach to process design and hardware and software standardization would be beneficial, with initial practical steps in modeling. Conclusions include a digestible systems approach, accessible and published business cases, and increased user, academic, and supplier collaboration. This mirrors US FDA direction.
The concept of silos in pharmaceutical companies is a common theme throughout the white papers. In the equipment domain, this is equally prevalent among a broad range of companies, mainly focusing on discrete areas. As an example, the flow chemistry and secondary drug product communities are almost entirely disconnected. Control and Process Analytical Technologies (PAT) companies are active in both domains. The equipment actors are a very diverse group with a few major Original Equipment Manufacturers (OEM) players and a variety of SME, project providers, integrators, upstream downstream providers, and specialist PAT. In some cases, partnerships or alliances are formed to increase critical mass. This white paper has focused on small molecules; equipment associated with biopharmaceuticals is covered in a separate white paper. More specifics on equipment detail are provided in final dosage form and drug substance white papers. The equipment and analytical development from laboratory to pilot to production is important, with a variety of sensors and complexity reducing with scale. The importance of robust processing rather than overcomplex control strategy mitigation is important. A search of nonacademic literature highlights, with a few notable exceptions, a relative paucity of material. Much focuses on the economics and benefits of continuous, rather than specifics of equipment issues. The disruptive nature of continuous manufacturing represents either an opportunity or a threat for many companies, so the incentive to change equipment varies. Also, for many companies, the pharmaceutical sector is not actually the dominant sector in terms of sales. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.
Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui
A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as experiments on real robots, validate the effectiveness of this work.
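The loosely coupled publish-subscribe model with transport priorities that micROS-drt builds on DDS can be caricatured in a few lines. This sketch is emphatically not the DDS API (no discovery, deadlines, or reliability QoS); the `Bus` class and its methods are invented for illustration:

```python
import heapq
from collections import defaultdict
from itertools import count

class Bus:
    """Minimal publish-subscribe bus with a transport-priority queue.
    Illustrative only: real DDS middleware adds discovery, deadlines,
    reliability, and many more QoS policies."""
    def __init__(self):
        self.subs = defaultdict(list)   # topic -> list of callbacks
        self.queue = []                 # heap of (-priority, seq, topic, msg)
        self._seq = count()             # FIFO tie-break within a priority

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, msg, priority=0):
        # Publishers never see subscribers: they only name a topic.
        heapq.heappush(self.queue, (-priority, next(self._seq), topic, msg))

    def dispatch(self):
        # Deliver queued messages, highest transport priority first.
        while self.queue:
            _, _, topic, msg = heapq.heappop(self.queue)
            for cb in self.subs[topic]:
                cb(msg)
```

The decoupling is the point: publishers and subscribers only share a topic name, which is what lets such a system scale as nodes join and leave.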
BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
NASA Astrophysics Data System (ADS)
Bucay, Igal; Helal, Ahmed; Dunsky, David; Leviyev, Alex; Mallavarapu, Akhila; Sreenivasan, S. V.; Raizen, Mark
2017-04-01
Ionization of atoms and molecules is an important process in many applications such as mass spectrometry. Ionization is typically accomplished by electron bombardment, and while it is scalable to large volumes, it is also very inefficient due to the small cross section of electron-atom collisions. Photoionization methods can be highly efficient, but are not scalable due to the small ionization volume. Electric field ionization is accomplished using ultra-sharp conducting tips biased to a few kilovolts, but suffers from a low ionization volume and tip fabrication limitations. We report on our progress towards an efficient, robust, and scalable method of atomic and molecular ionization using ordered arrays of sharp, gold-doped silicon nanowires. As demonstrated in earlier work, the presence of the gold greatly enhances the ionization probability, which was attributed to an increase in available acceptor surface states. We present here a novel process used to fabricate the nanowire array, results of simulations aimed at optimizing the configuration of the array, and our progress towards demonstrating efficient and scalable ionization.
Medusa: A Scalable MR Console Using USB
Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.
2012-01-01
MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications.
Exponential series approaches for nonparametric graphical models
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
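The score-matching idea in the abstract above, fitting an unnormalized exponential family without ever computing its normalizing constant, admits a closed form in the simplest one-dimensional Gaussian case. A minimal sketch (not the thesis's pairwise MRF estimator, just the underlying scoring rule):

```python
import numpy as np

def score_match_gaussian(x):
    """Score matching for the unnormalized family p(x) ~ exp(t1*x - t2*x**2/2).
    Hyvarinen's empirical score-matching objective,
        J(t1, t2) = mean(0.5*(t1 - t2*x)**2 - t2),
    involves no normalizing constant. Setting its gradient to zero gives the
    closed form t2 = 1/Var(x) (the precision) and t1 = mean(x)/Var(x)."""
    m, s2 = x.mean(), x.var()
    return m / s2, 1.0 / s2   # (t1, t2)

rng = np.random.default_rng(0)
t1, t2 = score_match_gaussian(rng.normal(2.0, 0.5, size=100_000))
# t2 estimates the precision 1/0.5**2 = 4; t1 estimates 2/0.25 = 8.
```

The same trick, replacing the log-likelihood with a scoring rule whose optimum needs no partition function, is what makes the regularized score-matching problem in the thesis a tractable convex program.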
From field notes to data portal - An operational QA/QC framework for tower networks
Sturtevant, C.; Hackley, S.; Meehan, T.; Roberti, J. A.; Holling, G.; Bonarrigo, S.
2016-12-01
Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. This is especially so for environmental sensor networks collecting numerous high-frequency measurement streams at distributed sites. Here, the quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from the natural environment. To complicate matters, there are often multiple personnel managing different sites or different steps in the data flow. For large, centrally managed sensor networks such as NEON, the separation of field and processing duties is in the extreme. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process relying on visual inspection of the data. In addition, notes of observed measurement interference or visible problems are often recorded on paper without an explicit pathway to data flagging during processing. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. There is a need for a scalable, operational QA/QC framework that combines the efficiency and standardization of automated tests with the power and flexibility of visual checks, and includes an efficient communication pathway from field personnel to data processors to end users. Here we propose such a framework and an accompanying set of tools in development, including a mobile application template for recording tower maintenance and an R/shiny application for efficiently monitoring and synthesizing data quality issues. This framework seeks to incorporate lessons learned from the Ameriflux community and provide tools to aid continued network advancements.
Adjustable Spin-Spin Interaction with 171Yb+ ions and Addressing of a Quantum Byte
Wunderlich, Christof
2015-05-01
Trapped atomic ions are a well-advanced physical system for investigating fundamental questions of quantum physics and for quantum information science and its applications. When contemplating the scalability of trapped ions for quantum information science one notes that the use of laser light for coherent operations gives rise to technical and also physical issues that can be remedied by replacing laser light by microwave (MW) and radio-frequency (RF) radiation employing suitably modified ion traps. Magnetic gradient induced coupling (MAGIC) makes it possible to coherently manipulate trapped ions using exclusively MW and RF radiation. After introducing the general concept of MAGIC, I shall report on recent experimental progress using 171Yb+ ions, confined in a suitable Paul trap, as effective spin-1/2 systems interacting via MAGIC. Entangling gates between non-neighbouring ions will be presented. The spin-spin coupling strength is variable and can be adjusted by variation of the secular trap frequency. In general, executing a quantum gate with a single qubit, or a subset of qubits, affects the quantum states of all other qubits. This reduced fidelity of the whole quantum register may preclude scalability. We demonstrate addressing of individual qubits within a quantum byte (eight qubits interacting via MAGIC) using MW radiation and measure the error induced in all non-addressed qubits (cross-talk) associated with the application of single-qubit gates. The measured cross-talk is of order 10⁻⁵ and therefore below the threshold commonly agreed sufficient to efficiently realize fault-tolerant quantum computing. Furthermore, experimental results on continuous and pulsed dynamical decoupling (DD) for protecting quantum memories and quantum gates against decoherence will be briefly discussed. Finally, I report on using continuous DD to realize a broadband ultrasensitive single-atom magnetometer.
Wang, Wei; Ruiz, Isaac; Lee, Ilkeun; Zaera, Francisco; Ozkan, Mihrimah; Ozkan, Cengiz S.
2015-04-01
Optimization of the electrode/electrolyte double-layer interface is a key factor for improving electrode performance of aqueous electrolyte based supercapacitors (SCs). Here, we report the improved functionality of carbon materials via a non-invasive, high-throughput, and inexpensive UV generated ozone (UV-ozone) treatment. This process allows precise tuning of the graphene and carbon nanotube hybrid foam (GM) transitionally from ultrahydrophobic to hydrophilic within 60 s. The continuous tuning of surface energy can be controlled by simply varying the UV-ozone exposure time, while the ozone-oxidized carbon nanostructure maintains its integrity. Symmetric SCs based on the UV-ozone treated GM foam demonstrated enhanced rate performance. This technique can be readily applied to other CVD-grown carbonaceous materials by taking advantage of its ease of processing, low cost, scalability, and controllability. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr06795a
The Scalable Checkpoint/Restart Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, A.
The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
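The caching strategy described, writing every checkpoint to fast node-local storage and touching the shared parallel file system only occasionally, can be mimicked in a toy model. This is not the real SCR API; the class, method, and directory names below are invented for illustration:

```python
import os
import shutil
import tempfile

class LocalCheckpointCache:
    """Toy model of the caching idea: every checkpoint goes to fast
    node-local storage; only every `flush_every`-th one is copied to
    the (slow, contended) parallel file system. Names are invented."""
    def __init__(self, local_dir, pfs_dir, flush_every=4):
        self.local, self.pfs, self.every = local_dir, pfs_dir, flush_every
        self.count = 0
        os.makedirs(local_dir, exist_ok=True)
        os.makedirs(pfs_dir, exist_ok=True)

    def checkpoint(self, name, payload):
        path = os.path.join(self.local, name)
        with open(path, 'wb') as f:       # fast local write, every time
            f.write(payload)
        self.count += 1
        if self.count % self.every == 0:  # occasional flush to the PFS
            shutil.copy(path, os.path.join(self.pfs, name))
        return path

local, pfs = tempfile.mkdtemp(), tempfile.mkdtemp()
cache = LocalCheckpointCache(local, pfs, flush_every=4)
for i in range(4):
    cache.checkpoint('ckpt_%d.bin' % i, b'application state')
```

Because local writes never contend for the shared file system, aggregate checkpoint bandwidth grows with node count, which is the scaling behavior the abstract reports.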
Foam separation of Rhodamine-G and Evans Blue using a simple separatory bottle system.
Dasarathy, Dhweeja; Ito, Yoichiro
2017-09-29
A simple separatory glass bottle was used to improve separation effectiveness and cost efficiency while simultaneously creating a simpler system for separating biological compounds. Additionally, it was important to develop a scalable separation method so this would be applicable to both analytical and preparative separations. Compared to conventional foam separation methods, this method easily forms stable dry foam, which ensures high purity of yielded fractions. A negatively charged surfactant, sodium dodecyl sulfate (SDS), was used as the ligand to carry the positively charged Rhodamine-G, leaving the negatively charged Evans Blue in the bottle. The performance of the separatory bottle was tested for separating Rhodamine-G from Evans Blue with sample sizes ranging from 1 to 12 mg in preparative separations and 1-20 μg in analytical separations under optimum conditions. These conditions, including N2 gas pressure, spinning speed of contents with a magnetic stirrer, concentration of the ligand, volume of the solvent, and concentration of the sample, were all modified and optimized. Based on the calculations at their peak absorbances, Rhodamine-G and Evans Blue were efficiently separated in times ranging from 1 h to 3 h, depending on sample volume. Optimal conditions were found to be 60 psi N2 pressure and 2 mM SDS for the affinity ligand. This novel separation method will allow for rapid separation of biological compounds while simultaneously being scalable and cost effective. Published by Elsevier B.V.
Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation
Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal
2017-01-01
This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods.
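The information-driven measurement allocation described above, integrating only the most informative measurements each iteration under a resource bound, can be illustrated with a scalar-variance toy model. The gain formula below is the standard Kalman variance reduction for a direct scalar observation, not necessarily the paper's exact SEIF criterion:

```python
def select_measurements(variances, noise, k):
    """Greedy information-driven allocation (an illustration, not the
    paper's exact criterion). A direct scalar observation with noise
    variance R of a landmark with prior variance P reduces the posterior
    variance by P**2 / (P + R); keep only the k most informative
    candidate measurements, honoring the per-iteration resource bound."""
    gains = [(p * p / (p + noise), i) for i, p in enumerate(variances)]
    gains.sort(reverse=True)                 # most informative first
    return sorted(i for _, i in gains[:k])   # indices to integrate
```

For example, with landmark variances [4.0, 0.1, 9.0, 1.0], unit measurement noise, and a budget of k = 2, the most uncertain landmarks (indices 0 and 2) are selected, while well-localized ones are skipped to save bandwidth and computation.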
Overview on new diode lasers for defense applications
NASA Astrophysics Data System (ADS)
Neukum, Joerg
2012-11-01
Diode lasers have a broad wavelength range, from the visible to beyond 2.2 μm. This allows for various applications in the defense sector, ranging from classic pumping of DPSSL in range finders or target designators, up to pumping directed energy weapons in the 50+ kW range. Also direct diode applications for illumination above 1.55 μm, or direct IR countermeasures are of interest. Here an overview is given on some new wavelengths and applications which are recently under discussion. In this overview the following aspects are reviewed:
• High Power CW pumps at 808 / 880 / 940 nm
• Pumps for DPAL - Diode Pumped Alkali Lasers
• High Power Diode Lasers in the range < 1.0 μm
• Scalable Mini-Bar concept for high brightness fiber coupled modules
• The Light Weight Fiber Coupled module based on the Mini-Bar concept
Overall, High Power Diode Lasers offer many ways to be used in new applications in the defense market.
Lubricant-infused nanoparticulate coatings assembled by layer-by-layer deposition
Sunny, Steffi; Vogel, Nicolas; Howell, Caitlin; ...
2014-09-01
Omniphobic coatings are designed to repel a wide range of liquids without leaving stains on the surface. A practical coating should exhibit stable repellency, show no interference with color or transparency of the underlying substrate and, ideally, be deposited in a simple process on arbitrarily shaped surfaces. We use layer-by-layer (LbL) deposition of negatively charged silica nanoparticles and positively charged polyelectrolytes to create nanoscale surface structures that are further surface-functionalized with fluorinated silanes and infiltrated with fluorinated oil, forming a smooth, highly repellent coating on surfaces of different materials and shapes. We show that four or more LbL cycles introduce sufficient surface roughness to effectively immobilize the lubricant into the nanoporous coating and provide a stable liquid interface that repels water, low-surface-tension liquids and complex fluids. The absence of hierarchical structures and the small size of the silica nanoparticles enables complete transparency of the coating, with light transmittance exceeding that of normal glass. The coating is mechanically robust, maintains its repellency after exposure to continuous flow for several days and prevents adsorption of streptavidin as a model protein. As a result, the LbL process is conceptually simple, of low cost, environmentally benign, scalable, automatable and therefore may present an efficient synthetic route to non-fouling materials.
Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things
Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet
2017-01-01
As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things. PMID:28696393
An Architectural Concept for Intrusion Tolerance in Air Traffic Networks
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Miner, Paul S.
2003-01-01
The goal of an intrusion tolerant network is to continue to provide predictable and reliable communication in the presence of a limited number of compromised network components. The behavior of a compromised network component ranges from a node that no longer responds to a node that is under the control of a malicious entity that is actively trying to cause other nodes to fail. Most current data communication networks do not include support for tolerating unconstrained misbehavior of components in the network. However, the fault tolerance community has developed protocols that provide both predictable and reliable communication in the presence of the worst possible behavior of a limited number of nodes in the system. One may view a malicious entity in a communication network as a node that has failed and is behaving in an arbitrary manner. NASA/Langley Research Center has developed one such fault-tolerant computing platform called SPIDER (Scalable Processor-Independent Design for Electromagnetic Resilience). The protocols and interconnection mechanisms of SPIDER may be adapted to large-scale, distributed communication networks such as would be required for future Air Traffic Management systems. The predictability and reliability guarantees provided by the SPIDER protocols have been formally verified. This analysis can be readily adapted to similar network structures.
Self-assembly of highly efficient, broadband plasmonic absorbers for solar steam generation.
Zhou, Lin; Tan, Yingling; Ji, Dengxin; Zhu, Bin; Zhang, Pei; Xu, Jun; Gan, Qiaoqiang; Yu, Zongfu; Zhu, Jia
2016-04-01
The study of ideal absorbers, which can efficiently absorb light over a broad range of wavelengths, is of fundamental importance, as well as critical for many applications from solar steam generation and thermophotovoltaics to light/thermal detectors. As a result of recent advances in plasmonics, plasmonic absorbers have attracted a lot of attention. However, the performance and scalability of these absorbers, predominantly fabricated by the top-down approach, need to be further improved to enable widespread applications. We report a plasmonic absorber which can enable an average measured absorbance of ~99% across the wavelengths from 400 nm to 10 μm, the most efficient and broadband plasmonic absorber reported to date. The absorber is fabricated through self-assembly of metallic nanoparticles onto a nanoporous template by a one-step deposition process. Because of its efficient light absorption, strong field enhancement, and porous structures, which together enable not only efficient solar absorption but also significant local heating and continuous stream flow, plasmonic absorber-based solar steam generation has over 90% efficiency under solar irradiation of only 4-sun intensity (4 kW m⁻²). The pronounced light absorption effect coupled with the high-throughput self-assembly process could lead toward large-scale manufacturing of other nanophotonic structures and devices.
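The reported >90% steam-generation efficiency at 4-sun irradiation can be sanity-checked with a simple energy balance: the vapor enthalpy flux divided by the incident solar flux. The evaporation rate used below is an assumed illustrative value, not a figure from the paper:

```python
# Back-of-the-envelope solar-steam efficiency check, assuming the latent
# heat of vaporization of water (~2.26 MJ/kg) and the 4 kW/m^2 (4-sun)
# flux quoted in the abstract.
H_LV = 2.26e6  # J/kg, latent heat of vaporization of water

def steam_efficiency(evap_rate_kg_per_m2_h, flux_w_per_m2):
    """Fraction of incident solar power converted into vapor enthalpy."""
    m_dot = evap_rate_kg_per_m2_h / 3600.0   # kg m^-2 s^-1
    return m_dot * H_LV / flux_w_per_m2

# 5.8 kg m^-2 h^-1 is an assumed, plausible rate for a 4-sun absorber.
eta = steam_efficiency(5.8, 4000.0)
```

An assumed rate of 5.8 kg m⁻² h⁻¹ at 4 kW m⁻² gives roughly 91%, consistent with the "over 90%" figure in the abstract.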
Lubricant-Infused Nanoparticulate Coatings Assembled by Layer-by-Layer Deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunny, S; Vogel, N; Howell, C
2014-09-01
Omniphobic coatings are designed to repel a wide range of liquids without leaving stains on the surface. A practical coating should exhibit stable repellency, show no interference with color or transparency of the underlying substrate and, ideally, be deposited in a simple process on arbitrarily shaped surfaces. We use layer-by-layer (LbL) deposition of negatively charged silica nanoparticles and positively charged polyelectrolytes to create nanoscale surface structures that are further surface-functionalized with fluorinated silanes and infiltrated with fluorinated oil, forming a smooth, highly repellent coating on surfaces of different materials and shapes. We show that four or more LbL cycles introduce sufficient surface roughness to effectively immobilize the lubricant into the nanoporous coating and provide a stable liquid interface that repels water, low-surface-tension liquids and complex fluids. The absence of hierarchical structures and the small size of the silica nanoparticles enables complete transparency of the coating, with light transmittance exceeding that of normal glass. The coating is mechanically robust, maintains its repellency after exposure to continuous flow for several days and prevents adsorption of streptavidin as a model protein. The LbL process is conceptually simple, of low cost, environmentally benign, scalable, automatable and therefore may present an efficient synthetic route to non-fouling materials.
Walczak, Karl A.; Segev, Gideon; Larson, David M.; ...
2017-02-17
Safe and practical solar-driven hydrogen generators must be capable of efficient and stable operation under diurnal cycling with full separation of gaseous H₂ and O₂ products. In this paper, a novel architecture that fulfills all of these requirements is presented. The approach is inherently scalable and provides versatility for operation under diverse electrolyte and lighting conditions. The concept is validated using a 1 cm² triple-junction photovoltaic cell with its illuminated photocathode protected by a composite coating comprising an organic encapsulant with an embedded catalytic support. The device is compatible with operation under conditions ranging from 1 M H₂SO₄ to 1 M KOH, enabling flexibility in selection of semiconductor, electrolyte, membrane, and catalyst. Stable operation at a solar-to-hydrogen conversion efficiency of >10% is demonstrated under continuous operation, as well as under diurnal light cycling for at least 4 d, with simulated sunlight. Operational characteristics are validated by extended time outdoor testing. A membrane ensures products are separated, with nonexplosive gas streams generated for both alkaline and acidic systems. Finally, analysis of operational characteristics under different lighting conditions is enabled by comparison of a device model to experimental data.
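The >10% solar-to-hydrogen figure follows from the standard STH definition: operating current density times the 1.23 V thermodynamic water-splitting potential, divided by the incident power. The operating current density used below is an assumed illustrative value, not the device's measured number:

```python
# Standard solar-to-hydrogen (STH) efficiency definition, assuming
# 1-sun illumination (100 mW/cm^2) and no product crossover losses.
E_WATER_SPLITTING = 1.23  # V, thermodynamic potential for water splitting

def sth_efficiency(j_op_ma_cm2, p_in_mw_cm2=100.0):
    """STH efficiency from operating current density (mA/cm^2)."""
    return j_op_ma_cm2 * E_WATER_SPLITTING / p_in_mw_cm2

# An assumed operating point of 9 mA/cm^2 gives ~11% STH, consistent
# with the >10% efficiency reported in the abstract.
eta = sth_efficiency(9.0)
```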
Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things.
Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet; Hoebeke, Jeroen
2017-07-11
As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.
Self-assembly of highly efficient, broadband plasmonic absorbers for solar steam generation
Zhou, Lin; Tan, Yingling; Ji, Dengxin; Zhu, Bin; Zhang, Pei; Xu, Jun; Gan, Qiaoqiang; Yu, Zongfu; Zhu, Jia
2016-01-01
The study of ideal absorbers, which can efficiently absorb light over a broad range of wavelengths, is of fundamental importance, as well as critical for many applications from solar steam generation and thermophotovoltaics to light/thermal detectors. As a result of recent advances in plasmonics, plasmonic absorbers have attracted a lot of attention. However, the performance and scalability of these absorbers, predominantly fabricated by the top-down approach, need to be further improved to enable widespread applications. We report a plasmonic absorber which can enable an average measured absorbance of ~99% across the wavelengths from 400 nm to 10 μm, the most efficient and broadband plasmonic absorber reported to date. The absorber is fabricated through self-assembly of metallic nanoparticles onto a nanoporous template by a one-step deposition process. Because of its efficient light absorption, strong field enhancement, and porous structures, which together enable not only efficient solar absorption but also significant local heating and continuous stream flow, plasmonic absorber–based solar steam generation has over 90% efficiency under solar irradiation of only 4-sun intensity (4 kW m−2). The pronounced light absorption effect coupled with the high-throughput self-assembly process could lead toward large-scale manufacturing of other nanophotonic structures and devices. PMID:27152335
Fluri, David A.; Tonge, Peter D.; Song, Hannah; Baptista, Ricardo P.; Shakiba, Nika; Shukla, Shreya; Clarke, Geoffrey; Nagy, Andras; Zandstra, Peter W.
2016-01-01
We demonstrate derivation of induced pluripotent stem cells (iPSCs) from terminally differentiated mouse cells in serum- and feeder-free stirred suspension cultures. Temporal analysis of global gene expression revealed high correlations between cells reprogrammed in suspension and cells reprogrammed in adhesion-dependent conditions. Suspension (S) reprogrammed iPSCs (SiPSCs) could be differentiated into all three germ layers in vitro and contributed to chimeric embryos in vivo. SiPSC generation allowed for efficient selection of reprogramming factor expressing cells based on their differential survival and proliferation in suspension. Seamless integration of SiPSC reprogramming and directed differentiation enabled the scalable production of functionally and phenotypically defined cardiac cells in a continuous single cell- and small aggregate-based process. This method is an important step towards the development of a robust PSC generation, expansion and differentiation technology. PMID:22447133
Characterization of MoS2-Graphene Composites for High-Performance Coin Cell Supercapacitors.
Bissett, Mark A; Kinloch, Ian A; Dryfe, Robert A W
2015-08-12
Two-dimensional materials, such as graphene and molybdenum disulfide (MoS2), can greatly increase the performance of electrochemical energy storage devices because of the combination of high surface area and electrical conductivity. Here, we have investigated the performance of solution exfoliated MoS2 thin flexible membranes as supercapacitor electrodes in a symmetrical coin cell arrangement using an aqueous electrolyte (Na2SO4). By adding highly conductive graphene to form nanocomposite membranes, it was possible to increase the specific capacitance by reducing the resistivity of the electrode and altering the morphology of the membrane. With continued charge/discharge cycles the performance of the membranes was found to increase significantly (up to 800%), because of partial re-exfoliation of the layered material with continued ion intercalation, as well as increasing the specific capacitance through intercalation pseudocapacitance. These results demonstrate a simple and scalable application of layered 2D materials toward electrochemical energy storage.
Numerical simulations of electrohydrodynamic evolution of thin polymer films
NASA Astrophysics Data System (ADS)
Borglum, Joshua Christopher
Recently developed needleless electrospinning and electrolithography are two successful techniques that have been utilized extensively for low-cost, scalable, and continuous nano-fabrication. Rational understanding of the electrohydrodynamic principles underneath these nano-manufacturing methods is crucial to fabrication of continuous nanofibers and patterned thin films. This research project formulates robust, high-efficiency finite-difference Fourier spectral methods to simulate the electrohydrodynamic evolution of thin polymer films. Two thin-film models were considered and refined: the first was based on reduced lubrication theory; the second further took into account the effect of solvent drying and dewetting of the substrate. A Fast Fourier Transform (FFT) based spectral method was integrated into the finite-difference algorithms for fast, accurate solution of the governing nonlinear partial differential equations. The present methods have been used to examine the dependencies of the evolving surface features of the thin films upon the model parameters. The present study can be used for fast, controllable nanofabrication.
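The FFT-spectral idea can be illustrated on a linearized thin-film equation h_t = -∇⁴h, which an integrating factor solves exactly in Fourier space. The grid, time step, and the linearization itself are illustrative assumptions; the project's actual models are nonlinear (h³ mobility, electrostatic, drying, and dewetting terms) and mix finite differences with the spectral step:

```python
import numpy as np

# Evolve a small perturbation on a flat film under h_t = -h_xxxx,
# treated exactly in Fourier space via an integrating factor.
N, L, dt, steps = 64, 2 * np.pi, 1e-4, 100
dx = L / N
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # integer wavenumbers for L = 2*pi

h = 1.0 + 0.01 * np.cos(3 * x)            # flat film + mode-3 perturbation
h_hat = np.fft.fft(h)
decay = np.exp(-k**4 * dt * steps)        # exact decay factor per mode
h_final = np.real(np.fft.ifft(h_hat * decay))
```

The mode-3 amplitude decays by exp(-3⁴ · 0.01) ≈ 0.44 over the run while the mean film height is conserved, which is the behavior a finite-difference scheme would only approximate; this exactness per linear mode is the appeal of the spectral treatment.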
Pharmaceutical spray drying: solid-dose process technology platform for the 21st century.
Snyder, Herman E
2012-07-01
Requirements for precise control of solid-dosage particle properties created with a scalable process technology are continuing to expand in the pharmaceutical industry. Alternate methods of drug delivery, limited active drug substance solubility and the need to improve drug product stability under room-temperature conditions are some of the pharmaceutical applications that can benefit from spray-drying technology. Used widely for decades in other industries with production rates up to several tons per hour, pharmaceutical uses for spray drying are expanding beyond excipient production and solvent removal from crystalline material. Creation of active pharmaceutical-ingredient particles with combinations of unique target properties is now more common. This review of spray-drying technology fundamentals provides a brief perspective on the internal process 'mechanics', which combine with both the liquid and solid properties of a formulation to enable high-throughput, continuous manufacturing of precision powder properties.
Self-assembled fibre optoelectronics with discrete translational symmetry
Rein, Michael; Levy, Etgar; Gumennik, Alexander; Abouraddy, Ayman F.; Joannopoulos, John; Fink, Yoel
2016-01-01
Fibres with electronic and photonic properties are essential building blocks for functional fabrics with system level attributes. The scalability of thermal fibre drawing approach offers access to large device quantities, while constraining the devices to be translational symmetric. Lifting this symmetry to create discrete devices in fibres will increase their utility. Here, we draw, from a macroscopic preform, fibres that have three parallel internal non-contacting continuous domains; a semiconducting glass between two conductors. We then heat the fibre and generate a capillary fluid instability, resulting in the selective transformation of the cylindrical semiconducting domain into discrete spheres while keeping the conductive domains unchanged. The cylindrical-to-spherical expansion bridges the continuous conducting domains to create ∼10⁴ self-assembled, electrically contacted and entirely packaged discrete spherical devices per metre of fibre. The photodetection and Mie resonance dependent response are measured by illuminating the fibre while connecting its ends to an electrical readout. PMID:27698454
Self-assembled fibre optoelectronics with discrete translational symmetry.
Rein, Michael; Levy, Etgar; Gumennik, Alexander; Abouraddy, Ayman F; Joannopoulos, John; Fink, Yoel
2016-10-04
Fibres with electronic and photonic properties are essential building blocks for functional fabrics with system level attributes. The scalability of thermal fibre drawing approach offers access to large device quantities, while constraining the devices to be translational symmetric. Lifting this symmetry to create discrete devices in fibres will increase their utility. Here, we draw, from a macroscopic preform, fibres that have three parallel internal non-contacting continuous domains; a semiconducting glass between two conductors. We then heat the fibre and generate a capillary fluid instability, resulting in the selective transformation of the cylindrical semiconducting domain into discrete spheres while keeping the conductive domains unchanged. The cylindrical-to-spherical expansion bridges the continuous conducting domains to create ∼10⁴ self-assembled, electrically contacted and entirely packaged discrete spherical devices per metre of fibre. The photodetection and Mie resonance dependent response are measured by illuminating the fibre while connecting its ends to an electrical readout.
Cotton-textile-enabled flexible self-sustaining power packs via roll-to-roll fabrication
Gao, Zan; Bumgardner, Clifton; Song, Ningning; Zhang, Yunya; Li, Jingjing; Li, Xiaodong
2016-01-01
With rising energy concerns, efficient energy conversion and storage devices are required to provide a sustainable, green energy supply. Solar cells hold promise as energy conversion devices due to their utilization of readily accessible solar energy; however, the output of solar cells can be non-continuous and unstable. Therefore, it is necessary to combine solar cells with compatible energy storage devices to realize a stable power supply. To this end, supercapacitors, highly efficient energy storage devices, can be integrated with solar cells to mitigate the power fluctuations. Here, we report on the development of a solar cell-supercapacitor hybrid device as a solution to this energy requirement. A high-performance, cotton-textile-enabled asymmetric supercapacitor is integrated with a flexible solar cell via a scalable roll-to-roll manufacturing approach to fabricate a self-sustaining power pack, demonstrating its potential to continuously power future electronic devices. PMID:27189776
Mao, Yiyin; Li, Junwei; Cao, Wei; Ying, Yulong; Sun, Luwei; Peng, Xinsheng
2014-03-26
The scalable fabrication of continuous and defect-free metal-organic framework (MOF) films on the surface of polymeric hollow fibers, departing from ceramic supported or dense composite membranes, is a huge challenge. The critical step is to reduce the growth temperature of MOFs in aqueous or ethanol solvents. In the present work, a pressure-assisted room temperature growth strategy was carried out to fabricate continuous and well-intergrown HKUST-1 films on a polymer hollow fiber by using solid copper hydroxide nanostrands as the copper source within 40 min. These HKUST-1 films/polyvinylidenefluoride (PVDF) hollow fiber composite membranes exhibit good separation performance for binary gases with selectivity 116% higher than Knudsen values via both inside-out and outside-in modes. This provides a new way to enable scale-up preparation of HKUST-1/polymer hollow fiber membranes, due to its superior economic and ecological advantages.
A boundedness result for the direct heuristic dynamic programming.
Liu, Feng; Sun, Jian; Si, Jennie; Guo, Wentao; Mei, Shengwei
2012-08-01
Approximate/adaptive dynamic programming (ADP) has been studied extensively in recent years for its potential scalability to solve large state and control space problems, including those involving continuous states and continuous controls. The applicability of ADP algorithms, especially the adaptive critic designs, has been demonstrated in several case studies. Direct heuristic dynamic programming (direct HDP) is one of the ADP algorithms inspired by the adaptive critic designs. It has been shown applicable to industrial scale, realistic and complex control problems. In this paper, we provide a uniform ultimate boundedness (UUB) result for the direct HDP learning controller under mild and intuitive conditions. By using a Lyapunov approach we show that the estimation errors of the learning parameters or the weights in the action and critic networks remain UUB. This result provides a useful controller convergence guarantee for the first time for the direct HDP design. Copyright © 2012 Elsevier Ltd. All rights reserved.
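The actor-critic structure behind direct HDP can be sketched on a toy scalar linear system: a critic J(x) = w·x² is trained toward the one-step cost target, and an actor u = -v·x is trained to drive the critic's value at the next state toward zero. Everything below (the plant, quadratic cost, learning rates, initial weights) is an illustrative assumption, not the paper's neural-network design or its UUB analysis:

```python
# Toy direct-HDP-style actor-critic on x' = a*x + b*u with cost r = x^2 + u^2.
gamma, lr_c, lr_a = 0.95, 0.01, 0.01
a, b = 1.1, 1.0          # open-loop unstable plant (|a| > 1)
w, v = 1.0, 1.0          # critic J(x) = w*x^2, actor u = -v*x

for _ in range(3000):
    x = 1.0                          # train on one-step transitions from x = 1
    u = -v * x
    x_next = a * x + b * u
    r = x * x + u * u
    # Critic: gradient descent on the squared Bellman residual.
    td = r + gamma * w * x_next**2 - w * x**2
    w -= lr_c * td * (gamma * x_next**2 - x**2)
    # Actor: gradient descent on the critic's value at the next state.
    v += lr_a * 2.0 * w * b * x * x_next
```

In this toy the actor weight converges to v = a (deadbeat control, closed loop x' = 0) and the critic weight to w = 1 + v², so the learned parameters indeed stay bounded, which is the flavor of guarantee the paper proves for the full neural-network design.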
Integrating distributed multimedia systems and interactive television networks
NASA Astrophysics Data System (ADS)
Shvartsman, Alex A.
1996-01-01
Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor is offering a complete end-to-end commercially-deployable and scalable interactive multimedia applications systems over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome lack of standards, and deal with unpredictable development cycles and quality problems of leading-edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever-present challenge of integrating and deploying expensive systems whose scalability and extensibility is limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.
Regional Mapping of Plantation Extent Using Multisensor Imagery
NASA Astrophysics Data System (ADS)
Torbick, N.; Ledoux, L.; Hagen, S.; Salas, W.
2016-12-01
Industrial forest plantations are expanding rapidly across the tropics and monitoring extent is critical for understanding environmental and socioeconomic impacts. In this study, new multisensor imagery were evaluated and integrated to extract the strengths of each sensor for mapping plantation extent at regional scales. Three distinctly different landscapes with multiple plantation types were chosen to consider scalability and transferability: Tanintharyi, Myanmar; West Kalimantan, Indonesia; and southern Ghana. Landsat-8 Operational Land Imager (OLI), Phased Array L-band Synthetic Aperture Radar-2 (PALSAR-2), and Sentinel-1A images were fused within a Classification and Regression Tree (CART) framework using random forest and high-resolution surveys. Multi-criteria evaluations showed that both L- and C-band gamma nought (γ°) backscatter in decibels (dB), Landsat reflectance (ρλ), and texture indices were useful for distinguishing oil palm and rubber plantations from other land types. The classification approach identified 750,822 ha or 23% of Tanintharyi, Myanmar, and 216,086 ha or 25% of western West Kalimantan as plantation with very high cross-validation accuracy. The mapping approach was scalable and transferred well across the different geographies and plantation types. As archives for Sentinel-1, Landsat-8, and PALSAR-2 continue to grow, mapping plantation extent and dynamics at moderate resolution over large regions should be feasible.
Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jian; Hamidouche, Khaled; Zheng, Jie
2015-08-05
Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm through taking advantage of scalable programming models. To improve the performance of k-NN on large-scale environment with InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systemic evaluation and analysis on typical workloads. The hybrid designs leverage the one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for small workload with balanced communication and computation. Experiments of running with varied number of cores show that our design can maintain good scalability.
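The serial kernel being parallelized above is plain k-NN classification: compute distances from the query to all training points, take the k nearest, and vote. The sketch below shows only that kernel with made-up toy data; the paper's contribution is distributing exactly this distance/vote step across MPI+OpenSHMEM ranks with one-sided memory access:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query point by majority vote of its k nearest training
    points, using squared Euclidean distance."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy training set: two well-separated clusters.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.9, 5.1), "b")]
label = knn_classify(train, (4.8, 5.0), k=3)   # -> "b"
```

In a distributed design, each rank holds a shard of `train`, computes its local k nearest, and the partial candidate lists are merged before the vote; the one-sided PGAS variant lets ranks fetch remote candidates without pairwise synchronization.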
Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process
Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping
2016-01-01
Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
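The virtual-database idea above can be sketched with plain inverse-distance weighting: crowdsourced RSSI samples are interpolated onto a uniform grid of virtual reference points, and a query reading is matched to the closest virtual fingerprint. This is an illustrative stand-in for the paper's MID/LGP formulation, with assumed sample positions and RSSI values:

```python
def idw_rssi(samples, rp, power=2, eps=1e-9):
    """Inverse-distance-weighted RSSI estimate at a virtual reference point."""
    num = den = 0.0
    for (x, y), rssi in samples:
        d2 = (x - rp[0]) ** 2 + (y - rp[1]) ** 2 + eps
        w = 1.0 / d2 ** (power / 2)
        num += w * rssi
        den += w
    return num / den

# Two crowdsourced samples along a corridor (positions in m, RSSI in dBm).
samples = [((0, 0), -40.0), ((10, 0), -70.0)]
grid = [(x, 0) for x in range(0, 11, 5)]          # virtual RPs at x = 0, 5, 10
virtual_db = {rp: idw_rssi(samples, rp) for rp in grid}

def locate(rssi, db):
    """Match a query reading to the best virtual fingerprint."""
    return min(db, key=lambda rp: abs(db[rp] - rssi))

pos = locate(-55.0, virtual_db)   # midpoint reading maps to the RP at (5, 0)
```

A Gaussian process, as in the paper, replaces the fixed inverse-distance kernel with a learned spatial correlation and also yields a variance per virtual RP, which the Bayesian location estimator can exploit.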
Scalability of an endoluminal spring for distraction enterogenesis.
Rouch, Joshua D; Huynh, Nhan; Scott, Andrew; Chiang, Elvin; Wu, Benjamin M; Shekherdimian, Shant; Dunn, James C Y
2016-12-01
Techniques of distraction enterogenesis have been explored to provide increased intestinal length to treat short bowel syndrome (SBS). Self-expanding, polycaprolactone (PCL) springs have been shown to lengthen bowel in small animal models. Their feasibility in larger animal models is a critical step before clinical use. Juvenile mini-Yucatan pigs underwent jejunal isolation or blind ending Roux-en-y jejunojejunostomy with insertion of either a PCL spring or a sham PCL tube. Extrapolated from our spring characteristics in rodents, proportional increases in spring constant and size were made for porcine intestine. Jejunal segments with 7 mm springs with k between 9 and 15 N/m demonstrated significantly increased lengthening in isolated segment and Roux-en-y models. Complications were noted in only two animals, both using high spring constants (k > 17 N/m). Histologically, lengthened segments in the isolated and Roux models demonstrated significantly increased muscularis thickness and crypt depth. Restoration of lengthened, isolated segments back into continuity was technically feasible after 6 weeks. Self-expanding, endoluminal PCL springs, which exert up to 0.6 N of force, safely achieve significant intestinal lengthening in a translatable, large-animal model. These spring characteristics may provide a scalable model for the treatment of SBS in children. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
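Parallel efficiency in scalability studies of this kind is conventionally the speedup divided by the number of workers; a minimal sketch (the timings below are hypothetical):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, workers):
    """Parallel efficiency E = S / p; 1.0 means ideal scaling."""
    return speedup(t_serial, t_parallel) / workers

# Hypothetical: a 100 s serial run finishing in 25 s on 5 workers
# scales at 80% efficiency.
e = efficiency(100.0, 25.0, 5)
```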
Design of an H.264/SVC resilient watermarking scheme
NASA Astrophysics Data System (ADS)
Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter
2010-01-01
The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
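A common concrete form of the space-filling-curve ordering mentioned for scalable I/O is the Morton (Z-order) curve; a sketch of 3-D bit interleaving follows (the abstract does not specify which curve the applications use, so this is illustrative):

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of 3-D integer coordinates into one Z-order
    (Morton) key; sorting by key keeps spatially nearby cells close."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)      # x -> bit positions 0, 3, 6, ...
        key |= ((y >> i) & 1) << (3 * i + 1)  # y -> bit positions 1, 4, 7, ...
        key |= ((z >> i) & 1) << (3 * i + 2)  # z -> bit positions 2, 5, 8, ...
    return key

# Cells sorted by Morton key can be streamed to disk in a
# locality-preserving order before compression.
cells = sorted([(1, 1, 1), (0, 0, 1), (1, 0, 0)], key=lambda c: morton3d(*c))
```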
SeqPig: simple and scalable scripting for large sequencing data sets in Hadoop
Schumacher, André; Pireddu, Luca; Niemenmaa, Matti; Kallio, Aleksi; Korpelainen, Eija; Zanetti, Gianluigi; Heljanko, Keijo
2014-01-01
Summary: Hadoop MapReduce-based approaches have become increasingly popular due to their scalability in processing large sequencing datasets. However, as these methods typically require in-depth expertise in Hadoop and Java, they are still out of reach of many bioinformaticians. To solve this problem, we have created SeqPig, a library and a collection of tools to manipulate, analyze and query sequencing datasets in a scalable and simple manner. SeqPig scripts use the Hadoop-based distributed scripting engine Apache Pig, which automatically parallelizes and distributes data processing tasks. We demonstrate SeqPig’s scalability over many computing nodes and illustrate its use with example scripts. Availability and Implementation: Available under the open source MIT license at http://sourceforge.net/projects/seqpig/ Contact: andre.schumacher@yahoo.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24149054
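Pig scripts such as SeqPig's compile down to MapReduce jobs; the map/reduce pattern they generate can be sketched in plain Python (the record layout and chromosome names below are invented for illustration):

```python
from collections import Counter
from functools import reduce

def map_phase(records):
    """Map/combine step: per-chromosome read counts for one input split.
    Records are hypothetical (chromosome, position) pairs."""
    return Counter(chrom for chrom, _pos in records)

def reduce_phase(partials):
    """Reduce step: merge the partial counts from every split."""
    return reduce(lambda a, b: a + b, partials, Counter())

split1 = map_phase([("chr1", 100), ("chr2", 200)])
split2 = map_phase([("chr1", 150)])
totals = reduce_phase([split1, split2])
```

In SeqPig the same aggregation is expressed declaratively in Pig Latin, and Pig handles the parallelization and data distribution automatically.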
Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.
Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B
2017-07-01
This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The new knee is 3D printed out of a carbon-fiber and nylon composite and has a gear-mesh coupling with a hard-stop, weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia for a unique fit for each user. The transfemoral amputee tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data collected showed distinct differences in gait dynamics. These data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that overall gait asymmetry was higher on the Ossur Total Knee than on the anatomically scalable knee. The scalable knee produced higher peak knee flexion, which caused a large step-time asymmetry. This made walking on the scalable knee more strenuous due to the compensatory movements needed to adapt to the different dynamics, which can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.
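CGAM combines several normalized gait parameters into one score; a single-parameter symmetry index of the kind such metrics build on can be sketched as follows (an illustrative simplification, not the CGAM formula):

```python
def asymmetry(left, right):
    """Normalized asymmetry of one gait parameter (e.g. step time):
    0 when the two sides match, approaching 2 as one side dominates."""
    return abs(left - right) / ((left + right) / 2.0)

# Hypothetical step times (s): a symmetric gait scores 0,
# a 3:1 imbalance scores 1.0.
symmetric = asymmetry(0.55, 0.55)
imbalanced = asymmetry(3.0, 1.0)
```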
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
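The MLMC estimator referenced above sums sample means of level differences; a minimal sketch with a caller-supplied sampler (the interface is invented for illustration, and real samplers would draw from the hierarchy of discretizations):

```python
def mlmc_estimate(sampler, levels, samples_per_level):
    """Multilevel Monte Carlo: E[Q_L] is estimated as the sum over levels
    of sample means of the level differences Y_l = Q_l - Q_{l-1}
    (with Q_{-1} = 0). `sampler(level)` returns one draw of Y_level."""
    estimate = 0.0
    for level, n in zip(levels, samples_per_level):
        estimate += sum(sampler(level) for _ in range(n)) / n
    return estimate

# Deterministic toy sampler whose level differences halve each level,
# so the telescoping sum converges toward 2.
toy = lambda level: 1.0 / 2 ** level
value = mlmc_estimate(toy, [0, 1, 2], [4, 4, 4])
```

The variance of Y_l typically decays with level, so most samples are taken on the cheap coarse levels; that is the source of MLMC's cost advantage over plain Monte Carlo.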
LAGRANGE: LAser GRavitational-wave ANtenna in GEodetic Orbit
NASA Astrophysics Data System (ADS)
Buchman, S.; Conklin, J. W.; Balakrishnan, K.; Aguero, V.; Alfauwaz, A.; Aljadaan, A.; Almajed, M.; Altwaijry, H.; Saud, T. A.; Byer, R. L.; Bower, K.; Costello, B.; Cutler, G. D.; DeBra, D. B.; Faied, D. M.; Foster, C.; Genova, A. L.; Hanson, J.; Hooper, K.; Hultgren, E.; Klavins, A.; Lantz, B.; Lipa, J. A.; Palmer, A.; Plante, B.; Sanchez, H. S.; Saraf, S.; Schaechter, D.; Shu, K.; Smith, E.; Tenerelli, D.; Vanbezooijen, R.; Vasudevan, G.; Williams, S. D.; Worden, S. P.; Zhou, J.; Zoellner, A.
2013-01-01
We describe a new space gravitational wave observatory design called LAGRANGE that maintains all important LISA science at about half the cost and with reduced technical risk. It consists of three drag-free spacecraft in a geocentric formation. Fixed antennas allow continuous contact with the Earth, solving the problem of communications bandwidth and latency. A 70 mm diameter sphere with a 35 mm gap to its enclosure serves as the single inertial reference per spacecraft, operating in “true” drag-free mode (no test mass forcing). Other advantages are: a simple caging design based on the DISCOS 1972 drag-free mission, an all-optical read-out with pm fine and nm coarse sensors, and the extensive technology heritage from the Honeywell gyroscopes and the DISCOS and Gravity Probe B drag-free sensors. An Interferometric Measurement System, designed with reflective optics and a highly stabilized frequency standard, performs the ranging between test masses and requires a single optical bench with one laser per spacecraft. Two 20 cm diameter telescopes per spacecraft, each with in-field pointing, incorporate novel technology developed for advanced optical systems by Lockheed Martin, who also designed the spacecraft based on a multi-flight proven bus structure. Additional technological advancements include updated drag-free propulsion, thermal control, charge management systems, and materials. LAGRANGE subsystems are designed to be scalable and modular, making them interchangeable with those of LISA or other gravitational science missions. We plan to space qualify critical technologies on small and nano satellite flights, with the first launch (UV-LED Sat) in 2013.
The advanced photovoltaic solar array program
NASA Technical Reports Server (NTRS)
Kurland, R. M.; Stella, Paul M.
1989-01-01
The background and development status of an ultralightweight, flexible-blanket, flatpack, fold-out solar array is presented. It is scheduled for prototype demonstration in late 1989. The Advanced Photovoltaic Solar Array (APSA) design represents a critical intermediate milestone toward the goal of 300 W/kg at beginning-of-life (BOL), with specific performance characteristics of 130 W/kg (BOL) and 100 W/kg at end-of-life (EOL) for a 10-year, 10-kW (BOL) geostationary Earth orbit space power system. The APSA wing design is scalable over a power range of 2 to 15 kW and is suitable for a full range of missions, including low Earth orbit (LEO), orbital transfer from LEO to geostationary Earth orbit, and interplanetary flight.
Combined plasma gas-phase synthesis and colloidal processing of InP/ZnS core/shell nanocrystals.
Gresback, Ryan; Hue, Ryan; Gladfelter, Wayne L; Kortshagen, Uwe R
2011-01-12
Indium phosphide nanocrystals (InP NCs) with diameters ranging from 2 to 5 nm were synthesized with a scalable, flow-through, nonthermal plasma process at a rate ranging from 10 to 40 mg/h. The NC size is controlled through the plasma operating parameters, with the residence time of the gas in the plasma region strongly influencing the NC size. The NC size distribution is narrow with the standard deviation being less than 20% of the mean NC size. Zinc sulfide (ZnS) shells were grown around the plasma-synthesized InP NCs in a liquid phase reaction. Photoluminescence with quantum yields as high as 15% were observed for the InP/ZnS core-shell NCs.
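The stated size criterion (standard deviation below 20% of the mean) is straightforward to check; a sketch with invented size samples:

```python
import statistics

def is_narrow(sizes_nm, limit=0.20):
    """True when the sample standard deviation is below `limit`
    (20% by default) of the mean particle size."""
    return statistics.stdev(sizes_nm) / statistics.mean(sizes_nm) < limit

# Hypothetical InP NC diameters (nm): a tight batch passes, a broad one fails.
tight = is_narrow([3.0, 3.2, 2.8])
broad = is_narrow([3.0, 5.0, 1.0])
```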
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, C
2009-11-12
In FY09 they will (1) complete the implementation, verification, calibration, and sensitivity and scalability analysis of the in-cell virus replication model; (2) complete the design of the cell culture (cell-to-cell infection) model; (3) continue the research, design, and development of their bioinformatics tools: the Web-based structure-alignment-based sequence variability tool and the functional annotation of the genome database; (4) collaborate with the University of California at San Francisco on areas of common interest; and (5) submit journal articles that describe the in-cell model with simulations and the bioinformatics approaches to evaluation of genome variability and fitness.
Extraordinary Corrosion Protection from Polymer-Clay Nanobrick Wall Thin Films.
Schindelholz, Eric J; Spoerke, Erik D; Nguyen, Hai-Duy; Grunlan, Jaime C; Qin, Shuang; Bufford, Daniel C
2018-06-20
Metals across all industries demand anticorrosion surface treatments and drive a continual need for high-performing and low-cost coatings. Here we demonstrate polymer-clay nanocomposite thin films as a new class of transparent conformal barrier coatings for protection in corrosive atmospheres. Films assembled via layer-by-layer deposition, as thin as 90 nm, are shown to reduce copper corrosion rates by >1000× in an aggressive H2S atmosphere. These multilayer nanobrick wall coatings hold promise as high-performing anticorrosion treatment alternatives to costlier, more toxic, and less scalable thin films, such as graphene, hexavalent chromium, or atomic-layer-deposited metal oxides.
2012-03-09
Within each element Ω_e, a finite-dimensional approximation q_N is formed by expanding q(x, t) in basis functions ψ_j(x) such that q_N^(e)(x, t) = Σ_{j=1}^{M_N} ψ_j(x) q_j^(e)(t) (14), where M_N = (N + 1)^3 is the number of nodes per element, N is the order of the basis functions, and the superscript (e) denotes the element.
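The nodal expansion in equation (14) is a direct sum over basis functions; a minimal sketch, with linear hat functions standing in for the order-N spectral element basis:

```python
def eval_expansion(coeffs, basis, x):
    """Evaluate q_N(x) = sum_j psi_j(x) * q_j within one element."""
    return sum(q_j * psi(x) for q_j, psi in zip(coeffs, basis))

# Linear 'hat' functions on [0, 1] stand in for the order-N nodal basis;
# coefficients are the nodal values, so the expansion interpolates them.
hat_basis = [lambda x: 1.0 - x, lambda x: x]
q_mid = eval_expansion([2.0, 4.0], hat_basis, 0.5)  # midpoint value
```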
Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications
NASA Astrophysics Data System (ADS)
Watanabe, M.; Nakamura, A.; Kunii, A.; Kusano, K.; Futagawa, M.
2015-12-01
A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized solar cell (DSC) as a low-cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by cell area sizing. Second, we evaluated the dependence of the input power scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle, because harvested environmental input power is unstable. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.
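"Scalable output power by cell area sizing" amounts to output scaling linearly with cell area at a fixed conversion efficiency; a sketch with invented numbers (the irradiance and efficiency below are hypothetical, not measurements from the paper):

```python
def harvested_power_w(irradiance_w_m2, area_m2, efficiency):
    """Harvester output in watts: power scales linearly with cell area
    at a fixed conversion efficiency."""
    return irradiance_w_m2 * area_m2 * efficiency

# Hypothetical indoor lighting (~5 W/m2) on a 10 cm x 10 cm cell
# at an assumed 5% efficiency yields 2.5 mW.
p = harvested_power_w(5.0, 0.01, 0.05)
```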
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
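A polynomial chaos expansion of the kind these solvers discretize represents a random quantity as a series in orthogonal polynomials of the input random variables; a minimal one-dimensional sketch using probabilists' Hermite polynomials (the coefficients are invented):

```python
def hermite(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_{k+1}(x) = x * He_k(x) - k * He_{k-1}(x)."""
    h_prev, h_curr = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, x * h_curr - k * h_prev
    return h_curr

def pce_eval(coeffs, xi):
    """Evaluate a 1-D polynomial chaos expansion at a standard-normal
    sample xi: u(xi) = sum_i c_i * He_i(xi)."""
    return sum(c * hermite(i, xi) for i, c in enumerate(coeffs))

# Hypothetical two-term expansion: mean 1.0 plus a linear fluctuation.
u = pce_eval([1.0, 0.5], 2.0)
```

In the intrusive setting, the unknown spatial fields carry such expansions at every node, which is why spatial and stochastic resolution multiply into systems with billions of unknowns.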
Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.
Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth
2018-01-01
Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.
Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.
Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David
2017-04-12
Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
Privacy-Aware Location Database Service for Granular Queries
NASA Astrophysics Data System (ADS)
Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide
Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.
Decerns: A framework for multi-criteria decision analysis
Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; ...
2015-02-27
A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical risk management problems is introduced. The Decerns framework contains a library of modules that are the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well known MCDA methods and original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on the analysis of a multicriteria location problem.
Open system environment procurement
NASA Technical Reports Server (NTRS)
Fisher, Gary
1994-01-01
Relationships between the request for procurement (RFP) process and open system environment (OSE) standards are described. A guide was prepared to help Federal agency personnel overcome problems in writing an adequate statement of work and developing realistic evaluation criteria when transitioning to an OSE. The guide contains appropriate decision points and transition strategies for developing applications that are affordable, scalable and interoperable across a broad range of computing environments. While useful, the guide does not eliminate the requirement that agencies possess in-depth expertise in software development, communications, and database technology in order to evaluate open systems.
A thermophone on porous polymeric substrate
NASA Astrophysics Data System (ADS)
Chitnis, G.; Kim, A.; Song, S. H.; Jessop, A. M.; Bolton, J. S.; Ziaie, B.
2012-07-01
In this Letter, we present a simple, low-temperature method for fabricating a wide-band (>80 kHz) thermo-acoustic sound generator on a porous polymeric substrate. We were able to achieve up to 80 dB of sound pressure level with an input power of 0.511 W. No significant surface temperature increase was observed in the device even at an input power level of 2.5 W. Wide-band ultrasonic performance, simplicity of structure, and scalability of the fabrication process make this device suitable for many ranging and imaging applications.
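Sound-level figures like the 80 dB reported above compare acoustic powers on a logarithmic scale; a sketch of the standard decibel relation (the power values in the comment are hypothetical):

```python
import math

def level_gain_db(p2_watts, p1_watts):
    """Change in sound level (dB) when acoustic power goes from P1 to P2:
    delta_L = 10 * log10(P2 / P1)."""
    return 10 * math.log10(p2_watts / p1_watts)

# A hundredfold power increase raises the level by 20 dB;
# doubling the power adds about 3 dB.
gain = level_gain_db(100.0, 1.0)
```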
Editorial: from plant biotechnology to bio-based products.
Stöger, Eva
2013-10-01
From plant biotechnology to bio-based products - this Special Issue of Biotechnology Journal is dedicated to plant biotechnology and is edited by Prof. Eva Stöger (University of Natural Resources and Life Sciences, Vienna, Austria). The Special Issue covers a wide range of topics in plant biotechnology, including metabolic engineering of biosynthesis pathways in plants; taking advantage of the scalability of the plant system for the production of innovative materials; as well as the regulatory challenges and societal acceptance of plant biotechnology. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Biotechnological synthesis of functional nanomaterials.
Lloyd, Jonathan R; Byrne, James M; Coker, Victoria S
2011-08-01
Biological systems, especially those using microorganisms, have the potential to offer cheap, scalable and highly tunable green synthetic routes for the production of the latest generation of nanomaterials. Recent advances in the biotechnological synthesis of functional nano-scale materials are described. These nanomaterials range from catalysts to novel inorganic antimicrobials, nanomagnets, remediation agents and quantum dots for electronic and optical devices. Where possible, the roles of key biological macromolecules in controlling production of the nanomaterials are highlighted, and also technological limitations that must be addressed for widespread implementation are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Nanotechnology Presentation Agenda
NASA Technical Reports Server (NTRS)
2005-01-01
Working at the atomic, molecular and supra-molecular levels, in the length scale of approximately 1-100 nm, in order to understand, create and use materials, devices and systems with fundamentally new properties and functions because of their small structure. The NNI definition encourages new contributions that were not possible before: novel phenomena, properties and functions at the nanoscale, which are non-scalable outside of the nm domain; the ability to measure, control and manipulate matter at the nanoscale in order to change those properties and functions; and integration along length scales and fields of application.
NASA Astrophysics Data System (ADS)
Shahzad, Muhammad A.
1999-02-01
With the emergence of data warehousing, decision support systems have advanced considerably. At the core of these warehousing systems lies a good database management system. A database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing and integration with other servers. Oracle, an early entrant among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews the concept of data warehousing and, lastly, the features of Oracle servers for implementing a data warehouse.
Scalable and continuous fabrication of bio-inspired dry adhesives with a thermosetting polymer.
Lee, Sung Ho; Kim, Sung Woo; Kang, Bong Su; Chang, Pahn-Shick; Kwak, Moon Kyu
2018-04-04
Many research groups have developed unique micro/nano-structured dry adhesives by mimicking the foot of the gecko using molding methods. Through these previous works, polydimethylsiloxane (PDMS) has become the most commonly used material for making artificial dry adhesives. The material properties of PDMS are well suited for making dry adhesives, such as conformal contact with almost zero preload, low elastic modulus for stickiness, and easy cleaning owing to low surface energy. From a performance point of view, dry adhesives made with PDMS can be highly advantageous but are limited by low productivity, as production takes an average of approximately two hours. Given the low productivity of PDMS, some research groups have developed dry adhesives using UV-curable materials, which are capable of continuous roll-to-roll production processes. However, UV-curable materials were too rigid to produce good adhesion. Thus, we established a PDMS continuous-production system to achieve good productivity and adhesion performance. We designed a thermal roll-imprinting lithography (TRL) system for the continuous production of PDMS microstructures by shortening the curing time through control of the curing temperature (the production speed is up to 150 mm min-1). Dry adhesives composed of PDMS were fabricated continuously via the TRL system.
Gobalasingham, Nemal S; Carlé, Jon E; Krebs, Frederik C; Thompson, Barry C; Bundgaard, Eva; Helgesen, Martin
2017-11-01
Continuous flow methods are utilized in conjunction with direct arylation polymerization (DArP) for the scaled synthesis of the roll-to-roll compatible polymer, poly[(2,5-bis(2-hexyldecyloxy)phenylene)-alt-(4,7-di(thiophen-2-yl)-benzo[c][1,2,5]thiadiazole)] (PPDTBT). PPDTBT is based on simple, inexpensive, and scalable monomers using thienyl-flanked benzothiadiazole as the acceptor, which is the first β-unprotected substrate to be used in continuous flow via DArP, enabling critical evaluation of the suitability of this emerging synthetic method for minimizing defects and for the scaled synthesis of high-performance materials. To demonstrate the usefulness of the method, DArP-prepared PPDTBT via continuous flow synthesis is employed for the preparation of indium tin oxide (ITO)-free and flexible roll-coated solar cells, achieving a power conversion efficiency of 3.5% for 1 cm² devices, comparable to the performance of PPDTBT polymerized through Stille cross-coupling. These efforts demonstrate the distinct advantages of the continuous flow protocol with DArP: it avoids the use of toxic tin chemicals, reduces the costs associated with polymer upscaling, and minimizes batch-to-batch variations for high-quality material.
Providing scalable system software for high-end simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, D.
1997-12-31
Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
Vapor-fed bio-hybrid fuel cell.
Benyamin, Marcus S; Jahnke, Justin P; Mackie, David M
2017-01-01
Concentration and purification of ethanol and other biofuels from fermentations are energy-intensive processes, with amplified costs at smaller scales. To circumvent the need for these processes, and to potentially reduce transportation costs as well, we have previously investigated bio-hybrid fuel cells (FCs), in which a fermentation and FC are closely coupled. However, long-term operation requires strictly preventing the fermentation and FC from harming each other. We introduce here the concept of the vapor-fed bio-hybrid FC as a means of continuously extracting power from ongoing fermentations at ambient conditions. By bubbling a carrier gas (N2) through a yeast fermentation and then through a direct ethanol FC, we protect the FC anode from the catalyst poisons in the fermentation (which are non-volatile), and also protect the yeast from harmful FC products (notably acetic acid) and from build-up of ethanol. Since vapor-fed direct ethanol FCs at ambient conditions have never been systematically characterized (in contrast to vapor-fed direct methanol FCs), we first assess the effects on output power and conversion efficiency of ethanol concentration, vapor flow rate, and FC voltage. The results fit a continuous stirred-tank reactor model. Over a wide range of ethanol partial pressures (2-8 mmHg), power densities are comparable to those for liquid-fed direct ethanol FCs at the same temperature, with power densities >2 mW/cm² obtained. We then demonstrate the continuous operation of a vapor-fed bio-hybrid FC with fermentation for 5 months, with no indication of performance degradation due to poisoning (of either the FC or the fermentation). It is further shown that the system is stable, recovering quickly from disturbances or from interruptions in maintenance. The vapor-fed bio-hybrid FC enables extraction of power from dilute bio-ethanol streams without costly concentration and purification steps.
The concept should be scalable to both large and small operations and should be generalizable to other biofuels and waste-to-energy systems.
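The continuous stirred-tank reactor fit mentioned in the abstract above can be illustrated with a minimal steady-state balance. This is a hedged sketch under our own assumptions: the first-order effective ethanol consumption rate k_eff, the function name, and the parameter values are illustrative, not the paper's fitted model or data.

```python
# Minimal CSTR sketch for a vapor-fed anode chamber (illustrative assumptions):
# a carrier-gas stream delivers ethanol vapor at concentration c_in with
# volumetric flow q; the fuel cell consumes ethanol at an effective
# first-order rate k_eff * c in a well-mixed volume.
# Steady-state balance: q * (c_in - c_out) = k_eff * volume * c_out.

def cstr_steady_state(c_in, q, volume, k_eff):
    """Return outlet concentration and fractional conversion at steady state."""
    c_out = c_in * q / (q + k_eff * volume)
    conversion = 1.0 - c_out / c_in
    return c_out, conversion
```

In this toy balance, increasing the vapor flow rate q raises the outlet concentration but lowers the conversion per pass, mirroring the trade-off between output power and conversion efficiency that the study characterizes.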
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for High Performance Systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful optimization consideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).
A novel processing platform for post tape out flows
NASA Astrophysics Data System (ADS)
Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.
2018-03-01
As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.
Scalable architecture for a room temperature solid-state quantum information processor.
Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D
2012-04-24
The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.
Scalable free energy calculation of proteins via multiscale essential sampling
NASA Astrophysics Data System (ADS)
Moritsugu, Kei; Terada, Tohru; Kidera, Akinori
2010-12-01
A multiscale simulation method, "multiscale essential sampling (MSES)," is proposed for calculating free energy surface of proteins in a sizable dimensional space with good scalability. In MSES, the configurational sampling of a full-dimensional model is enhanced by coupling with the accelerated dynamics of the essential degrees of freedom. Applying the Hamiltonian exchange method to MSES can remove the biasing potential from the coupling term, deriving the free energy surface of the essential degrees of freedom. The form of the coupling term ensures good scalability in the Hamiltonian exchange. As a test application, the free energy surface of the folding process of a miniprotein, chignolin, was calculated in the continuum solvent model. Results agreed with the free energy surface derived from the multicanonical simulation. Significantly improved scalability with the MSES method was clearly shown in the free energy calculation of chignolin in explicit solvent, which was achieved without increasing the number of replicas in the Hamiltonian exchange.
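The coupling construction described above can be written schematically. This is a hedged sketch of the general MSES form in our own notation (the projection χ onto the essential subspace and the coupling constant k are generic symbols, not necessarily the paper's):

```latex
H_{\mathrm{MSES}}(\mathbf{r},\mathbf{R})
  = H_{\mathrm{MM}}(\mathbf{r})            % full-dimensional model
  + H_{\mathrm{CG}}(\mathbf{R})            % accelerated essential degrees of freedom
  + \frac{k}{2}\,\bigl\lVert \chi(\mathbf{r}) - \mathbf{R} \bigr\rVert^{2}  % harmonic coupling
```

Hamiltonian exchange between replicas with successively smaller k, down to k = 0, removes the biasing coupling term, so the unbiased free energy surface of the essential degrees of freedom can be recovered from the k = 0 replica.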
An Efficient, Scalable and Robust P2P Overlay for Autonomic Communication
NASA Astrophysics Data System (ADS)
Li, Deng; Liu, Hui; Vasilakos, Athanasios
The term Autonomic Communication (AC) refers to self-managing systems which are capable of supporting self-configuration, self-healing and self-optimization. However, information reflection and collection, lack of centralized control, non-cooperation and so on are just some of the challenges within AC systems. Since many self-* properties (e.g. self-configuration, self-optimization, self-healing, and self-protecting) are achieved by a group of autonomous entities that coordinate in a peer-to-peer (P2P) fashion, the door has been opened to migrating research techniques from P2P systems. P2P's meaning can be better understood through a set of key characteristics similar to those of AC: decentralized organization, self-organizing nature (i.e. adaptability), resource sharing and aggregation, and fault-tolerance. However, not all P2P systems are compatible with AC. Unstructured systems are designed more specifically than structured systems for the heterogeneous Internet environment, where nodes' persistence and availability are not guaranteed. Motivated by the challenges in AC and based on comprehensive analysis of popular P2P applications, three correlative standards for evaluating the compatibility of a P2P system with AC are presented in this chapter. According to these standards, a novel Efficient, Scalable and Robust (ESR) P2P overlay is proposed. Differing from current structured and unstructured, or meshed and tree-like, P2P overlays, the ESR is a new three-dimensional structure that improves routing efficiency, while information exchange involves only immediate neighbors with local information, making the system scalable and fault-tolerant. Furthermore, rather than complex game theory or an incentive mechanism, a simple but effective punishment mechanism is presented, based on a new ID structure which guarantees the continuity of each node's record in order to discourage negative behavior in an autonomous environment such as AC.
Integrated Metamaterials and Nanophotonics in CMOS-Compatible Materials
NASA Astrophysics Data System (ADS)
Reshef, Orad
This thesis explores scalable nanophotonic devices in integrated, CMOS-compatible platforms. Our investigation focuses on two main projects: studying the material properties of integrated titanium dioxide (TiO2), and studying integrated metamaterials in silicon-on-insulator (SOI) technologies. We first describe the nanofabrication process for TiO2 photonic integrated circuits. We use this procedure to demonstrate polycrystalline anatase TiO2 ring resonators with high quality factors. We measure the thermo-optic coefficient of TiO2 and determine that it is negative, a unique property among CMOS-compatible dielectric photonic platforms. We also derive a transfer function for ring resonators in the presence of reflections and demonstrate using full-wave simulations that these reflections produce asymmetries in the resonances. For the second half of the dissertation, we design and demonstrate an SOI-based photonic-Dirac-cone metamaterial. Using a prism composed of this metamaterial, we measure its index of refraction and unambiguously determine that it is zero. Next, we take a single channel of this metamaterial to form a waveguide. Using interferometry, we independently confirm that the waveguide in this configuration preserves the dispersion profile of the aggregate medium, with a zero phase advance. We also characterize the waveguide, determining its propagation loss. Finally, we perform simulations to study nonlinear optical phenomena in zero-index media. We find that an isotropic refractive index near zero relaxes certain phase-matching constraints, allowing for more flexible configurations of nonlinear devices with dramatically reduced footprints. The outcomes of this work enable higher quality fabrication of scalable nanophotonic devices for use in nonlinear applications with passive temperature compensation. These devices are CMOS-compatible and can be integrated vertically for compact, device-dense industrial applications. 
It also provides access to a versatile, scalable and integrated medium with a refractive index that can be continuously engineered between n = -0.20 and n = +0.50. This opens the door to applications in high-precision interferometry, sensing, quantum information technologies and compact nonlinear applications.
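For context, the symmetric baseline against which those reflection-induced asymmetries appear is the textbook all-pass ring resonator transfer function. This is the standard reflection-free expression, not the thesis's derived reflection-aware transfer function; here t is the self-coupling coefficient, a the round-trip amplitude transmission, and φ the round-trip phase:

```latex
T(\phi) = \left| \frac{t - a\,e^{i\phi}}{1 - t a\,e^{i\phi}} \right|^{2}
        = \frac{t^{2} - 2 t a \cos\phi + a^{2}}{1 - 2 t a \cos\phi + (t a)^{2}}
```

Because T depends on φ only through cos φ, the resonance dips are symmetric about each resonance; parasitic reflections add interference terms that break this symmetry, producing the asymmetric line shapes discussed above.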
Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data
NASA Astrophysics Data System (ADS)
Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.
2014-09-01
The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. 
Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the baseline implementation running on the same machine. We will present an overview of the algorithms and results that demonstrate the scalability of our concepts.
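The spatial-hash filtering idea can be sketched in a few lines. This is a hedged illustration of the general technique, not the CAOS-D plug-in: the function name, the uniform cubic binning, and the single-epoch positions are our assumptions, and a real filter would index ephemeris samples across the whole analysis window.

```python
# Coarse conjunction filter via a spatial hash (illustrative sketch).
# Objects are binned into cubic cells of side cell_size in O(N); only objects
# in the same or an adjacent cell can be closer than cell_size, so all other
# pairs are discarded without any O(N^2) distance computations.
from collections import defaultdict
from itertools import combinations

def candidate_pairs(positions, cell_size):
    """positions: {object_id: (x, y, z)}; returns id pairs surviving the filter."""
    grid = defaultdict(list)
    for obj_id, (x, y, z) in positions.items():
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(obj_id)

    pairs = set()
    for (cx, cy, cz), members in grid.items():
        # Collect occupants of this cell and its 26 neighbours.
        nearby = set(members)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nearby.update(grid.get((cx + dx, cy + dy, cz + dz), ()))
        for a, b in combinations(sorted(nearby), 2):
            pairs.add((a, b))
    return pairs
```

Only the pairs returned here would proceed to high-fidelity conjunction analysis; the rest are filtered out using linear memory, reflecting the linear-memory-for-quadratic-time trade described above.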
Schmideder, Andreas; Severin, Timm Steffen; Cremer, Johannes Heinrich; Weuster-Botz, Dirk
2015-09-20
A pH-controlled parallel stirred-tank bioreactor system was modified for parallel continuous cultivation on a 10 mL-scale by connecting multichannel peristaltic pumps for feeding and medium removal with micro-pipes (250 μm inner diameter). Parallel chemostat processes with Escherichia coli as an example showed high reproducibility with regard to culture volume and flow rates as well as dry cell weight, dissolved oxygen concentration and pH control at steady states (n=8, coefficient of variation <5%). Reliable estimation of kinetic growth parameters of E. coli was easily achieved within one parallel experiment by preselecting ten different steady states. Scalability of milliliter-scale steady state results was demonstrated by chemostat studies with a stirred-tank bioreactor on a liter-scale. Thus, parallel and continuously operated stirred-tank bioreactors on a milliliter-scale facilitate time-saving and cost-reducing steady state studies with microorganisms. The applied continuous bioreactor system overcomes the drawbacks of existing miniaturized bioreactors, like poor mass transfer and insufficient process control.
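The steady-state parameter estimation mentioned above can be sketched as follows. In a chemostat at steady state the specific growth rate equals the dilution rate D; assuming Monod kinetics (a common choice, though the paper's exact growth model is not stated here), D = mu_max * S / (K_s + S), and a Lineweaver-Burk linearization reduces estimating mu_max and K_s to a least-squares line fit over the preselected steady states. Function and variable names are illustrative.

```python
# Estimate Monod parameters from chemostat steady states (illustrative sketch).
# At steady state mu = D, so with Monod kinetics D = mu_max * S / (K_s + S).
# Linearizing: 1/D = (K_s / mu_max) * (1/S) + 1/mu_max, a line in 1/S.

def fit_monod(dilution_rates, substrate_concs):
    """Least-squares fit of 1/D against 1/S; returns (mu_max, K_s)."""
    xs = [1.0 / s for s in substrate_concs]   # 1/S
    ys = [1.0 / d for d in dilution_rates]    # 1/D
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    mu_max = 1.0 / intercept
    k_s = slope * mu_max
    return mu_max, k_s
```

With ten steady states per parallel run, a single experiment supplies enough (D, S) pairs for a well-conditioned fit, which is what makes the milliliter-scale parallel chemostats time-saving.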
Control and Measurement of an Xmon with the Quantum Socket
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Bejanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Rinehart, J. R.; Weides, M.; Mariantoni, M.
The implementation of superconducting quantum processors is rapidly reaching scalability limitations. Extensible electronics and wiring solutions for superconducting quantum bits (qubits) are among the most imminent issues to be tackled. The necessity to substitute planar electrical interconnects (e.g., wire bonds) with three-dimensional wires is emerging as a fundamental pillar towards scalability. In a previous work, we have shown that three-dimensional wires housed in a suitable package, named the quantum socket, can be utilized to measure high-quality superconducting resonators. In this work, we set out to test the quantum socket with actual superconducting qubits to verify its suitability as a wiring solution in the development of an extensible quantum computing architecture. To this end, we have designed and fabricated a series of Xmon qubits. The qubits range in frequency from about 6 to 7 GHz with anharmonicity of 200 MHz and can be tuned by means of Z pulses. Controlling tunable Xmons will allow us to verify whether the three-dimensional wires' contact resistance is low enough for qubit operation. Qubit T1 and T2 times and single-qubit gate fidelities are compared against current standards in the field.
Multi-Layer Approach for the Detection of Selective Forwarding Attacks
Alajmi, Naser; Elleithy, Khaled
2015-01-01
Security breaches are a major threat in wireless sensor networks (WSNs). WSNs are increasingly used due to their broad range of important applications in both military and civilian domains. WSNs are prone to several types of security attacks. Sensor nodes have limited capacities and are often deployed in dangerous locations; therefore, they are vulnerable to different types of attacks, including wormhole, sinkhole, and selective forwarding attacks. Security attacks are classified as data traffic and routing attacks. These security attacks could affect the most significant applications of WSNs, namely, military surveillance, traffic monitoring, and healthcare. Therefore, there are different approaches to detecting security attacks on the network layer in WSNs. Reliability, energy efficiency, and scalability are strong constraints on sensor nodes that affect the security of WSNs. Because sensor nodes have limited capabilities in most of these areas, selective forwarding attacks cannot be easily detected in networks. In this paper, we propose an approach to selective forwarding detection (SFD). The approach has three layers: MAC pool IDs, rule-based processing, and anomaly detection. It maintains the safety of data transmission between a source node and base station while detecting selective forwarding attacks. Furthermore, the approach is reliable, energy efficient, and scalable. PMID:26610499
The up-scaling of ecosystem functions in a heterogeneous world
NASA Astrophysics Data System (ADS)
Lohrer, Andrew M.; Thrush, Simon F.; Hewitt, Judi E.; Kraan, Casper
2015-05-01
Earth is in the midst of a biodiversity crisis that is impacting the functioning of ecosystems and the delivery of valued goods and services. However, the implications of large scale species losses are often inferred from small scale ecosystem functioning experiments with little knowledge of how the dominant drivers of functioning shift across scales. Here, by integrating observational and manipulative experimental field data, we reveal scale-dependent influences on primary productivity in shallow marine habitats, thus demonstrating the scalability of complex ecological relationships contributing to coastal marine ecosystem functioning. Positive effects of key consumers (burrowing urchins, Echinocardium cordatum) on seafloor net primary productivity (NPP) elucidated by short-term, single-site experiments persisted across multiple sites and years. Additional experimentation illustrated how these effects amplified over time, resulting in greater primary producer biomass (sediment chlorophyll a content, Chla) in the longer term, depending on climatic context and habitat factors affecting the strengths of mutually reinforcing feedbacks. The remarkable coherence of results from small and large scales is evidence of real-world ecosystem function scalability and ecological self-organisation. This discovery provides greater insights into the range of responses to broad-scale anthropogenic stressors in naturally heterogeneous environmental settings.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lillibridge, Sean; Navarro, Moses
2011-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested, namely MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lilibridge, Sean T.; Navarro, Moses
2012-01-01
Freezable Radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested: MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
Solvent-Based Synthesis of Nano-Bi0.85Sb0.15 for Low-Temperature Thermoelectric Applications
NASA Astrophysics Data System (ADS)
Kaspar, K.; Fritsch, K.; Habicht, K.; Willenberg, B.; Hillebrecht, H.
2017-01-01
In this study we show a preparation method for nanostructured Bi0.85Sb0.15 powders via a chemical reduction route in a polyol medium, yielding material with particle sizes of 20-150 nm in scalable amounts. The powders were consolidated by spark plasma sintering (SPS) in order to maintain the nanostructure. To investigate the influence of the sintering process, the powders were characterized by x-ray diffraction (XRD), energy dispersive x-ray spectroscopy (EDX), and scanning electron microscopy (SEM) measurements before and after SPS. Transport properties, the Seebeck effect, and thermal conductivity were determined in the low-temperature range below 300 K. The samples showed excellent thermal conductivity of 2.3-2.6 W/(m·K) at 300 K and Seebeck coefficients from -97 μV/K to -107 μV/K at 300 K, with a maximum of -141 μV/K at 110 K, leading to ZT values of up to 0.31 at room temperature. The results show that Bi-Sb alloys are promising materials for low-temperature applications. Our wet-chemical approach gives access to scalable amounts of nano-material with increased homogeneity and good thermoelectric properties after SPS.
A molecular quantum spin network controlled by a single qubit.
Schlipf, Lukas; Oeckinghaus, Thomas; Xu, Kebiao; Dasari, Durga Bhaktavatsala Rao; Zappe, Andrea; de Oliveira, Felipe Fávaro; Kern, Bastian; Azarkh, Mykhailo; Drescher, Malte; Ternes, Markus; Kern, Klaus; Wrachtrup, Jörg; Finkler, Amit
2017-08-01
Scalable quantum technologies require an unprecedented combination of precision and complexity for designing stable structures of well-controllable quantum systems on the nanoscale. It is a challenging task to find a suitable elementary building block from which a quantum network can be composed in a scalable way. We present the working principle of such a basic unit, engineered using molecular chemistry, whose collective control and readout are executed using a nitrogen vacancy (NV) center in diamond. The basic unit we investigate is a synthetic polyproline with electron spins localized on attached molecular side groups separated by a few nanometers. We demonstrate the collective readout and coherent manipulation of very few (≤ 6) of these S = 1/2 electronic spin systems and access their direct dipolar coupling tensor. Our results show that it is feasible to use spin-labeled peptides as a resource for a molecular qubit-based network, while at the same time providing simple optical readout of single quantum states through NV magnetometry. This work lays the foundation for building arbitrary quantum networks using well-established chemistry methods, which has many applications ranging from mapping distances in single molecules to quantum information processing.
A scalable and flexible hybrid energy storage system design and implementation
NASA Astrophysics Data System (ADS)
Kim, Younghyun; Koh, Jason; Xie, Qing; Wang, Yanzhi; Chang, Naehyuck; Pedram, Massoud
2014-06-01
Energy storage systems (ESS) are becoming one of the most important components that noticeably change overall system performance in various applications, ranging from the power grid infrastructure to electric vehicles (EV) and portable electronics. However, a homogeneous ESS is subject to limited characteristics in terms of cost, efficiency, lifetime, etc., by the energy storage technology that comprises the ESS. On the other hand, hybrid ESS (HESS) are a viable solution for a practical ESS with currently available technologies as they have potential to overcome such limitations by exploiting only advantages of heterogeneous energy storage technologies while hiding their drawbacks. However, the HESS concept basically mandates sophisticated design and control to actually make the benefits happen. The HESS architecture should be able to provide controllability of many parts, which are often fixed in homogeneous ESS, and novel management policies should be able to utilize the control features. This paper introduces a complete design practice of a HESS prototype to demonstrate scalability, flexibility, and energy efficiency. It is composed of three heterogeneous energy storage elements: lead-acid batteries, lithium-ion batteries, and supercapacitors. We demonstrate a novel system control methodology and enhanced energy efficiency through this design practice.
Map of Life - A Dashboard for Monitoring Planetary Species Distributions
NASA Astrophysics Data System (ADS)
Jetz, W.
2016-12-01
Geographic information about biodiversity is vital for understanding the many services nature provides and their potential changes, yet it remains unreliable and often insufficient. By integrating a wide range of knowledge about species distributions and their dynamics over time, Map of Life supports global biodiversity education, monitoring, research and decision-making. Built on a scalable web platform geared for large biodiversity and environmental data, Map of Life provides species range information globally and species lists for any area. With data and technology provided by NASA and Google Earth Engine, tools under development use remote sensing-based environmental layers to enable on-the-fly predictions of species distributions, range changes, and early warning signals for threatened species. The ultimate vision is a globally connected, collaborative knowledge- and tool-base for regional and local biodiversity decision-making, education, monitoring, and projection. For currently available tools, more information and to follow progress, go to MOL.org.
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
Level-2 Milestone 3504: Scalable Applications Preparations and Outreach for the Sequoia ID (Dawn)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futral, W. Scott; Gyllenhaal, John C.; Hedges, Richard M.
2010-07-02
This report documents LLNL SAP project activities in anticipation of the ASC Sequoia system, ASC L2 milestone 3504: Scalable Applications Preparations and Outreach for the Sequoia ID (Dawn), due June 30, 2010.
Scalable Metadata Management for a Large Multi-Source Seismic Data Repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaylord, J. M.; Dodge, D. A.; Magana-Zook, S. A.
In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.
Ambient-aware continuous care through semantic context dissemination.
Ongenae, Femke; Famaey, Jeroen; Verstichel, Stijn; De Zutter, Saar; Latré, Steven; Ackaert, Ann; Verhoeve, Piet; De Turck, Filip
2014-12-04
The ultimate ambient-intelligent care room contains numerous sensors and devices to monitor the patient, sense and adjust the environment and support the staff. This sensor-based approach results in a large amount of data, which can be processed by current and future applications, e.g., task management and alerting systems. Today, nurses are responsible for coordinating all these applications and supplied information, which reduces the added value and slows down the adoption rate. The aim of the presented research is the design of a pervasive and scalable framework that is able to optimize continuous care processes by intelligently reasoning on the large amount of heterogeneous care data. The developed Ontology-based Care Platform (OCarePlatform) consists of modular components that perform a specific reasoning task. Consequently, they can easily be replicated and distributed. Complex reasoning is achieved by combining the results of different components. To ensure that the components only receive information, which is of interest to them at that time, they are able to dynamically generate and register filter rules with a Semantic Communication Bus (SCB). This SCB semantically filters all the heterogeneous care data according to the registered rules by using a continuous care ontology. The SCB can be distributed and a cache can be employed to ensure scalability. A prototype implementation is presented consisting of a new-generation nurse call system supported by a localization and a home automation component. The amount of data that is filtered and the performance of the SCB are evaluated by testing the prototype in a living lab. The delay introduced by processing the filter rules is negligible when 10 or fewer rules are registered. The OCarePlatform allows disseminating relevant care data for the different applications and additionally supports composing complex applications from a set of smaller independent components.
This way, the platform significantly reduces the amount of information that needs to be processed by the nurses. The delay resulting from processing the filter rules is linear in the number of rules. Distributed deployment of the SCB and the use of a cache allow further improvement of these performance results.
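The filter-rule mechanism of the Semantic Communication Bus described above can be illustrated with a minimal sketch; the ontology-based semantic reasoning is replaced here by plain predicates, and all names (`SemanticBus`, `register`, `publish`) are illustrative assumptions, not the OCarePlatform API:

```python
class SemanticBus:
    # Minimal sketch of a bus that forwards each datum only to the
    # components whose registered filter rules accept it.
    def __init__(self):
        self.rules = {}  # component name -> predicate over data dicts

    def register(self, component, predicate):
        # A component dynamically registers a filter rule (here: a predicate).
        self.rules[component] = predicate

    def publish(self, datum):
        # Return the components whose rules match this care datum.
        return [c for c, p in self.rules.items() if p(datum)]
```

In the real platform the rules are evaluated against a continuous care ontology rather than dictionary lookups, but the dissemination pattern is the same: each component receives only the data it declared interest in.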
A Centrifugal Contactor Design to Facilitate Remote Replacement
DOE Office of Scientific and Technical Information (OSTI.GOV)
David H. Meikrantz; Jack. D. Law; Troy G. Garn
2011-03-01
Advanced designs of nuclear fuel recycling and radioactive waste treatment plants are expected to include more ambitious goals for solvent extraction based separations, including higher separation efficiency, high-level waste minimization, and a greater focus on continuous processes to minimize cost and footprint. Therefore, Annular Centrifugal Contactors (ACCs) are destined to play a more important role in such future processing schemes. This work continues the development of remote designs for ACCs that can process the large throughputs needed for future nuclear fuel recycling and radioactive waste treatment plants. A three-stage, 12.5 cm diameter rotor module has been constructed and is being evaluated for use in highly radioactive environments. This prototype assembly employs three standard CINC V-05 clean-in-place (CIP) units modified for remote service and replacement via new methods of connection for solution inlets, outlets, drain and CIP. Hydraulic testing and functional checks were successfully conducted, and the prototype was then evaluated for remote handling and maintenance. Removal and replacement of the center-position V-05R contactor in the three-stage assembly was demonstrated using an overhead rail-mounted PaR manipulator. Initial evaluation indicates a viable new design for interconnecting and cleaning individual stages while retaining the benefits of commercially reliable ACC equipment. Replacement of a single stage via remote manipulators and tools is estimated to take about 30 minutes, perhaps fast enough to support a contactor change without loss of process equilibrium. The design presented in this work is scalable to commercial ACC models from V-05 to V-20, with total throughput rates ranging from 20 to 650 liters per minute.
Scalable Domain Decomposed Monte Carlo Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malony, Allen D; Shende, Sameer
This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas: * scalable parallel monitoring * kernel-level performance measurement * parallel I/O system measurement * large-scale and hybrid application performance measurement * online scalable performance data reduction and analysis * binary instrumentation
Scalable cloud without dedicated storage
NASA Astrophysics Data System (ADS)
Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.
2015-05-01
We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage; the dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built on the basis of open-source components such as OpenStack, Ceph, etc.
Scalable Robust Principal Component Analysis Using Grassmann Averages.
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J
2016-11-01
In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
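As a rough illustration of the Grassmann Average idea described above (not the authors' implementation), the sign-weighted averaging of one-dimensional subspaces can be sketched in NumPy; the function name, initialization, and stopping tolerance are assumptions:

```python
import numpy as np

def grassmann_average(X, iters=100, tol=1e-10):
    # X: (n, d) zero-mean data; each row spans a 1-D subspace.
    w = np.linalg.norm(X, axis=1)        # weights = observation magnitudes
    mask = w > 0
    U = X[mask] / w[mask][:, None]       # unit directions on the sphere
    w = w[mask]
    q = U[0].copy()                      # initialize from the first observation
    for _ in range(iters):
        s = np.sign(U @ q)               # resolve the subspace sign ambiguity
        s[s == 0] = 1.0
        q_new = (s * w) @ U              # weighted average of aligned directions
        q_new /= np.linalg.norm(q_new)
        converged = 1.0 - abs(q_new @ q) < tol
        q = q_new
        if converged:
            break
    return q
```

With anisotropic Gaussian data the returned unit vector closely matches the leading principal component, consistent with the correspondence the abstract states.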
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
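The idea of compressing each pixel's spectral vector before it is exchanged over the network can be sketched with a thresholded Haar wavelet transform; this is an illustrative stand-in, not the paper's compression scheme, and the function names and `keep` fraction are assumptions (the vector length must be a power of two here):

```python
import numpy as np

def haar_fwd(x):
    # Full multilevel Haar decomposition of a length-2^k spectral vector.
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    while n > 1:
        half = n // 2
        avg = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)
        det = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)
        x[:half] = avg
        x[half:n] = det
        n = half
    return x

def haar_inv(c):
    # Invert haar_fwd level by level.
    c = np.asarray(c, dtype=float).copy()
    n, N = 1, len(c)
    while n < N:
        avg, det = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (avg + det) / np.sqrt(2.0)
        c[1:2 * n:2] = (avg - det) / np.sqrt(2.0)
        n *= 2
    return c

def compress_pixel(vec, keep=0.25):
    # Keep only the largest-magnitude wavelet coefficients before the
    # pixel vector is exchanged between processors (lossy compression).
    c = haar_fwd(vec)
    k = max(1, int(keep * len(c)))
    thresh = np.sort(np.abs(c))[-k]
    c[np.abs(c) < thresh] = 0.0
    return c
```

Only the dominant coefficients survive, so the message payload can be encoded sparsely; the trade-off between `keep` and reconstruction accuracy mirrors the adaptive lossy compression discussed above.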
: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
Transportation Network Topologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Scott, John M.
2004-01-01
A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts.
The topology proposed considers air transportation as a system of networks, within which strategies for scalability of the topology may be enabled by technologies and policies. In particular, the effects of scalable ICNS concepts are evaluated within this proposed topology. Alternative business models are appearing on the scene as the old centralized hub-and-spoke model reaches the limits of its scalability. These models include growth of point-to-point scheduled air transportation service (e.g., the RJ phenomenon and the 'Southwest Effect'). Another is a new business model for on-demand, widely distributed, air mobility in jet taxi services. The new businesses forming around this vision are targeting personal air mobility to virtually any of the thousands of origins and destinations throughout suburban, rural, and remote communities and regions. Such advancement in air mobility has many implications for requirements for airports, airspace, and consumers. These new paradigms could support scalable alternatives for the expansion of future air mobility to more consumers in more places.
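Scale-free networks of the kind proposed above for modeling airspace architectures can be generated by preferential attachment; a minimal Barabási-Albert sketch (illustrative, not from the paper):

```python
import random

def barabasi_albert(n, m, seed=0):
    # Grow a scale-free graph by preferential attachment: each new node
    # links to m existing nodes chosen with probability proportional to
    # their current degree (nodes appear in `repeated` once per edge end).
    rng = random.Random(seed)
    edges = []
    repeated = []
    targets = list(range(m))          # m initial seed nodes
    for v in range(m, n):
        for t in targets:
            edges.append((v, t))
            repeated.append(v)
            repeated.append(t)
        picked = set()
        while len(picked) < m:        # m distinct, degree-biased picks
            picked.add(rng.choice(repeated))
        targets = sorted(picked)
    return edges
```

Early nodes accumulate disproportionately many links, producing the hub-dominated degree distribution characteristic of scale-free topologies such as hub-and-spoke route networks.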
Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D
2016-01-01
The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescales, larger numbers of attributes, and statistically significant data sizes. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volumes of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and the Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability: the ability to efficiently process increasing volumes of data; (b) Adaptability: the toolkit can be deployed across different computing configurations; and (c) Ease of programming: the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 data nodes.
The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/.
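The partitioned storage idea behind CSF can be illustrated by splitting a signal into fixed-length epochs that can be stored and processed independently on separate data nodes; the epoch length and function name are assumptions, not the CSF specification:

```python
def partition_signal(samples, rate_hz, epoch_s=30):
    # Split a raw signal into fixed-length epochs so that chunks can be
    # stored and processed independently on commodity data nodes.
    n = int(rate_hz * epoch_s)
    return [samples[i:i + n] for i in range(0, len(samples), n)]
```

Each epoch becomes an independent unit of work, which is what lets a Pig pipeline scale out across Hadoop data nodes.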
Towards Rapid Re-Certification Using Formal Analysis
2015-07-22
profiles will help ensure that information assurance requirements are commensurate with risk and scalable based on an application's changing external … Scalability Evaluation … agility in certification processes. Software re-certification processes require significant expenditure in order to provide evidence of information …
Mavrommatis, Kostas
2017-12-22
DOE JGI's Kostas Mavrommatis, chair of the Scalability of Comparative Analysis, Novel Algorithms and Tools panel, at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
2004-10-01
MONITORING AGENCY NAME(S) AND ADDRESS(ES): Defense Advanced Research Projects Agency, AFRL/IFTC, 3701 North Fairfax Drive … "Scalable Parallel Libraries for Large-Scale Concurrent Applications," Technical Report UCRL-JC-109251, Lawrence Livermore National Laboratory
NASA Technical Reports Server (NTRS)
Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.
2003-01-01
This paper describes scalability issues of evolutionary-driven automatic synthesis of electronic circuits. The article begins by reviewing the concepts of circuit evolution and discussing the limitations of this technique when trying to achieve more complex systems.
Gate-tunable electron interaction in high-κ dielectric films
Kondovych, Svitlana; Luk’yanchuk, Igor; Baturina, Tatyana I.; ...
2017-02-20
The two-dimensional (2D) logarithmic character of the Coulomb interaction between charges, and the resulting logarithmic confinement, is a remarkable inherent property of high dielectric constant (high-κ) thin films with far-reaching implications. First and foremost, this is the charge Berezinskii-Kosterlitz-Thouless transition, whose notable manifestation is the low-temperature superinsulating topological phase. Here we show that the range of the confinement can be tuned by an external gate electrode, and we unravel a variety of electrostatic interactions in high-κ films. Our findings open a unique laboratory for the in-depth study of topological phase transitions and a plethora of related phenomena, ranging from the criticality of quantum metal- and superconductor-insulator transitions to the effects of charge trapping and Coulomb scalability in memory nanodevices.
Nanodiamonds as platforms for biology and medicine.
Man, Han B; Ho, Dean
2013-02-01
Nanoparticles possess a wide range of exceptional properties applicable to biology and medicine. In particular, nanodiamonds (NDs) are being studied extensively because they possess unique characteristics that make them suitable as platforms for diagnostics and therapeutics. This carbon-based material (2-8 nm) is medically relevant because it unites several key properties necessary for clinical applications, such as stability and compatibility in biological environments, and scalability in production. Research by the Ho group and others has yielded ND particles with a variety of capabilities ranging from delivery of chemotherapeutic drugs to targeted labeling and uptake studies. In addition, encouraging new findings have demonstrated the ability for NDs to effectively treat chemoresistant tumors in vivo. In this review, we highlight the progress made toward bringing nanodiamonds from the bench to the bedside.
Microchip laser mid-infrared supercontinuum laser source based on an As2Se3 fiber.
Gattass, Rafael R; Brandon Shaw, L; Sanghera, Jasbinder S
2014-06-15
We report on a proof of concept for a compact supercontinuum source for the mid-infrared wavelength range based on a microchip laser and nonlinear conversion inside a selenide-based optical fiber. The spectrum extends from 3.74 to 4.64 μm at -10 dB from the peak and from 3.65 to 4.9 μm at -20 dB from the peak, emitting beyond the wavelength range where periodically poled lithium niobate (PPLN) starts to display a power penalty. Wavelength conversion occurs inside the core of a single-mode fiber, resulting in a high-brightness emission source. A maximum average power of 5 mW was demonstrated, but the architecture is scalable to higher average powers.
Vacuum Deployment and Testing of a 4-Quadrant Scalable Inflatable Solar Sail System
NASA Technical Reports Server (NTRS)
Lichodziejewski, David; Derbes, Billy; Galena, Daisy; Friese, Dave
2005-01-01
Solar sails reflect photons streaming from the sun and transfer momentum to the sail. The thrust, though small, is continuous and acts for the life of the mission without the need for propellant. Recent advances in materials and ultra-low mass gossamer structures have enabled a host of useful missions utilizing solar sail propulsion. The team of L'Garde, the Jet Propulsion Laboratory, Ball Aerospace, and Langley Research Center, under the direction of the NASA In-Space Propulsion office, has been developing a scalable solar sail configuration to address NASA's future space propulsion needs. The baseline design currently in development and testing was optimized around the 1 AU solar sentinel mission. Featuring inflatably deployed sub-Tg rigidized beam components, the 10,000 sq m sail and support structure weighs only 47.5 kg, including margin, yielding an areal density of 4.8 g/sq m. The striped sail architecture, net/membrane sail design, and L'Garde's conical boom deployment technique allow scalability without high mass penalties. This same structural concept can be scaled to meet and exceed the requirements of a number of other useful NASA missions. This paper discusses the interim accomplishments of phase 3 of a 3-phase NASA program to advance the technology readiness level (TRL) of the solar sail system from 3 toward 6 in 2005. Under earlier phases of the program, many test articles were fabricated and tested successfully. Most notably, an unprecedented 4-quadrant 10 m solar sail ground test article was fabricated, subjected to launch environment tests, and successfully deployed under simulated space conditions at NASA Plum Brook's 30 m vacuum facility. Phase 2 of the program saw much development and testing of this design, validating assumptions, mass estimates, and predicted mission scalability. Under Phase 3, a much larger 20 m square test article, including a subscale vane, has been fabricated and tested.
A 20 m system ambient deployment was successfully conducted after enduring Delta-2 launch environment testing. The program will culminate in a vacuum deployment of a 20 m subscale test article at NASA Glenn's Plum Brook 30 m vacuum test facility to bring the TRL as close to 6 as possible in 1 g. This focused program will pave the way for a flight experiment of this highly efficient space propulsion technology.
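The quoted areal density follows directly from the stated sail mass and area; a one-line check (4.75 g/sq m, which the abstract rounds to 4.8):

```python
def areal_density_g_per_m2(mass_kg, area_m2):
    # Areal density = total structure mass divided by sail area,
    # converted from kg/m^2 to g/m^2.
    return mass_kg * 1000.0 / area_m2
```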
Micropatterned Pyramidal Ionic Gels for Sensing Broad-Range Pressures with High Sensitivity.
Cho, Sung Hwan; Lee, Seung Won; Yu, Seunggun; Kim, Hyeohn; Chang, Sooho; Kang, Donyoung; Hwang, Ihn; Kang, Han Sol; Jeong, Beomjin; Kim, Eui Hyuk; Cho, Suk Man; Kim, Kang Lib; Lee, Hyungsuk; Shim, Wooyoung; Park, Cheolmin
2017-03-22
The development of pressure sensors that are effective over a broad range of pressures is crucial for the future development of electronic skin applicable to the detection of a wide pressure range from acoustic wave to dynamic human motion. Here, we present flexible capacitive pressure sensors that incorporate micropatterned pyramidal ionic gels to enable ultrasensitive pressure detection. Our devices show superior pressure-sensing performance, with a broad sensing range from a few pascals up to 50 kPa, with fast response times of <20 ms and a low operating voltage of 0.25 V. Since high-dielectric-constant ionic gels were employed as constituent sensing materials, an unprecedented sensitivity of 41 kPa⁻¹ in the low-pressure regime of <400 Pa could be realized in the context of a metal-insulator-metal platform. This broad-range capacitive pressure sensor allows for the efficient detection of pressure from a variety of sources, including sound waves, a lightweight object, jugular venous pulses, radial artery pulses, and human finger touch. This platform offers a simple, robust approach to low-cost, scalable device design, enabling practical applications of electronic skin.
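The sensitivity figure quoted above follows the definition commonly used for capacitive pressure sensors, S = (ΔC/C0)/Δp; a small helper with illustrative values (not the paper's raw measurements):

```python
def sensitivity_kpa_inv(c0, c, delta_p_kpa):
    # S = (ΔC / C0) / Δp, in kPa^-1: the relative capacitance change
    # per unit of applied pressure.
    return ((c - c0) / c0) / delta_p_kpa
```

For example, a 41% capacitance change under 10 Pa (0.01 kPa) corresponds to the quoted 41 kPa⁻¹ sensitivity.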
Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks
Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok
2016-01-01
Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113
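The buffer-driven download control described above amounts to a threshold rule: stop fetching when enough playback time is buffered, request more when the buffer runs low. A minimal sketch of such a rule, where the threshold values and function names are illustrative assumptions rather than the paper's actual parameters:

```python
def next_action(buffered_seconds, low=5.0, high=15.0):
    """Decide the member device's next download action from its
    current buffer level, measured in seconds of playable video.

    Hypothetical hysteresis thresholds: below `low`, request more
    data; at or above `high`, stop downloading; otherwise hold.
    """
    if buffered_seconds >= high:
        return "stop"
    if buffered_seconds < low:
        return "request"
    return "hold"
```

Using two thresholds rather than one avoids oscillating between requesting and stopping on every small buffer change, which would itself add contention on the shared WLAN.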
The Quantum Socket: Wiring for Superconducting Qubits - Part 3
NASA Astrophysics Data System (ADS)
Mariantoni, M.; Bejianin, J. H.; McConkey, T. G.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.
The implementation of a quantum computer requires quantum error correction codes, which allow errors occurring on physical quantum bits (qubits) to be corrected. Ensembles of physical qubits will be grouped to form a logical qubit with a lower error rate. Reaching low error rates will necessitate a large number of physical qubits. Thus, a scalable qubit architecture must be developed. Superconducting qubits have been used to realize error correction. However, a truly scalable qubit architecture has yet to be demonstrated. A critical step towards scalability is the realization of a wiring method that allows qubits to be addressed densely and accurately. A quantum socket that serves this purpose has been designed and tested at microwave frequencies. In this talk, we show results where the socket is used at millikelvin temperatures to measure an on-chip superconducting resonator. The control electronics are another fundamental element for scalability. We will present a proposal based on the quantum socket to interconnect a classical control hardware to a superconducting qubit hardware, where both are operated at millikelvin temperatures.
Peterson, Kevin J.; Pathak, Jyotishman
2014-01-01
Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
A Systems Approach to Scalable Transportation Network Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2006-01-01
Emerging needs in transportation network modeling and simulation are raising new challenges with respect to scalability of network size and vehicular traffic intensity, speed of simulation for simulation-based optimization, and fidelity of vehicular behavior for accurate capture of event phenomena. Parallel execution is warranted to sustain the required detail, size and speed. However, few parallel simulators exist for such applications, partly due to the challenges underlying their development. Moreover, many simulators are based on time-stepped models, which can be computationally inefficient for the purposes of modeling evacuation traffic. Here an approach is presented to designing a simulator with memory and speed efficiency as the goals from the outset, and, specifically, scalability via parallel execution. The design makes use of discrete event modeling techniques as well as parallel simulation methods. Our simulator, called SCATTER, is being developed, incorporating such design considerations. Preliminary performance results are presented on benchmark road networks, showing scalability to one million vehicles simulated on one processor.
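The efficiency argument above rests on discrete-event rather than time-stepped modeling: the clock jumps directly from one scheduled event to the next instead of ticking through empty time steps. A minimal single-processor sketch of such an event loop for vehicle traffic, with all names and the fixed per-link travel time as illustrative assumptions (this is not SCATTER's actual model):

```python
import heapq


def simulate(departures, horizon, travel_time=30.0):
    """Minimal discrete-event loop over a single illustrative road link.

    `departures` is a list of (time, vehicle_id) tuples. Each departure
    schedules an arrival event `travel_time` seconds later; the priority
    queue always yields the earliest pending event, so simulated time
    advances event-to-event, never in fixed steps.
    Returns a dict mapping vehicle_id to arrival time.
    """
    pq = [(t, "depart", vid) for t, vid in departures]
    heapq.heapify(pq)
    arrived = {}
    while pq and pq[0][0] <= horizon:
        t, kind, vid = heapq.heappop(pq)
        if kind == "depart":
            heapq.heappush(pq, (t + travel_time, "arrive", vid))
        else:
            arrived[vid] = t
    return arrived
```

In a real road network the arrival handler would schedule the vehicle's departure onto its next link, and travel times would depend on congestion; the sketch only shows why work scales with the number of events rather than the number of time steps.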
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses challenges with respect to scalability. Further scalability issues arise from the diversity of users and the large variety of analysis tasks. We have developed "PatViz", a system for the interactive analysis of patent information that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.