Sample records for "efficiently process large"

  1. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers. The super-scalability and portability of the approach are demonstrated on several parallel computers.

  2. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers. The super-scalability and portability of the approach are demonstrated on several parallel computers.

  3. Overview of processing activities aimed at higher efficiencies and economical production

    NASA Technical Reports Server (NTRS)

    Bickler, D. B.

    1985-01-01

    An overview of processing activities aimed at higher efficiencies and economical production is presented. The present focus is on low-cost process technology for higher-efficiency cells of 18% or higher. Process development concerns center on the use of less-than-optimum silicon sheet, the control of production yields, and making uniformly efficient large-area cells. High-efficiency cell factors that require process development are bulk material perfection, very shallow junction formation, front-surface passivation, and finely detailed metallization. Better bulk properties of the silicon sheet, and the retention of those qualities throughout large areas during cell processing, are required so that minority carrier lifetimes are maintained and cell performance is not degraded by high doping levels. When very shallow junctions are formed, the process must be sensitive to metallization punch-through, series resistance in the cell, and control of dopant leaching during surface passivation. There is a need to determine the sensitivity to processing through mathematical modeling and experimental activities.

  4. Modification in digestive processing strategies to reduce toxic trace metal uptake in a marine bivalve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Decho, A.W.; Luoma, S.N.

    1994-12-31

    Bivalves possess two major digestion pathways for processing food particles: a rapid "intestinal" pathway, where digestion is largely extracellular, and a slower "glandular" pathway, where digestion is largely intracellular. The slower glandular pathway often results in more efficient absorption of carbon but also more efficient uptake of certain metals (e.g., Cr associated with bacteria). In the bivalve Potamocorbula amurensis, large portions (>90%) of bacteria are selectively routed to the glandular pathway. This results in efficient C uptake but also efficient uptake of associated Cr. The authors further determined whether prolonged exposure to Cr-contaminated bacteria would result in high Cr uptake by the animals or whether mechanisms exist to reduce Cr exposure and uptake. Bivalves were exposed to natural food plus added bacteria (with or without added Cr) for a 6-day period, and pulse-chase experiments were then conducted to quantify digestive processing and the percent absorption efficiency (%AE) of bacterial Cr. Bivalves compensate at low Cr (2-5 µg/g sediment) by reducing overall food ingestion, while digestive processing of food remains statistically similar to controls. At high Cr (200-500 µg/g sediment) there are marked decreases in the percentage of bacteria processed by glandular digestion. This results in a lower overall %AE of Cr. The results suggest that bivalves under natural conditions might balance efficient carbon sequestration against avoiding uptake of potentially toxic metals associated with their food.

  5. A vacuum flash-assisted solution process for high-efficiency large-area perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Li, Xiong; Bi, Dongqin; Yi, Chenyi; Décoppet, Jean-David; Luo, Jingshan; Zakeeruddin, Shaik Mohammed; Hagfeldt, Anders; Grätzel, Michael

    2016-07-01

    Metal halide perovskite solar cells (PSCs) currently attract enormous research interest because of their high solar-to-electric power conversion efficiency (PCE) and low fabrication costs, but their practical development is hampered by difficulties in achieving high performance with large-size devices. We devised a simple vacuum flash-assisted solution processing method to obtain shiny, smooth, crystalline perovskite films of high electronic quality over large areas. This enabled us to fabricate solar cells with an aperture area exceeding 1 square centimeter, a maximum efficiency of 20.5%, and a certified PCE of 19.6%. By contrast, the best certified PCE to date is 15.6% for PSCs of similar size. We demonstrate that the reproducibility of the method is excellent and that the cells show virtually no hysteresis. Our approach enables the realization of highly efficient large-area PSCs for practical deployment.

  6. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    PubMed

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency, a high possibility of cost-effective fabrication, and certified power conversion efficiencies now exceeding 22%. Although many effective fabrication methods have been developed over the past decade, the practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report the development of a simple and cost-effective production method with high-temperature, short-time annealing to obtain uniform, smooth, large-grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, the fast solvent evaporation yielded perovskite films with an average domain size of 1 μm. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells, and it may also be applicable to several other material systems for more widespread practical deployment.

  7. Impact of technical and technological changes on energy efficiency of production company - case study

    NASA Astrophysics Data System (ADS)

    Szwedzka, K.; Gruszka, J.; Szafer, P.

    2016-08-01

    Improving energy efficiency is one of the strategic objectives of the European Union for a rational energy economy. Both small and large end-users have been obliged to make efforts to improve energy efficiency. This article aims to show the possibilities of improving energy efficiency by introducing technical and technological changes to the process of pine lumber drying. The object of the research is the lumber drying process implemented in a production company that is a key supplier of a large furniture manufacturer. Pine lumber drying chambers consume about 45% of the total electricity in the sawmill. According to various sources, drying 1 m³ of lumber uses about 30-60 kWh and depends, inter alia, on the drying process itself, the factors affecting the processing time, and the desired output moisture content of the timber. The changes to the pine lumber drying process proposed in the article have been positively validated in the company and, as a result, energy consumption per 1 m³ of product declined by 18%.

  8. Cram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, T.

    2014-08-29

    Large-scale systems like Sequoia allow running small numbers of very large (1M+ process) jobs, but their resource managers and schedulers do not allow large numbers of small (4, 8, 16, etc.) process jobs to run efficiently. Cram is a tool that allows users to launch many small MPI jobs within one large partition, and to overcome the limitations of current resource management software for large ensembles of jobs.
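
    A minimal sketch of the core idea, assuming mpi4py rather than Cram's actual interface: one large MPI allocation is split into many independent sub-communicators, each of which then behaves like a small job. The group size and per-group work here are illustrative assumptions.

    ```python
    # Hedged sketch: emulate many small MPI "virtual jobs" inside one
    # large allocation by splitting MPI_COMM_WORLD (the concept behind
    # Cram, not its implementation).
    from mpi4py import MPI

    RANKS_PER_JOB = 16                       # assumed small-job size

    world = MPI.COMM_WORLD
    color = world.rank // RANKS_PER_JOB      # which virtual job this rank joins
    subcomm = world.Split(color=color, key=world.rank)

    # Each sub-communicator now acts as an independent small MPI job.
    if subcomm.rank == 0:
        print(f"virtual job {color}: {subcomm.size} ranks")
    subcomm.Barrier()
    ```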

  9. Highly Efficient and Uniform 1 cm2 Perovskite Solar Cells with an Electrochemically Deposited NiOx Hole-Extraction Layer.

    PubMed

    Park, Ik Jae; Kang, Gyeongho; Park, Min Ah; Kim, Ju Seong; Seo, Se Won; Kim, Dong Hoe; Zhu, Kai; Park, Taiho; Kim, Jin Young

    2017-06-22

    Given that the highest certified conversion efficiency of the organic-inorganic perovskite solar cell (PSC) already exceeds 22%, which is even higher than that of the polycrystalline silicon solar cell, the significance of new scalable processes that can be utilized for preparing large-area devices and their commercialization is rapidly increasing. From this perspective, the electrodeposition method is one of the most suitable processes for preparing large-area devices because it is an already commercialized process with proven controllability and scalability. Here, a highly uniform NiOx layer prepared by electrochemical deposition is reported as an efficient hole-extraction layer of a p-i-n-type planar PSC with a large active area of >1 cm². It is demonstrated that the increased surface roughness of the NiOx layer, achieved by controlling the deposition current density, facilitates the hole extraction at the interface between perovskite and NiOx, and thus increases the fill factor and the conversion efficiency. The electrochemically deposited NiOx layer also exhibits extremely uniform thickness and morphology, leading to highly efficient and uniform large-area PSCs. As a result, the p-i-n-type planar PSC with an area of 1.084 cm² exhibits a stable conversion efficiency of 17.0% (19.2% for 0.1 cm²) without showing hysteresis effects. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Highly Efficient and Uniform 1 cm² Perovskite Solar Cells with an Electrochemically Deposited NiOx Hole-Extraction Layer

    DOE PAGES

    Park, Ik Jae; Kang, Gyeongho; Park, Min Ah; ...

    2017-05-10

    Here, given that the highest certified conversion efficiency of the organic-inorganic perovskite solar cell (PSC) already exceeds 22%, which is even higher than that of the polycrystalline silicon solar cell, the significance of new scalable processes that can be utilized for preparing large-area devices and their commercialization is rapidly increasing. From this perspective, the electrodeposition method is one of the most suitable processes for preparing large-area devices because it is an already commercialized process with proven controllability and scalability. Here, a highly uniform NiOx layer prepared by electrochemical deposition is reported as an efficient hole-extraction layer of a p-i-n-type planar PSC with a large active area of >1 cm². It is demonstrated that the increased surface roughness of the NiOx layer, achieved by controlling the deposition current density, facilitates the hole extraction at the interface between perovskite and NiOx, and thus increases the fill factor and the conversion efficiency. The electrochemically deposited NiOx layer also exhibits extremely uniform thickness and morphology, leading to highly efficient and uniform large-area PSCs. As a result, the p-i-n-type planar PSC with an area of 1.084 cm² exhibits a stable conversion efficiency of 17.0% (19.2% for 0.1 cm²) without showing hysteresis effects.

  11. A Hadoop-Based Distributed Framework for Efficiently Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

    Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other uses. However, it is challenging to efficiently store, query, and process such big data due to data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
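
    The data-parallel structure such a framework exploits can be shown without Hadoop itself. The hedged stand-in below maps a per-tile operation over image tiles with a local process pool; load_tiles and the mean-based operation are hypothetical placeholders for HDFS reads and Orfeo operators.

    ```python
    # Toy map step: process independent image tiles in parallel with a
    # local pool, standing in for MapReduce over tiles stored in HDFS.
    from multiprocessing import Pool

    import numpy as np

    def load_tiles(n=8, size=256):
        # placeholder for fetching tiles from HDFS
        return [np.random.rand(size, size) for _ in range(n)]

    def process_tile(tile):
        # placeholder for an Orfeo-style per-tile operator
        return float(tile.mean())

    if __name__ == "__main__":
        with Pool() as pool:
            results = pool.map(process_tile, load_tiles())
        print(results)
    ```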

  12. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.
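
    As a hedged illustration of why such algorithms parallelize well, consider weighted-sum scalarization, one standard way to generate supported efficient extreme points (not necessarily ADBASE's procedure): each weight vector defines an independent LP, so the weight vectors can be farmed out to separate processors. The problem data below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    C = np.array([[1.0, 2.0],                # two objectives to maximize;
                  [3.0, 1.0]])               # rows are objective vectors
    A_ub = np.array([[1.0, 1.0]])
    b_ub = [10.0]

    def efficient_point(w):
        # maximize w^T C x  <=>  minimize -(w^T C) x; each call is an
        # independent LP and could run on its own processor
        res = linprog(-(w @ C), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * 2)
        return res.x

    for w in (np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])):
        print(w, efficient_point(w))
    ```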

  13. Knowledge representation by connection matrices: A method for the on-board implementation of large expert systems

    NASA Technical Reports Server (NTRS)

    Kellner, A.

    1987-01-01

    Extremely large knowledge sources and efficient knowledge access characterizing future real-life artificial intelligence applications represent crucial requirements for on-board artificial intelligence systems due to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and corresponding reasoning mechanism is proposed which is particularly suited for the efficient processing of such large knowledge bases in expert systems.

  14. Future-oriented maintenance strategy based on automated processes is finding its way into large astronomical facilities at remote observing sites

    NASA Astrophysics Data System (ADS)

    Silber, Armin; Gonzalez, Christian; Pino, Francisco; Escarate, Patricio; Gairing, Stefan

    2014-08-01

    With the expanding sizes and increasing complexity of large astronomical observatories at remote observing sites, the call for an efficient and resource-saving maintenance concept becomes louder. The increasing number of subsystems on telescopes and instruments forces large observatories, as in industry, to rethink conventional maintenance strategies to reach this demanding goal. The implementation of fully or semi-automatic processes for standard service activities can help to keep the number of operating staff at an efficient level and to significantly reduce the consumption of valuable consumables and equipment. In this contribution we demonstrate, using the example of the 80 cryogenic subsystems of the ALMA Front End instrument, how an implemented automatic service process increases the availability of spare parts and Line Replaceable Units, and how valuable staff resources can be freed from continuous repetitive maintenance activities to allow more focus on system diagnostic tasks, troubleshooting, and the interchange of line replaceable units. The required service activities are decoupled from the day-to-day work, eliminating dependencies on workload peaks or logistic constraints. The automatic refurbishing processes run in parallel to the operational tasks with constant quality and without compromising the performance of the serviced system components. Consequently, this results in an efficiency increase and less downtime, and keeps the observing schedule on track. Automatic service processes in combination with proactive maintenance concepts provide the necessary flexibility for the complex operational work structures of large observatories. The gained planning flexibility allows an optimization of operational procedures and sequences while maintaining the required cost efficiency.

  15. [Effect of pilot UASB-SFSBR-MAP process for the large scale swine wastewater treatment].

    PubMed

    Wang, Liang; Chen, Chong-Jun; Chen, Ying-Xu; Wu, Wei-Xiang

    2013-03-01

    In this paper, a treatment process consisting of UASB, a step-fed sequencing batch reactor (SFSBR) and a magnesium ammonium phosphate precipitation reactor (MAP) was built to treat large-scale swine wastewater, aiming to overcome drawbacks of the conventional anaerobic-aerobic and SBR treatment processes such as low denitrification efficiency, high operating costs and high nutrient losses. Based on this treatment process, a pilot plant was constructed. The experimental results showed that the removal efficiencies of COD, NH4(+)-N and TP reached 95.1%, 92.7% and 88.8%, the recovery rates of NH4(+)-N and TP by the MAP process reached 23.9% and 83.8%, and the effluent quality was superior to the discharge standard of pollutants for livestock and poultry breeding (GB 18596-2001), with mass concentrations of COD, TN, NH4(+)-N, TP and SS not higher than 135, 116, 43, 7.3 and 50 mg·L(-1), respectively. The process developed was reliable, kept a self-balance of carbon source and alkalinity, and reached high nutrient recovery efficiency, while its operating cost was equal to that of the traditional anaerobic-aerobic treatment process. The treatment process therefore has high application and dissemination value and is well suited for the treatment of large-scale swine wastewater in China.

  16. Efficient Sky-Blue Perovskite Light-Emitting Devices Based on Ethylammonium Bromide Induced Layered Perovskites.

    PubMed

    Wang, Qi; Ren, Jie; Peng, Xue-Feng; Ji, Xia-Xia; Yang, Xiao-Hui

    2017-09-06

    Low-dimensional organometallic halide perovskites are actively studied for light-emitting applications due to their properties such as solution processability, high luminescence quantum yield, large exciton binding energy, and tunable band gap. Introduction of large-group ammonium halides not only serves as a convenient and versatile method to obtain layered perovskites but also allows the exploitation of the energy-funneling process to achieve high-efficiency light emission. Herein, we investigate the influence of the addition of ethylammonium bromide on the morphology, crystallite structure, and optical properties of the resultant perovskite materials and report that the phase transition from bulk to layered perovskite occurs in the presence of excess ethylammonium bromide. On the basis of this strategy, we report green perovskite light-emitting devices with a maximum external quantum efficiency of ca. 3% and power efficiency of 9.3 lm/W. Notably, blue layered perovskite light-emitting devices with Commission Internationale de l'Eclairage coordinates of (0.16, 0.23) exhibit a maximum external quantum efficiency of 2.6% and power efficiency of 1 lm/W at 100 cd/m², representing a large improvement over the previously reported analogous devices.

  17. Improving timeliness and efficiency in the referral process for safety net providers: application of the Lean Six Sigma methodology.

    PubMed

    Deckard, Gloria J; Borkowski, Nancy; Diaz, Deisell; Sanchez, Carlos; Boisette, Serge A

    2010-01-01

    Designated primary care clinics largely serve low-income and uninsured patients who present a disproportionate number of chronic illnesses and face great difficulty in obtaining the medical care they need, particularly the access to specialty physicians. With limited capacity for providing specialty care, these primary care clinics generally refer patients to safety net hospitals' specialty ambulatory care clinics. A large public safety net health system successfully improved the effectiveness and efficiency of the specialty clinic referral process through application of Lean Six Sigma, an advanced process-improvement methodology and set of tools driven by statistics and engineering concepts.

  18. Combined process automation for large-scale EEG analysis.

    PubMed

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
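
    A skeleton of the linked-automation idea described above, assuming NumPy: each analysis step is a small function and a driver chains them, so every recording flows through the steps without manual handling. The step bodies are simplified placeholders, not the authors' implementations.

    ```python
    import numpy as np

    def isolate_intervals(eeg):                 # (1) pre/post stimulation
        mid = len(eeg) // 2
        return eeg[:mid], eeg[mid:]

    def band_waveform(eeg, lo, hi, fs=1000.0):  # (2) user-defined band
        spec = np.fft.rfft(eeg)
        freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
        spec[(freqs < lo) | (freqs > hi)] = 0.0
        return np.fft.irfft(spec, len(eeg))

    def detect_spikes(eeg, k=4.0):              # (3) threshold spike picker
        return np.where(np.abs(eeg) > k * eeg.std())[0]

    def run_pipeline(eeg):
        pre, post = isolate_intervals(eeg)
        theta = band_waveform(post, 4.0, 8.0)
        spikes = detect_spikes(theta)
        return {"n_spikes": len(spikes)}        # (4) quantification

    print(run_pipeline(np.random.randn(4000)))
    ```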

  19. Large-scale two-photon imaging revealed super-sparse population codes in the V1 superficial layer of awake monkeys.

    PubMed

    Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing

    2018-04-26

    One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
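
    For concreteness, the population-sparseness figure quoted above amounts to the fraction of neurons whose response to a given image exceeds a "strong response" cutoff. The sketch below computes that fraction on synthetic data; the percentile threshold is an assumption, not the authors' criterion.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    responses = rng.exponential(1.0, size=(1000, 2000))   # images x neurons

    threshold = np.percentile(responses, 99.5)            # "strong" cutoff
    frac_strong = (responses > threshold).mean(axis=1)    # per image
    print(f"mean fraction responding strongly: {frac_strong.mean():.4f}")
    ```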

  20. An efficient parallel-processing method for transposing large matrices in place.

    PubMed

    Portnoff, M R

    1999-01-01

    We have developed an efficient algorithm for transposing large matrices in place. The algorithm is efficient because data are accessed either sequentially in blocks or randomly within blocks small enough to fit in cache, and because the same indexing calculations are shared among identical procedures operating on independent subsets of the data. This inherent parallelism makes the method well suited for a multiprocessor computing environment. The algorithm is easy to implement because the same two procedures are applied to the data in various groupings to carry out the complete transpose operation. Using only a single processor, we have demonstrated nearly an order of magnitude increase in speed over the previously published algorithm by Gate and Twigg for transposing a large rectangular matrix in place. With multiple processors operating in parallel, the processing speed increases almost linearly with the number of processors. A simplified version of the algorithm for square matrices is presented as well as an extension for matrices large enough to require virtual memory.
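
    A simplified sketch of the square-matrix case, assuming NumPy: the matrix is transposed in place by copying and swapping cache-sized block pairs. Since each (i, j) block pair is independent, the pairs could be handed to separate processors, which is the parallelism the abstract describes.

    ```python
    import numpy as np

    def transpose_inplace_blocked(a, block=64):
        """In-place transpose of a square matrix, one block pair at a time."""
        n = a.shape[0]
        assert a.shape == (n, n)
        for i in range(0, n, block):
            for j in range(i, n, block):
                upper = a[i:i + block, j:j + block].copy()
                lower = a[j:j + block, i:i + block].copy()
                a[j:j + block, i:i + block] = upper.T
                a[i:i + block, j:j + block] = lower.T

    a = np.arange(16.0).reshape(4, 4)
    transpose_inplace_blocked(a, block=2)
    assert np.array_equal(a, np.arange(16.0).reshape(4, 4).T)
    ```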

  1. An Effective Methodology for Processing and Analyzing Large, Complex Spacecraft Data Streams

    ERIC Educational Resources Information Center

    Teymourlouei, Haydar

    2013-01-01

    The emerging large datasets have made efficient data processing a much more difficult task for the traditional methodologies. Invariably, datasets continue to increase rapidly in size with time. The purpose of this research is to give an overview of some of the tools and techniques that can be utilized to manage and analyze large datasets. We…

  2. Process configuration of Liquid-nitrogen Energy Storage System (LESS) for maximum turnaround efficiency

    NASA Astrophysics Data System (ADS)

    Dutta, Rohan; Ghosh, Parthasarathi; Chowdhury, Kanchan

    2017-12-01

    The diverse power generation sector requires energy storage due to the penetration of variable renewable energy sources and the use of CO2 capture plants with fossil-fuel-based power plants. Cryogenic energy storage, being a large-scale, decoupled system capable of producing power in the range of megawatts, is one of the options. The drawback of these systems is low turnaround efficiency, because the liquefaction processes are highly energy intensive. In this paper, opportunities for improving the turnaround efficiency of such a plant based on liquid nitrogen were identified and some of them were addressed. A method using multiple stages of reheat and expansion was proposed, improving the turnaround efficiency from 22% to 47% with four such stages in the cycle. The novelty here is the application of reheating in a cryogenic system and the utilization of waste heat for that purpose. Based on the study, process conditions for a laboratory-scale setup were determined and are presented here.
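
    A back-of-the-envelope check on the value of reheating, independent of the paper's Aspen HYSYS model: an ideal gas expanded isentropically over a fixed overall pressure ratio in N stages, reheated to the inlet temperature before each stage, delivers more total specific work as N grows. All numbers below are illustrative assumptions.

    ```python
    CP, GAMMA = 1040.0, 1.4    # J/(kg.K) and heat-capacity ratio for N2
    T_IN, PR = 290.0, 100.0    # reheat temperature (K), overall pressure ratio

    def specific_work(n_stages):
        stage_pr = PR ** (1.0 / n_stages)                  # equal split
        per_stage = CP * T_IN * (1.0 - stage_pr ** (-(GAMMA - 1.0) / GAMMA))
        return n_stages * per_stage                        # J/kg total

    for n in (1, 2, 4):
        print(f"{n} stage(s): {specific_work(n) / 1e3:.0f} kJ/kg")
    ```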

  3. Research on the technique of large-aperture off-axis parabolic surface processing using tri-station machine and its applicability.

    PubMed

    Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun

    2015-09-01

    In order to process large-aperture aspherical mirrors, we designed and constructed a tri-station machining center whose three-station device provides vectored feed motion on up to 10 axes. Based on this processing center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes while the stations are switchable, thus lowering cost and enhancing processing efficiency. The applicability of the tri-station machine is also analyzed. At the same time, a simple and efficient zero-calibration method for processing is proposed. To validate the processing model, we used our processing center to process an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm. The experimental results indicate that, with a one-step iterative process, the peak-to-valley (PV) and root-mean-square (RMS) errors of the mirror converged from 3.441 μm and 0.5203 μm to 2.637 μm and 0.2962 μm, respectively, with the RMS reduced by 43%. The validity and high accuracy of the model are thereby demonstrated.

  4. Algorithms for solving large sparse systems of simultaneous linear equations on vector processors

    NASA Technical Reports Server (NTRS)

    David, R. E.

    1984-01-01

    Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
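
    A modern serial analogue, assuming SciPy: splu applies a fill-reducing column permutation (COLAMD by default) before the sparse LU factorization, echoing the reorder-then-factor strategy described above. The 1-D Poisson system below is only a stand-in test matrix.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    lu = splu(A)                 # reorder (COLAMD) + LU factorization
    x = lu.solve(b)
    print(np.allclose(A @ x, b))
    ```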

  5. Approximately 800-nm-Thick Pinhole-Free Perovskite Films via Facile Solvent Retarding Process for Efficient Planar Solar Cells.

    PubMed

    Yuan, Zhongcheng; Yang, Yingguo; Wu, Zhongwei; Bai, Sai; Xu, Weidong; Song, Tao; Gao, Xingyu; Gao, Feng; Sun, Baoquan

    2016-12-21

    Device performance of organometal halide perovskite solar cells significantly depends on the quality and thickness of the perovskite absorber films. However, conventional deposition methods often generate pinholes within ∼300 nm-thick perovskite films, which are detrimental to large-area device manufacture. Here we demonstrated a simple solvent-retarding process to deposit uniform, pinhole-free perovskite films with thicknesses up to ∼800 nm. Solvent evaporation during the retarding process facilitated component separation in the mixed halide perovskite precursors, and hence the final films exhibited pinhole-free morphology and large grain sizes. In addition, the increased precursor concentration after the solvent-retarding process led to thick perovskite films. Based on the uniform and thick perovskite films prepared by this convenient process, a champion device efficiency of up to 16.8% was achieved. We believe that this simple deposition procedure for high-quality perovskite films of around micrometer thickness has great potential for application in large-area perovskite solar cells and other optoelectronic devices.

  6. Large Size Color-tunable Electroluminescence from Cationic Iridium Complexes-based Light-emitting Electrochemical Cells

    PubMed Central

    Zeng, Qunying; Li, Fushan; Guo, Tailiang; Shan, Guogang; Su, Zhongmin

    2016-01-01

    Solution-processable light-emitting electrochemical cells (LECs) with simple device architecture have become attractive candidates for application in next-generation lighting and flat-panel displays. Herein, single-layer LECs employing two cationic Ir(III) complexes showing highly efficient blue-green and yellow electroluminescence, with peak current efficiencies of 31.6 cd A−1 and 40.6 cd A−1, respectively, have been reported. By using both complexes in the device, color-tunable LECs with a single spectral peak in the wavelength range from 499 to 570 nm were obtained by varying their ratios. In addition, the fabrication of efficient LECs was demonstrated based on a low-cost doctor-blade coating technique, which is compatible with the roll-to-roll fabrication process for large-size production. In this work, for the first time, 4 inch LEC devices fabricated by doctor-blade coating were demonstrated, exhibiting efficiencies of 23.4 cd A−1 and 25.4 cd A−1 for the blue-green and yellow emission, respectively. These exciting results indicate that highly efficient LECs with controllable color can be realized and find practical application in large-size lighting and displays. PMID:27278527

  7. Efficiency of the energy transfer in the FMO complex using hierarchical equations on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Kramer, Tobias; Kreisbeck, Christoph; Rodriguez, Mirta; Hein, Birgit

    2011-03-01

    We study the efficiency of the energy transfer in the Fenna-Matthews-Olson complex by solving the non-Markovian hierarchical equations (HE) proposed by Ishizaki and Fleming in 2009, which properly include the reorganization process. We compare this to the Markovian approach and find that the Markovian dynamics overestimates the thermalization rate, yielding higher efficiencies than the HE. Using the high performance of graphics processing units (GPUs), we cover a large range of reorganization energies and temperatures and find that initial quantum beatings are important for the energy distribution but of limited influence on the efficiency. Our efficient GPU implementation of the HE also allows us to calculate nonlinear spectra of the FMO complex. For references, see www.quantumdynamics.de

  8. Co-digestion of sewage sludge from external small WWTP's in a large plant

    NASA Astrophysics Data System (ADS)

    Miodoński, Stanisław

    2017-11-01

    Improving the energy efficiency of WWTPs (Waste Water Treatment Plants) is a crucial task for modern wastewater treatment technology. Optimization of the treatment process is important, but the main goal will not be achieved without increasing the production of renewable energy from sewage sludge in the anaerobic digestion process, which is most often used as the sludge stabilization method at large WWTPs. Usually, anaerobic digestion reactors used for sludge digestion were designed with a reserve and most of them are oversized; in many cases that reserve is unused. On the other hand, smaller WWTPs have problems with the management of sewage sludge due to a lack of adequately developed infrastructure for sludge stabilization. This paper presents an analysis of using the technological reserve of anaerobic digestion reactors at a large WWTP (1 million P.E.) for the stabilization of sludge collected from smaller WWTPs in a co-digestion process. Over 30 small WWTPs from the same region as the large WWTP were considered in this study. Furthermore, the analysis also included an evaluation of potential sludge disintegration pre-treatment for improving co-digestion efficiency.

  9. Integration and segregation of large-scale brain networks during short-term task automatization

    PubMed Central

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-01-01

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095

  10. Experimental investigation of precision grinding oriented to achieve high process efficiency for large and middle-scale optics

    NASA Astrophysics Data System (ADS)

    Li, Ping; Jin, Tan; Guo, Zongfu; Lu, Ange; Qu, Meina

    2016-10-01

    High-efficiency machining of large precision optical surfaces is a challenging task for researchers and engineers worldwide. Higher form accuracy and lower subsurface damage help to significantly reduce the cycle time of the subsequent polishing process, save production cost, and provide a strong enabling technology to support large telescope and laser fusion energy projects. In this paper, employing an infeed grinding (IG) mode with a rotary table and a cup wheel, a multi-stage grinding process chain, and precision compensation technology, a Φ300 mm diameter plano mirror is ground on the Schneider Surfacing Center SCG 600, which delivers a new level of quality and accuracy when grinding such large flats. Results show a PV form error of Pt < 2 μm, surface roughness Ra < 30 nm and Rz < 180 nm, subsurface damage < 20 μm, and material removal rates of up to 383.2 mm³/s.

  11. Development and manufacture of reactive-transfer-printed CIGS photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Eldada, Louay; Sang, Baosheng; Lu, Dingyuan; Stanbery, Billy J.

    2010-09-01

    In recent years, thin-film photovoltaic (PV) companies have started realizing their low-manufacturing-cost potential and capturing an increasingly large market share from multicrystalline silicon companies. Copper Indium Gallium Selenide (CIGS) is the most promising thin-film PV material, having demonstrated the highest energy conversion efficiency in both cells and modules. However, most CIGS manufacturers still face the challenge of delivering a reliable and rapid manufacturing process that can scale effectively and deliver on the promise of this material system. HelioVolt has developed a reactive transfer process for CIGS absorber formation that has the benefits of good compositional control, high-quality CIGS grains, and a fast reaction. The reactive transfer process is a two-stage CIGS fabrication method. Precursor films are deposited onto substrates and reusable print plates in the first stage, while in the second stage the CIGS layer is formed by rapid heating with Se confinement. High-quality CIGS films with large grains were produced on a full-scale manufacturing line and resulted in high-efficiency, large-form-factor modules. With 14% cell efficiency and 12% module efficiency, HelioVolt started to commercialize the process on its first production line with 20 MW nameplate capacity.

  12. Roll-to-Roll printed large-area all-polymer solar cells with 5% efficiency based on a low crystallinity conjugated polymer blend

    NASA Astrophysics Data System (ADS)

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin; Kurosawa, Tadanori; Yan, Hongping; Wang, Cheng; Toney, Micheal; Bao, Zhenan

    The challenge of continuous printing of high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction (BHJ) solar cells with a stable phase-separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low-crystallinity and mostly amorphous blends. Based on experiments using donors and acceptors with different degrees of crystallinity, our results showed that microphase-separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This methodology of using low-crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. We were able to continuously roll-to-roll slot-die print large-area all-polymer solar cells with power conversion efficiencies of 5%, with combined cell area up to 10 cm². This is among the highest efficiencies realized with R2R-coated active-layer organic materials on flexible substrates. DOE BRIDGE SunShot program. Office of Naval Research.

  13. Roll-to-Roll Printed Large-Area All-Polymer Solar Cells with 5% Efficiency Based on a Low Crystallinity Conjugated Polymer Blend

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin

    The challenge of continuous printing in high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity, and mostly amorphous blends. Based on experiments using donors and acceptors with different degree of crystallinity, the results show that microphase separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This particular methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. Large-area all-polymer solar cells are continuously roll-to-roll slot die printed with power conversion efficiencies of 5%, with combined cell area up to 10 cm². This is among the highest efficiencies realized with R2R-coated active layer organic materials on flexible substrate.

  14. Roll-to-Roll Printed Large-Area All-Polymer Solar Cells with 5% Efficiency Based on a Low Crystallinity Conjugated Polymer Blend

    DOE PAGES

    Gu, Xiaodan; Zhou, Yan; Gu, Kevin; ...

    2017-03-07

    The challenge of continuous printing in high-efficiency large-area organic solar cells is a key limiting factor for their widespread adoption. We present a materials design concept for achieving large-area, solution-coated all-polymer bulk heterojunction solar cells with stable phase separation morphology between the donor and acceptor. The key concept lies in inhibiting strong crystallization of donor and acceptor polymers, thus forming intermixed, low crystallinity, and mostly amorphous blends. Based on experiments using donors and acceptors with different degree of crystallinity, the results show that microphase separated donor and acceptor domain sizes are inversely proportional to the crystallinity of the conjugated polymers. This particular methodology of using low crystallinity donors and acceptors has the added benefit of forming a consistent and robust morphology that is insensitive to different processing conditions, allowing one to easily scale up the printing process from a small-scale solution shearing coater to a large-scale continuous roll-to-roll (R2R) printer. Large-area all-polymer solar cells are continuously roll-to-roll slot die printed with power conversion efficiencies of 5%, with combined cell area up to 10 cm². This is among the highest efficiencies realized with R2R-coated active layer organic materials on flexible substrate.

  15. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
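
    The classical baseline the paper accelerates can be written down directly for a small truncated state space: a CTMC's transition probabilities are P(t) = exp(Qt), which is exactly the matrix exponentiation the abstract calls infeasible at scale. The linear birth-death rates below are illustrative, not the paper's models.

    ```python
    import numpy as np
    from scipy.linalg import expm

    birth, death, n = 0.4, 0.25, 50          # rates; truncated population size
    Q = np.zeros((n, n))
    for k in range(n):
        if k + 1 < n:
            Q[k, k + 1] = birth * k          # birth at rate k * birth
        if k >= 1:
            Q[k, k - 1] = death * k          # death at rate k * death
        Q[k, k] = -Q[k].sum()                # rows of a rate matrix sum to 0

    P = expm(Q * 1.0)                        # transition matrix at t = 1
    print(P[5].sum())                        # each row sums to ~1
    ```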

  16. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  17. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest-neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with smaller indexing size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since running time efficiency must be balanced against similarity search accuracy. Our similarity search method, G-hash, provides a new way to perform similarity search in chemical databases, and an experimental study validates its utility.
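
    A toy sketch of the general hash-to-vector idea, as an illustration of the approach rather than the G-hash algorithm itself: node-neighborhood labels are hashed into a fixed-length vector, and molecules are then compared by a normalized inner product. The labels below are invented.

    ```python
    import numpy as np

    def hashed_fingerprint(node_labels, dim=64):
        v = np.zeros(dim)
        for lab in node_labels:
            v[hash(lab) % dim] += 1.0        # hash table as fixed-size bins
        return v

    mol_a = hashed_fingerprint(["C", "C", "O", "N-ring"])
    mol_b = hashed_fingerprint(["C", "O", "O", "N-ring"])
    sim = mol_a @ mol_b / (np.linalg.norm(mol_a) * np.linalg.norm(mol_b))
    print(f"similarity: {sim:.2f}")
    ```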

  18. Full Quantum Dynamics Simulation of a Realistic Molecular System Using the Adaptive Time-Dependent Density Matrix Renormalization Group Method.

    PubMed

    Yao, Yao; Sun, Ke-Wei; Luo, Zhen; Ma, Haibo

    2018-01-18

    The accurate theoretical interpretation of ultrafast time-resolved spectroscopy experiments relies on full quantum dynamics simulations of the investigated system, which are nevertheless computationally prohibitive for realistic molecular systems with a large number of electronic and/or vibrational degrees of freedom. In this work, we propose a unitary transformation approach for realistic vibronic Hamiltonians that can be handled by the adaptive time-dependent density matrix renormalization group (t-DMRG) method to efficiently evolve the nonadiabatic dynamics of a large molecular system. We demonstrate the accuracy and efficiency of this approach with an example of simulating the exciton dissociation process within an oligothiophene/fullerene heterojunction, indicating that t-DMRG can be a promising method for full quantum dynamics simulation in large chemical systems. Moreover, it is also shown that the proper vibronic features in the ultrafast electronic process can be obtained by simulating the two-dimensional (2D) electronic spectrum by virtue of the high computational efficiency of the t-DMRG method.

  19. Design of an efficient music-speech discriminator.

    PubMed

    Tardón, Lorenzo J; Sammartino, Simone; Barbancho, Isabel

    2010-01-01

    In this paper, the design of a simple and efficient music-speech discriminator is addressed for large audio data sets in which advanced music playing techniques are taught and voice and music are intrinsically interleaved. In the process, a number of features used in speech-music discrimination are defined and evaluated over the available data set. Specifically, the data set contains pieces of classical music played with different and unspecified instruments (or even with lyrics) and the voice of a teacher (a top music performer), or even the overlapping voices of the translator and other persons. After an initial test of the performance of the implemented features, a selection process is started, taking into account the type of classifier selected beforehand, to achieve good discrimination performance and computational efficiency, as shown in the experiments. The discrimination application has been defined and tested on a large data set supplied by Fundacion Albeniz, containing a large variety of classical music pieces played with different instruments, which include comments and speeches of famous performers.

  20. The photobiological production of hydrogen: potential efficiency and effectiveness as a renewable fuel.

    PubMed

    Prince, Roger C; Kheshgi, Haroon S

    2005-01-01

    Photosynthetic microorganisms can produce hydrogen when illuminated, and there has been considerable interest in developing this to a commercially viable process. Its appealing aspects include the fact that the hydrogen would come from water, and that the process might be more energetically efficient than growing, harvesting, and processing crops. We review current knowledge about photobiological hydrogen production, and identify and discuss some of the areas where scientific and technical breakthroughs are essential for commercialization. First we describe the underlying biochemistry of the process, and identify some opportunities for improving photobiological hydrogen production at the molecular level. Then we address the fundamental quantum efficiency of the various processes that have been suggested, technological issues surrounding large-scale growth of hydrogen-producing microorganisms, and the scale and efficiency on which this would have to be practiced to make a significant contribution to current energy use.

  1. Nitrogen expander cycles for large capacity liquefaction of natural gas

    NASA Astrophysics Data System (ADS)

    Chang, Ho-Myung; Park, Jae Hoon; Gwak, Kyung Hyun; Choe, Kun Hyung

    2014-01-01

    A thermodynamic study is performed on nitrogen expander cycles for large-capacity liquefaction of natural gas. In order to substantially increase the capacity, a Brayton refrigeration cycle with a nitrogen expander was recently added to the cold end of the well-established propane pre-cooled mixed-refrigerant (C3-MR) process. Similar modifications with a nitrogen expander cycle are extensively investigated for a variety of cycle configurations. The existing and modified cycles are simulated with commercial process software (Aspen HYSYS) based on selected specifications. The results are compared in terms of thermodynamic efficiency, liquefaction capacity, and estimated size of heat exchangers. The combination of C3-MR with partial regeneration and pre-cooling of the nitrogen expander cycle is recommended as having great potential for high efficiency and large capacity.

  2. Efficiency and economics of large scale hydrogen liquefaction. [for future generation aircraft requirements

    NASA Technical Reports Server (NTRS)

    Baker, C. R.

    1975-01-01

    Liquid hydrogen is being considered as a substitute for conventional hydrocarbon-based fuels for future generations of commercial jet aircraft. Its acceptance will depend, in part, upon the technology and cost of liquefaction. The process and economic requirements for providing a sufficient quantity of liquid hydrogen to service a major airport are described. The design is supported by thermodynamic studies which determine the effect of process arrangement and operating parameters on the process efficiency and work of liquefaction.

  3. Improved efficiency of a large-area Cu(In,Ga)Se₂ solar cell by a nontoxic hydrogen-assisted solid Se vapor selenization process.

    PubMed

    Wu, Tsung-Ta; Hu, Fan; Huang, Jyun-Hong; Chang, Chia-ho; Lai, Chih-chung; Yen, Yu-Ting; Huang, Hou-Ying; Hong, Hwen-Fen; Wang, Zhiming M; Shen, Chang-Hong; Shieh, Jia-Min; Chueh, Yu-Lun

    2014-04-09

    A nontoxic hydrogen-assisted solid Se vapor selenization (HASVS) technique to achieve a large-area (40 × 30 cm²) Cu(In,Ga)Se2 (CIGS) solar panel with efficiency enhanced from 7.1 to 10.8% (12.0% for the active area) was demonstrated. The remarkable improvement in efficiency and fill factor comes from improved open-circuit voltage (Voc) and reduced dark current due to (1) decreased interface recombination arising from the formation of a widened buried homojunction with n-type Cd(Cu) participation and (2) enhanced separation of electron and hole carriers resulting from the accumulation of Na atoms on the surface of the CIGS film. The effects of hydrogen-assisted Se vapor selenization on microstructural, compositional, and electrical characteristics, including the interdiffusion of atoms and the formation of the buried homojunction, were examined in detail. This methodology can also be applied to CIS (CuInSe2) thin-film solar cells, with efficiencies enhanced from 5.3% to 8.5% (9.4% for the active area), and provides a facile approach to improve the quality of CIGS and stimulate nontoxic processing in the large-scale CIGS PV industry.

  4. High Quantum Efficiency OLED Lighting Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiang, Joseph

    The overall goal of the program was to apply improvements in light outcoupling technology to a practical large-area plastic luminaire, and thus enable the product vision of an extremely thin form factor, high-efficiency, large-area light source. The target substrate was plastic, and the baseline device was operating at 35 LPW at the start of the program. The target of the program was a >2x improvement in LPW efficacy, and the overall amount of light to be delivered was relatively high, 900 lumens. Despite the extremely difficult challenges associated with scaling up a wet solution process on plastic substrates, the program was able to make substantial progress. A small-molecule wet solution process was successfully implemented on plastic substrates with almost no loss in efficiency in transitioning from laboratory-scale glass to large-area plastic substrates. By transitioning to a small-molecule-based process, the LPW entitlement increased from 35 LPW to 60 LPW. A further 10% improvement in outcoupling efficiency was demonstrated via the use of a highly reflecting cathode, which reduced absorptive loss in the OLED device. The calculated potential improvement is in some cases even larger, ~30%, and thus there is considerable room for optimism in improving the net light coupling efficacy, provided absorptive loss mechanisms are eliminated. Further improvements are possible if scattering schemes such as the silver nanowire based hard-coat structure are fully developed. The wet coating processes were successfully scaled to large-area plastic substrates and resulted in the construction of a 900 lumen luminaire device.

  5. Record Efficiency on Large Area P-Type Czochralski Silicon Substrates

    NASA Astrophysics Data System (ADS)

    Hallam, Brett; Wenham, Stuart; Lee, Haeseok; Lee, Eunjoo; Lee, Hyunwoo; Kim, Jisun; Shin, Jeongeun; Cho, Kyeongyeon; Kim, Jisoo

    2012-10-01

    In this work we report a world-record, independently confirmed efficiency of 19.4% for a large-area p-type Czochralski-grown solar cell fabricated with a full-area aluminium back surface field. This is achieved using the laser-doped selective emitter solar cell technology on an industrial screen-print production line with the addition of laser doping and light-induced plating equipment. A modified diffusion process is explored in which the emitter is diffused to a sheet resistance of 90 Ω/square and subsequently etched back to 120 Ω/square. This results in a lower surface concentration of phosphorus compared to that of emitters diffused directly to 120 Ω/square. The modified diffusion process thereby reduces the conductivity of the surface relative to that of the heavily diffused laser-doped contacts and avoids parasitic plating, resulting in an average absolute increase in efficiency of 0.4% compared to cells fabricated without an emitter etch-back process.

  6. Our Application of ISO 9000.

    ERIC Educational Resources Information Center

    Hammond, Jane

    2000-01-01

    Since a large, underfunded urban Colorado district initiated ISO 9000 reforms, administrators and staff have reviewed 14 central-office departments' processes to improve efficiency and enhance student outcomes. Jefferson County has saved $900,000 annually on purchasing processes, developed a quality curriculum-development process, and improved…

  7. Efficient development and processing of thermal math models of very large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.

    1993-01-01

    As the spacecraft moves along its orbit, the truss members are subjected to direct and reflected solar, albedo, and planetary infrared (IR) heating, as well as IR heating and shadowing from other spacecraft components. This is a transient process with continuously changing heating loads and shadowing effects. The resulting nonuniform temperature distribution may cause nonuniform thermal expansion, deflection and stress in the truss elements, truss warping and thermal distortions. There are three challenges in the thermal-structural analysis of large truss structures: the first is the development of the thermal and structural math models, the second is model processing, and the third is the data transfer between the models. All three tasks require considerable time and computer resources because of the very large number of components involved. To address these challenges, a series of techniques for automated thermal math modeling and efficient processing of very large space truss structures were developed. In the process, the finite element and finite difference methods are interfaced. A very substantial reduction in the quantity of computations was achieved while assuring the desired accuracy of the results. The techniques are illustrated on the thermal analysis of a segment of the Space Station main truss.

  8. Room-Temperature and Solution-Processable Cu-Doped Nickel Oxide Nanoparticles for Efficient Hole-Transport Layers of Flexible Large-Area Perovskite Solar Cells.

    PubMed

    He, Qiqi; Yao, Kai; Wang, Xiaofeng; Xia, Xuefeng; Leng, Shifeng; Li, Fan

    2017-12-06

    Flexible perovskite solar cells (PSCs) using plastic substrates have become one of the most attractive research areas in the field of thin-film solar cells. Low-temperature, solution-processable nanoparticles (NPs) enable the fabrication of semiconductor thin films in a simple and low-cost approach to function as charge-selective layers in flexible PSCs. Here, we synthesized phase-pure p-type Cu-doped NiOₓ NPs with good electrical properties, which can be processed into smooth, pinhole-free, and efficient hole transport layers (HTLs) with large-area uniformity over a wide range of film thickness using a room-temperature solution-processing technique. Such a high-quality inorganic HTL allows for the fabrication of flexible PSCs with an active area >1 cm², which have a power conversion efficiency of 15.01% without hysteresis. Moreover, the Cu/NiOₓ NP-based flexible devices also demonstrate excellent air stability and mechanical stability compared to their counterparts fabricated on pristine NiOₓ films. This work will contribute to the evolution of upscaling flexible PSCs with a simple fabrication process and high device performance.

  9. Production technology for high efficiency ion implanted solar cells

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, A. R.; Minnucci, J. A.; Greenwald, A. C.; Josephs, R. H.

    1978-01-01

    Ion implantation is being developed for high volume automated production of silicon solar cells. An implanter designed for solar cell processing and able to properly implant up to 300 4-inch wafers per hour is now operational. A machine to implant 180 sq m/hr of solar cell material has been designed. Implanted silicon solar cells with efficiencies exceeding 16% AM1 are now being produced and higher efficiencies are expected. Ion implantation and transient processing by pulsed electron beams are being integrated with electrostatic bonding to accomplish a simple method for large scale, low cost production of high efficiency solar cell arrays.

  10. Information processing using a single dynamical node as complex system

    PubMed Central

    Appeltant, L.; Soriano, M.C.; Van der Sande, G.; Danckaert, J.; Massar, S.; Dambre, J.; Schrauwen, B.; Mirasso, C.R.; Fischer, I.

    2011-01-01

    Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing. PMID:21915110
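
    The single-node reservoir described above is compact enough to sketch. Below is a minimal numpy illustration, under assumed parameter values (mask amplitudes, feedback gain, toy input stream), of how one nonlinear node time-multiplexed over N "virtual nodes" along its delay line yields a state matrix for an ordinary linear readout; it is a schematic of the concept, not the paper's electronic implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 50                                  # virtual nodes along the delay line
      mask = rng.choice([-0.1, 0.1], size=N)  # fixed random input mask
      u = rng.uniform(0.0, 0.5, size=200)     # scalar input stream (toy data)

      x = np.zeros(N)                         # delay-line contents = reservoir state
      states = np.empty((len(u), N))
      for t, u_t in enumerate(u):
          # One pass of the single nonlinear node over all virtual nodes:
          # each virtual node mixes its delayed value with the masked input.
          for j in range(N):
              x[j] = np.tanh(0.8 * x[j] + mask[j] * u_t)
          states[t] = x

      # 'states' is the (time x virtual-node) feature matrix; a task-specific
      # target would be fit with a linear (e.g. ridge-regression) readout.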

  11. Incremental terrain processing for large digital elevation models

    NASA Astrophysics Data System (ADS)

    Ye, Z.

    2012-12-01

    Efficient analyses of large digital elevation models (DEMs) require generation of additional DEM artifacts such as flow direction, flow accumulation and other DEM derivatives. When the DEMs to be analyzed have a large number of grid cells (usually > 1,000,000,000), the generation of these DEM derivatives is either impractical (it takes too long) or impossible (software is incapable of processing such a large number of cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach in which the overall DEM is partitioned into smaller processing units that can be efficiently processed. The processed DEM derivatives for each partition can then be either mosaicked back into a single large entity or managed at the partition level. For dendritic terrain morphologies, the way in which partitions are to be derived and the order in which they are to be processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established upfront due to the size issues. This paper describes a procedure that solves this problem: (1) Resample the original large DEM grid so that the total number of cells is reduced to a level for which the drainage pattern can be established. (2) Run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system. (3) Define the processing units and their processing order based on the river and catchment system created in step (2). (4) Based on the processing order, apply the analysis, i.e., the flow accumulation operation, to each of the processing units at the full-resolution DEM. (5) As each processing unit is processed in the order defined in (3), compare the resulting drainage pattern with the drainage pattern established at the coarser scale and adjust the drainage boundaries and rivers if necessary.
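
    As a concrete anchor for steps (1) and (2), the following Python sketch block-averages a toy DEM and computes D8 flow directions on the coarse grid. The synthetic DEM, the block-mean resampling, and the omission of diagonal distance weighting are simplifying assumptions; partition delineation and the full-resolution pass of steps (3) to (5) are not shown.

      import numpy as np

      def resample(dem, f):
          """Block-average a 2-D DEM by an integer factor f (step 1)."""
          r, c = dem.shape[0] // f * f, dem.shape[1] // f * f
          return dem[:r, :c].reshape(r // f, f, c // f, f).mean(axis=(1, 3))

      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]

      def d8_flow_direction(dem):
          """Index (0-7) of the steepest downhill neighbor for each interior
          cell, or -1 for a pit (step 2; diagonal distance weighting omitted)."""
          fdir = np.full(dem.shape, -1, dtype=int)
          for i in range(1, dem.shape[0] - 1):
              for j in range(1, dem.shape[1] - 1):
                  drops = [dem[i, j] - dem[i + di, j + dj] for di, dj in OFFSETS]
                  k = int(np.argmax(drops))
                  if drops[k] > 0:
                      fdir[i, j] = k
          return fdir

      dem = np.random.default_rng(1).random((400, 400)).cumsum(axis=0)  # toy DEM
      coarse = resample(dem, 10)          # drainage pattern is found at this scale
      print(d8_flow_direction(coarse)[:3, :3])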

  12. Large-area copper indium diselenide (CIS) process, control and manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillespie, T.J.; Lanning, B.R.; Marshall, C.H.

    1997-12-31

    Lockheed Martin Astronautics (LMA) has developed a large-area (30 × 30 cm) sequential CIS manufacturing approach amenable to low-cost photovoltaics (PV) production. A prototype CIS manufacturing system has been designed and built with compositional uniformity (Cu/In ratio) verified within ±4 atomic percent over the 30 × 30 cm area. CIS device efficiencies have been measured by the National Renewable Energy Laboratory (NREL) at 7% on a flexible non-sodium-containing substrate and 10% on a soda-lime-silica (SLS) glass substrate. Critical elements of the manufacturing capability include the CIS sequential process selection, uniform large-area material deposition, and in-situ process control. Details of the process and large-area manufacturing approach are discussed and results presented.

  13. CORDIC-based digital signal processing (DSP) element for adaptive signal processing

    NASA Astrophysics Data System (ADS)

    Bolstad, Gregory D.; Neeld, Kenneth B.

    1995-04-01

    The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based application-specific DSP element that, when connected in a linear array, can perform extremely high-throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least-squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach or, more commonly, programmable DSP or microprocessor approaches. The custom-logic methods can be efficient but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art and, more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to implement physically. HAWC has direct applications in radar, sonar, communications, and image processing, as well as in many other types of systems.
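
    For readers unfamiliar with CORDIC itself, a minimal Python sketch of the textbook rotation mode follows: each iteration rotates by plus or minus atan(2^-i) using only shift-and-add style operations, with a precomputed gain factor K correcting the magnitude. This illustrates the arithmetic primitive, not the HAWC element.

      import math

      # Precompute the rotation angles atan(2^-i) and the CORDIC gain.
      N = 32
      ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
      K = 1.0
      for i in range(N):
          K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # magnitude correction factor

      def cordic_sin_cos(theta):
          """Rotate (1, 0) by theta with shift-and-add steps; |theta| < ~1.74 rad."""
          x, y, z = 1.0, 0.0, theta
          for i in range(N):
              d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              z -= d * ANGLES[i]
          return x * K, y * K                      # (cos(theta), sin(theta))

      print(cordic_sin_cos(math.pi / 6))           # ~ (0.8660, 0.5000)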

  14. Two pass method and radiation interchange processing when applied to thermal-structural analysis of large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.; Rogers, Karen M.

    1993-01-01

    A method of efficient and automated thermal-structural processing of very large space structures is presented. The method interfaces the finite element and finite difference techniques. It also results in a pronounced reduction of the quantity of computations, computer resources and manpower required for the task, while assuring the desired accuracy of the results.

  15. Production of High Quality Die Steels from Large ESR Slab Ingots

    NASA Astrophysics Data System (ADS)

    Geng, Xin; Jiang, Zhou-hua; Li, Hua-bing; Liu, Fu-bin; Li, Xing

    With the rapid development of the manufacturing industry in China, there is great need for high-quality, large-tonnage die-steel slab ingots such as P20 and WSM718R. The solidification structure and size of large slab ingots produced with conventional methods are not satisfactory. However, large slab ingots manufactured by the ESR process have a good solidification structure and sufficient section size. In the present research, the new slab ESR process was used to produce large die-steel slab ingots with a maximum size of 980 × 2000 × 3200 mm. Compact and sound ingots can be manufactured by the slab ESR process, and ultra-heavy plates with a maximum thickness of 410 mm can be obtained after rolling the 49-ton ingots. By eliminating the cogging and forging steps, the large-slab ESR process greatly increases yield and production efficiency and markedly cuts product costs.

  16. On Efficient Multigrid Methods for Materials Processing Flows with Small Particles

    NASA Technical Reports Server (NTRS)

    Thomas, James (Technical Monitor); Diskin, Boris; Harik, Vasyl Michael

    2004-01-01

    Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical detail. The efficiency is achieved by interactively employing different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for multiscale modeling of flows with small particles of various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
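
    To make "different discretizations on different scales" concrete, here is a minimal two-grid cycle for the 1-D Poisson equation in Python. The weighted-Jacobi smoother, full-weighting restriction, linear prolongation, and model problem are illustrative textbook choices, not the particle-flow solver of the paper.

      import numpy as np

      def jacobi(u, f, h, iters=3, w=2.0 / 3.0):
          """Weighted-Jacobi smoothing for -u'' = f on a uniform grid."""
          for _ in range(iters):
              u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
          return u

      def two_grid(u, f, h):
          u = jacobi(u, f, h)                      # pre-smooth on the fine grid
          r = np.zeros_like(u)                     # residual r = f - A u
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2
          rc = r[::2].copy()                       # full-weighting restriction
          rc[1:-1] = 0.25 * (r[1:-3:2] + 2 * r[2:-2:2] + r[3:-1:2])
          n_c = len(rc)
          A_c = (2 * np.eye(n_c - 2) - np.eye(n_c - 2, k=1)
                 - np.eye(n_c - 2, k=-1)) / (2 * h) ** 2
          e_c = np.zeros(n_c)
          e_c[1:-1] = np.linalg.solve(A_c, rc[1:-1])   # exact coarse-grid solve
          e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e_c)
          return jacobi(u + e, f, h)               # correct, then post-smooth

      n = 129                                      # fine grid (odd => nested coarse)
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution is sin(pi x)
      u = np.zeros(n)
      for _ in range(20):
          u = two_grid(u, f, h)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())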

  17. Real-Time Measurement of Machine Efficiency during Inertia Friction Welding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Daniel Joseph; Mahaffey, David; Senkov, Oleg

    Process efficiency is a crucial parameter for inertia friction welding (IFW) that is largely unknown at the present time. A new method has been developed to determine the transient profile of the IFW process efficiency by comparing the workpiece torque used to heat and deform the joint region to the total torque. Particularly, the former is measured by a torque load cell attached to the non-rotating workpiece while the latter is calculated from the deceleration rate of flywheel rotation. The experimentally measured process efficiency for IFW of AISI 1018 steel rods is validated independently by the upset length estimated from an analytical equation of heat balance and the flash profile calculated from a finite element based thermal stress model. The transient behaviors of torque and efficiency during IFW are discussed based on the energy loss to machine bearings and the bond formation at the joint interface.
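
    The torque balance described above is simple enough to state in code. The numpy sketch below uses invented numbers (an assumed flywheel inertia, a synthetic speed decay, and a synthetic load-cell trace) to show how the total torque is recovered from the flywheel deceleration, torque_total = I * |d omega / dt|, and how the transient efficiency follows as the ratio of workpiece torque to total torque.

      import numpy as np

      I = 2.4                                  # flywheel moment of inertia, kg*m^2 (assumed)
      t = np.linspace(0.0, 3.0, 301)           # time since spindle release, s
      omega = 150.0 * np.exp(-t)               # synthetic flywheel speed, rad/s
      torque_weld = 90.0 * (1 - np.exp(-3 * t)) * np.exp(-t)  # synthetic load-cell torque, N*m

      torque_total = I * np.abs(np.gradient(omega, t))  # torque from flywheel deceleration
      eta = torque_weld / torque_total         # transient process efficiency
      print(f"efficiency at t = 1 s: {eta[100]:.2f}")   # ~0.24 for these numbers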

  18. Analyzing Team Based Engineering Design Process in Computer Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Lee, Dong-Kuk; Lee, Eun-Sang

    2016-01-01

    The engineering design process has been largely implemented in a collaborative project format. Recently, technological advancement has helped collaborative problem solving processes such as engineering design to have efficient implementation using computers or online technology. In this study, we investigated college students' interaction and…

  19. LARGE—A Plasma Torch for Surface Chemistry Applications and CVD Processes—A Status Report

    NASA Astrophysics Data System (ADS)

    Zimmermann, Stephan; Theophile, Eckart; Landes, Klaus; Schein, Jochen

    2008-12-01

    The LARGE (LONG ARC GENERATOR) is a new-generation DC plasma torch featuring an extended arc operated with a perpendicular gas flow to create a wide (up to 45 cm) plasma jet well suited for large-area plasma processing. Using plasma diagnostic systems such as high-speed imaging, enthalpy probes, emission spectroscopy, and tomography, the characteristics of the plasma jet produced by the LARGE have been measured and sources of instability have been identified. With a simple model/simulation of the LARGE III-150 system and numerous experimental results, a new nozzle configuration and geometry (LARGE IV-150) has been designed, which produces a more homogeneous plasma jet. These improvements enable the standard applications of the LARGE plasma torch (CVD coating and surface activation) to operate with higher efficiency.

  20. Materials interface engineering for solution-processed photovoltaics.

    PubMed

    Graetzel, Michael; Janssen, René A J; Mitzi, David B; Sargent, Edward H

    2012-08-16

    Advances in solar photovoltaics are urgently needed to increase the performance and reduce the cost of harvesting solar power. Solution-processed photovoltaics are cost-effective to manufacture and offer the potential for physical flexibility. Rapid progress in their development has increased their solar-power conversion efficiencies. The nanometre (electron) and micrometre (photon) scale interfaces between the crystalline domains that make up solution-processed solar cells are crucial for efficient charge transport. These interfaces include large surface area junctions between photoelectron donors and acceptors, the intralayer grain boundaries within the absorber, and the interfaces between photoactive layers and the top and bottom contacts. Controlling the collection and minimizing the trapping of charge carriers at these boundaries is crucial to efficiency.

  1. Low-Temperature Preparation of Tungsten Oxide Anode Buffer Layer via Ultrasonic Spray Pyrolysis Method for Large-Area Organic Solar Cells.

    PubMed

    Ji, Ran; Zheng, Ding; Zhou, Chang; Cheng, Jiang; Yu, Junsheng; Li, Lu

    2017-07-18

    Tungsten oxide (WO₃) is prepared by a low-temperature ultrasonic spray pyrolysis method in air atmosphere and used as an anode buffer layer (ABL) for organic solar cells (OSCs). The properties of the WO₃ transition metal oxide material as well as the mechanism of the ultrasonic spray pyrolysis process are investigated. The results show that the ultrasonic spray pyrolyzed WO₃ ABL exhibits low roughness, a matched energy level, and high conductivity, which results in high charge transport efficiency and suppressed recombination in OSCs. As a result, compared to OSCs based on vacuum thermally evaporated WO₃, a higher power conversion efficiency of 3.63% is reached with the low-temperature ultrasonic spray pyrolyzed WO₃ ABL. Furthermore, mostly spray-coated large-area OSCs were fabricated, with a power conversion efficiency of ~1%. This work significantly enhances our understanding of the preparation and application of low-temperature-processed WO₃, and highlights the potential of large-area, all-spray-coated OSCs for sustainable commercial fabrication.

  2. Solution processable inverted structure ZnO-organic hybrid heterojunction white LEDs

    NASA Astrophysics Data System (ADS)

    Bano, N.; Hussain, I.; Soomro, M. Y.; EL-Naggar, A. M.; Albassam, A. A.

    2018-05-01

    Improving luminance efficiency and colour purity are the most important challenges for zinc oxide (ZnO)-organic hybrid heterojunction light emitting diodes (LEDs), limiting their large-area applications. When ZnO-organic hybrid heterojunction white LEDs are fabricated by a hydrothermal method, it is difficult to obtain pure and stable blue emission from PFO due to the presence of an undesirable green emission. In this paper, we present an inverted-structure ZnO-organic hybrid heterojunction LED that avoids the green emission from PFO, which mainly originates during device processing. With this configuration, each ZnO nanorod (NR) forms a discrete p-n junction; therefore, large-area white LEDs can be designed without compromising the junction area. The configuration used for this novel structure is glass/ZnO NRs/PFO/PEDOT:PSS/L-ITO, which enables the development of efficient, large-area and low-cost hybrid heterojunction LEDs. Inverted-structure ZnO-organic hybrid heterojunction white LEDs offer several improvements in terms of brightness, size, colour, external quantum efficiency and wider applicability compared to normal-architecture LEDs.

  3. Low-Temperature Preparation of Tungsten Oxide Anode Buffer Layer via Ultrasonic Spray Pyrolysis Method for Large-Area Organic Solar Cells

    PubMed Central

    Ji, Ran; Zheng, Ding; Zhou, Chang; Cheng, Jiang; Yu, Junsheng; Li, Lu

    2017-01-01

    Tungsten oxide (WO₃) is prepared by a low-temperature ultrasonic spray pyrolysis method in air atmosphere and used as an anode buffer layer (ABL) for organic solar cells (OSCs). The properties of the WO₃ transition metal oxide material as well as the mechanism of the ultrasonic spray pyrolysis process are investigated. The results show that the ultrasonic spray pyrolyzed WO₃ ABL exhibits low roughness, a matched energy level, and high conductivity, which results in high charge transport efficiency and suppressed recombination in OSCs. As a result, compared to OSCs based on vacuum thermally evaporated WO₃, a higher power conversion efficiency of 3.63% is reached with the low-temperature ultrasonic spray pyrolyzed WO₃ ABL. Furthermore, mostly spray-coated large-area OSCs were fabricated, with a power conversion efficiency of ~1%. This work significantly enhances our understanding of the preparation and application of low-temperature-processed WO₃, and highlights the potential of large-area, all-spray-coated OSCs for sustainable commercial fabrication. PMID:28773177

  4. Large guanidinium cation mixed with methylammonium in lead iodide perovskites for 19% efficient solar cells

    NASA Astrophysics Data System (ADS)

    Jodlowski, Alexander D.; Roldán-Carmona, Cristina; Grancini, Giulia; Salado, Manuel; Ralaiarisoa, Maryline; Ahmad, Shahzada; Koch, Norbert; Camacho, Luis; de Miguel, Gustavo; Nazeeruddin, Mohammad Khaja

    2017-12-01

    Organic-inorganic lead halide perovskites have shown photovoltaic performances above 20% in a range of solar cell architectures while offering simple and low-cost processability. Despite the multiple ionic compositions that have been reported so far, the presence of organic constituents is an essential element in all of the high-efficiency formulations, with the methylammonium and formamidinium cations being the sole efficient options available to date. In this study, we demonstrate improved material stability after the incorporation of a large organic cation, guanidinium, into the MAPbI3 crystal structure, which delivers average power conversion efficiencies over 19%, and stabilized performance for 1,000 h under continuous light illumination, a fundamental step within the perovskite field.

  5. Efficient characterisation of large deviations using population dynamics

    NASA Astrophysics Data System (ADS)

    Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.

    2018-05-01

    We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
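
    A minimal Python sketch of the cloning (population-dynamics) algorithm may help orient readers; it uses a toy two-state Markov chain rather than the exclusion process of the paper, and all parameter values are assumptions. At each step every clone evolves, receives a weight e^(s*a) attached to the time-additive observable, and the population is resampled in proportion to the weights; the scaled cumulant generating function is read off from the accumulated mean cloning factor.

      import numpy as np

      rng = np.random.default_rng(2)
      Nc, T, s = 1000, 2000, 0.5          # population size, steps, tilting parameter
      p = 0.3                             # flip probability of the two-state chain

      state = rng.integers(0, 2, size=Nc) # ensemble of clones
      log_Z = 0.0
      for _ in range(T):
          flip = rng.random(Nc) < p
          state = np.where(flip, 1 - state, state)
          a = state                       # additive observable: time spent in state 1
          w = np.exp(s * a)               # cloning weights
          log_Z += np.log(w.mean())       # running log of the mean cloning factor
          # resample the population in proportion to the weights (clone / prune)
          idx = rng.choice(Nc, size=Nc, p=w / w.sum())
          state = state[idx]

      print("SCGF estimate:", log_Z / T)  # psi(s) = lim (1/T) ln <exp(s A_T)>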

  6. A Novel Approach of Indexing and Retrieving Spatial Polygons for Efficient Spatial Region Queries

    NASA Astrophysics Data System (ADS)

    Zhao, J. H.; Wang, X. Z.; Wang, F. Y.; Shen, Z. H.; Zhou, Y. C.; Wang, Y. L.

    2017-10-01

    Spatial region queries are more and more widely used in web-based applications, so mechanisms that provide efficient query processing over geospatial data are essential. However, due to the massive geospatial data volume, heavy geometric computation, and high access concurrency, it is difficult to respond in real time. Spatial indexes are usually used in this situation. In this paper, based on the k-d tree, we introduce a distributed KD-Tree (DKD-Tree) suitable for polygon data, together with a two-step query algorithm. The spatial index construction is recursive and iterative, and the query is an in-memory process. Both the index and query methods can be processed in parallel, and are implemented on HDFS, Spark and Redis. Experiments on a large volume of remote sensing image metadata have been carried out, and the advantages of our method are investigated by comparison with spatial region queries executed on PostgreSQL and PostGIS. Results show that our approach not only greatly improves the efficiency of spatial region queries, but also has good scalability. Moreover, the two-step spatial range query algorithm can also save cluster resources to support a large number of concurrent queries. This method is therefore very useful when building large geographic information systems.
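
    A compact Python sketch of the k-d tree mechanics underlying such an index follows, for points rather than polygons; the distributed HDFS/Spark/Redis layers and the two-step refinement of the paper are beyond a short example. The tree is built by median splits on alternating axes, and a rectangular region query prunes whole subtrees.

      import random

      def build(points, depth=0):
          """Recursively build a k-d tree by median split on alternating axes."""
          if not points:
              return None
          axis = depth % 2
          points = sorted(points, key=lambda p: p[axis])
          m = len(points) // 2
          return {"pt": points[m], "axis": axis,
                  "lo": build(points[:m], depth + 1),
                  "hi": build(points[m + 1:], depth + 1)}

      def range_query(node, rect, out):
          """Collect points inside rect = (xmin, xmax, ymin, ymax), pruning subtrees."""
          if node is None:
              return
          x, y = node["pt"]
          if rect[0] <= x <= rect[1] and rect[2] <= y <= rect[3]:
              out.append(node["pt"])
          lo_bound = rect[2 * node["axis"]]
          hi_bound = rect[2 * node["axis"] + 1]
          if node["pt"][node["axis"]] >= lo_bound:   # left subtree may intersect
              range_query(node["lo"], rect, out)
          if node["pt"][node["axis"]] <= hi_bound:   # right subtree may intersect
              range_query(node["hi"], rect, out)

      pts = [(random.random(), random.random()) for _ in range(1000)]
      tree = build(pts)
      hits = []
      range_query(tree, (0.4, 0.6, 0.4, 0.6), hits)
      print(len(hits), "points in the query region")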

  7. Pilot line report: Development of a high efficiency thin silicon solar cell

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Experimental technology advances were implemented to increase the conversion efficiency of ultrathin 2 cm x 2 cm cells, to demonstrate a capability for fabricating such cells at a rate of 10,000 per month, and to fabricate 200 large-area ultrathin cells to determine their feasibility of manufacture. A production rate of 10,000 50-µm cells per month with lot-average AM0 efficiencies of 11.5% was demonstrated, with peak efficiencies of 13.5% obtained. Losses in most stages of the processing were minimized, the remaining exceptions being in the photolithography and metallization steps for front contact generation and in breakage handling. The 5 cm x 5 cm cells were fabricated with a peak yield in excess of 40% for over 10% AM0 efficiency. Greater fabrication volume is needed to fully evaluate the expected yield and efficiency levels for large cells.

  8. The Matsu Wheel: A Cloud-Based Framework for Efficient Analysis and Reanalysis of Earth Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Patterson, Maria T.; Anderson, Nicholas; Bennett, Collin; Bruggemann, Jacob; Grossman, Robert L.; Handy, Matthew; Ly, Vuong; Mandl, Daniel J.; Pederson, Shane; Pivarski, James

    2016-01-01

    Project Matsu is a collaboration between the Open Commons Consortium and NASA focused on developing open source technology for cloud-based processing of Earth satellite imagery, with practical applications to aid in natural disaster detection and relief. Project Matsu has developed an open source cloud-based infrastructure to process, analyze, and reanalyze large collections of hyperspectral satellite image data using OpenStack, Hadoop, MapReduce and related technologies. We describe a framework for efficient analysis of large amounts of data called the Matsu "Wheel." The Matsu Wheel is currently used to process incoming hyperspectral satellite data produced daily by NASA's Earth Observing-1 (EO-1) satellite. The framework allows batches of analytics, scanning for new data, to be applied to data as it flows in. In the Matsu Wheel, the data only need to be accessed and preprocessed once, regardless of the number or types of analytics, which can easily be slotted into the existing framework. The Matsu Wheel system provides a significantly more efficient use of computational resources than alternative methods when the data are large, have high-volume throughput, may require heavy preprocessing, and are typically used for many types of analysis. We also describe our preliminary Wheel analytics, including an anomaly detector for rare spectral signatures or thermal anomalies in hyperspectral data and a land cover classifier that can be used for water and flood detection. Each of these analytics can generate visual reports accessible via the web for the public and interested decision makers. The resulting products of the analytics are also made accessible through an Open Geospatial Consortium (OGC)-compliant Web Map Service (WMS) for further distribution. The Matsu Wheel allows many shared data services to be performed together, efficiently using resources for processing hyperspectral satellite image data and other large environmental datasets that may be analyzed for many purposes.
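
    The "wheel" pattern itself is easy to sketch. In the Python illustration below, each batch of data is read and preprocessed once and every registered analytic is then applied to the shared in-memory result; the analytics and data are placeholders, not Project Matsu code.

      import numpy as np

      def preprocess(raw):
          """Done once per batch, regardless of how many analytics run."""
          return raw.astype(float) / raw.max()

      def anomaly_detector(cube):
          return float((cube > 0.99).mean())      # fraction of extreme pixels

      def land_cover_classifier(cube):
          return float((cube < 0.2).mean())       # e.g. crude water fraction

      ANALYTICS = [anomaly_detector, land_cover_classifier]  # easy to slot more in

      def wheel(batches):
          for raw in batches:                     # scan for newly arrived data
              cube = preprocess(raw)              # single read + preprocess
              yield [a(cube) for a in ANALYTICS]  # all analytics share the result

      batches = (np.random.default_rng(i).integers(0, 4096, (64, 64))
                 for i in range(3))
      for reports in wheel(batches):
          print(reports)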

  9. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database, and they can be used as the basis for identifying different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entire database to be loaded into memory, restricting the amount of data that they can process and making them unable to handle databases with large amounts of data. Those algorithms are also sequential, so their discovery speed leaves room for improvement. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to solve the problem posed to the existing algorithms, which are unable to process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, which existing algorithms were previously unable to process. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
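
    A toy Python sketch of the divide-and-conquer idea, under strong simplifying assumptions: k-mer occurrence counts are computed for disjoint chunks of a synthetic database in parallel worker processes and merged, and k-mers whose merged count is 1 occur in exactly one sequence, serving as crude signature candidates. Real signature discovery also enforces a dissimilarity criterion, which is omitted here.

      from collections import Counter
      from concurrent.futures import ProcessPoolExecutor
      import random

      K = 8

      def chunk_counts(seqs):
          """Count k-mer occurrences within one chunk of the database (one worker)."""
          c = Counter()
          for s in seqs:
              # count each k-mer at most once per sequence
              c.update(set(s[i:i + K] for i in range(len(s) - K + 1)))
          return c

      if __name__ == "__main__":
          random.seed(0)
          db = ["".join(random.choice("ACGT") for _ in range(500))
                for _ in range(400)]
          chunks = [db[i::4] for i in range(4)]           # divide
          total = Counter()
          with ProcessPoolExecutor(max_workers=4) as ex:  # conquer in parallel
              for c in ex.map(chunk_counts, chunks):
                  total += c                              # merge partial results
          signatures = [kmer for kmer, n in total.items() if n == 1]
          print(len(signatures), "candidate signatures")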

  10. Nome Offshore Mining Information

    Science.gov Websites

    Alaska's large mine permitting program pages describe measures to address potential safety concerns, prevent overcrowding, and provide for efficient processing of permits for offshore mining at Nome.

  11. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, driven particularly by clean energy applications, will rise in the near future. As industrial large-scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase to a multiple of today's typical sizes. The main goal is to reduce the total cost of ownership of these plants by increasing energy efficiency with innovative, simple process designs that are optimized in capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, the dimensioning of key equipment for large-scale liquefiers, such as turbines, compressors and heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient and economically viable concepts for large-scale hydrogen liquefaction.

  12. Modeling efficiency at the process level: an examination of the care planning process in nursing homes.

    PubMed

    Lee, Robert H; Bott, Marjorie J; Gajewski, Byron; Taunton, Roma Lee

    2009-02-01

    To examine the efficiency of the care planning process in nursing homes. We collected detailed primary data about the care planning process for a stratified random sample of 107 nursing homes from Kansas and Missouri. We used these data to calculate the average direct cost per care plan and used data on selected deficiencies from the Online Survey Certification and Reporting System to measure the quality of care planning. We then analyzed the efficiency of the assessment process using corrected ordinary least squares (COLS) and data envelopment analysis (DEA). Both approaches suggested that there was considerable inefficiency in the care planning process. The average COLS score was 0.43; the average DEA score was 0.48. The correlation between the two sets of scores was quite high, and there was no indication that lower costs resulted in lower quality. For-profit facilities were significantly more efficient than not-for-profit facilities. Multiple studies of nursing homes have found evidence of inefficiency, but virtually all have had measurement problems that raise questions about the results. This analysis, which focuses on a process with much simpler measurement issues, finds evidence of inefficiency that is largely consistent with earlier studies. Making nursing homes more efficient merits closer attention as a strategy for improving care. Increasing efficiency by adopting well-designed, reliable processes can simultaneously reduce costs and improve quality.
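
    For readers unfamiliar with COLS, the following numpy sketch on invented data (the variables and coefficients are not the study's) shows the mechanics: fit ordinary least squares to a log-cost function, shift the frontier so that it passes through the least-cost observation, and score each facility as exp of minus its distance above the frontier.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 107
      log_output = rng.normal(4.0, 0.5, n)          # e.g. log care plans produced
      ineff = rng.exponential(0.4, n)               # true inefficiency (unobserved)
      log_cost = 1.0 + 0.9 * log_output + ineff + rng.normal(0, 0.05, n)

      X = np.column_stack([np.ones(n), log_output])
      beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)  # ordinary least squares
      resid = log_cost - X @ beta
      u = resid - resid.min()     # shift: best observation sits on the frontier
      scores = np.exp(-u)         # COLS efficiency scores in (0, 1]

      print(f"mean COLS efficiency: {scores.mean():.2f}")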

  13. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems, and it has altered the temporal and spatial distribution of watershed hydrological processes, especially in large rivers. Simulations based on physically based distributed hydrological models can give better results than lumped models. However, simulating watershed hydrological processes involves a very large amount of computation, especially for large rivers, and therefore requires computing resources that may not be steadily available to researchers or may come at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize the computation in the space and time dimensions, processing the natural features of a distributed hydrological model by grid cell (unit or sub-basin) in order from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of a distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on units of computing power. The method is highly adaptable and extensible: it makes full use of the available computing and storage resources under resource constraints, and its computing efficiency improves linearly as computing resources increase. It can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.

  14. Query-Based Outlier Detection in Heterogeneous Information Networks.

    PubMed

    Kuck, Jonathan; Zhuang, Honglei; Yan, Xifeng; Cam, Hasan; Han, Jiawei

    2015-03-01

    Outlier or anomaly detection in large data sets is a fundamental task in data science, with broad applications. However, in real data sets with high-dimensional space, most outliers are hidden in certain dimensional combinations and are relative to a user's search space and interest. It is often more effective to give power to users and allow them to specify outlier queries flexibly, and the system will then process such mining queries efficiently. In this study, we introduce the concept of query-based outlier in heterogeneous information networks, design a query language to facilitate users to specify such queries flexibly, define a good outlier measure in heterogeneous networks, and study how to process outlier queries efficiently in large data sets. Our experiments on real data sets show that following such a methodology, interesting outliers can be defined and uncovered flexibly and effectively in large heterogeneous networks.

  15. Query-Based Outlier Detection in Heterogeneous Information Networks

    PubMed Central

    Kuck, Jonathan; Zhuang, Honglei; Yan, Xifeng; Cam, Hasan; Han, Jiawei

    2015-01-01

    Outlier or anomaly detection in large data sets is a fundamental task in data science, with broad applications. However, in real data sets with high-dimensional space, most outliers are hidden in certain dimensional combinations and are relative to a user’s search space and interest. It is often more effective to give power to users and allow them to specify outlier queries flexibly, and the system will then process such mining queries efficiently. In this study, we introduce the concept of query-based outlier in heterogeneous information networks, design a query language to facilitate users to specify such queries flexibly, define a good outlier measure in heterogeneous networks, and study how to process outlier queries efficiently in large data sets. Our experiments on real data sets show that following such a methodology, interesting outliers can be defined and uncovered flexibly and effectively in large heterogeneous networks. PMID:27064397

  16. Electro-spray deposition of a mesoporous TiO2 charge collection layer: toward large scale and continuous production of high efficiency perovskite solar cells.

    PubMed

    Kim, Min-cheol; Kim, Byeong Jo; Yoon, Jungjin; Lee, Jin-wook; Suh, Dongchul; Park, Nam-gyu; Choi, Mansoo; Jung, Hyun Suk

    2015-12-28

    The spin-coating method, which is widely used for thin-film device fabrication, is incapable of large-area deposition and cannot be performed continuously. In perovskite hybrid solar cells using CH₃NH₃PbI₃ (MAPbI₃), large-area deposition is essential for their potential use in mass production. As a step toward replacing the spin-coating processes used to fabricate perovskite solar cells, a mesoporous TiO₂ electron-collection layer is fabricated here using an electro-spray deposition (ESD) system. Moreover, impedance spectroscopy and transient photocurrent and photovoltage measurements reveal that the electro-sprayed mesoscopic TiO₂ film facilitates charge collection from the perovskite. The series resistance of the perovskite solar cell is also reduced owing to the highly porous nature of, and the low density of point defects in, the film. An optimized power conversion efficiency of 15.11% is achieved under an illumination of 1 sun; this efficiency is higher than that (13.67%) of the perovskite solar cell with conventional spin-coated TiO₂ films. Furthermore, the large-area coating capability of the ESD process is verified through the coating of uniform 10 × 10 cm² TiO₂ films. This study clearly shows that ESD constitutes a viable alternative for the fabrication of high-throughput, large-area perovskite solar cells.

  17. A study of the efficiency of hydrogen liquefaction. [jet aircraft applications]

    NASA Technical Reports Server (NTRS)

    Baker, C. R.; Shaner, R. L.

    1976-01-01

    The search for an environmentally acceptable fuel to eventually replace petroleum-based fuels for long-range jet aircraft has singled out liquid hydrogen as an outstanding candidate. Hydrogen liquefaction is discussed, along with the effect of several operating parameters on process efficiency. A feasible large-scale commercial hydrogen liquefaction facility based on the results of the efficiency study is described. Potential future improvements in hydrogen liquefaction are noted.

  18. Really big data: Processing and analysis of large datasets

    USDA-ARS?s Scientific Manuscript database

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidly…

  19. The data array, a tool to interface the user to a large data base

    NASA Technical Reports Server (NTRS)

    Foster, G. H.

    1974-01-01

    Aspects of the processing of spacecraft data are considered. Use of the data array in a large address space as an intermediate form in data processing for a large scientific data base is advocated. Techniques for efficient indexing in data arrays are reviewed, and the data array method for mapping an arbitrary structure onto a linear address space is shown. A compromise between the two forms is given. The impact of the data array on the user interface is considered along with implementation.
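
    The heart of such a mapping is the familiar row-major address computation, illustrated below in a short Python function (the array shape is arbitrary).

      def linear_offset(index, shape):
          """Map a multi-dimensional index onto a linear address space (row-major)."""
          offset = 0
          for i, n in zip(index, shape):
              assert 0 <= i < n, "index out of bounds"
              offset = offset * n + i
          return offset

      # A 3-D "data array" of shape 4 x 5 x 6 stored linearly:
      print(linear_offset((2, 3, 4), (4, 5, 6)))   # 2*5*6 + 3*6 + 4 = 82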

  20. Nonepitaxial Thin-Film InP for Scalable and Efficient Photocathodes.

    PubMed

    Hettick, Mark; Zheng, Maxwell; Lin, Yongjing; Sutter-Fella, Carolin M; Ager, Joel W; Javey, Ali

    2015-06-18

    To date, some of the highest performance photocathodes of a photoelectrochemical (PEC) cell have been shown with single-crystalline p-type InP wafers, exhibiting half-cell solar-to-hydrogen conversion efficiencies of over 14%. However, the high cost of single-crystalline InP wafers may present a challenge for future large-scale industrial deployment. Analogous to solar cells, a thin-film approach could address the cost challenges by utilizing the benefits of the InP material while decreasing the use of expensive materials and processes. Here, we demonstrate this approach, using the newly developed thin-film vapor-liquid-solid (TF-VLS) nonepitaxial growth method combined with an atomic-layer deposition protection process to create thin-film InP photocathodes with large grain size and high performance, in the first reported solar device configuration generated by materials grown with this technique. Current-voltage measurements show a photocurrent (29.4 mA/cm²) and onset potential (630 mV) approaching single-crystalline wafers and an overall power conversion efficiency of 11.6%, making TF-VLS InP a promising photocathode for scalable and efficient solar hydrogen generation.

  1. Integrating theory, synthesis, spectroscopy and device efficiency to design and characterize donor materials for organic photovoltaics: a case study including 12 donors

    DOE PAGES

    Oosterhout, S. D.; Kopidakis, N.; Owczarczyk, Z. R.; ...

    2015-04-07

    Remarkable improvements in the power conversion efficiency of solution-processable organic photovoltaics (OPV) have largely been driven by the development of novel narrow-bandgap copolymer donors comprising an electron-donating (D) and an electron-withdrawing (A) group within the repeat unit. The large pool of potential D and A units and the laborious processes of chemical synthesis and device optimization have made progress on new high-efficiency materials slow, with only a few new efficient copolymers reported every year despite the large number of groups pursuing these materials. In this paper we present an integrated approach toward new narrow-bandgap copolymers that uses theory to guide the selection of materials to be synthesized, based on their predicted energy levels, and time-resolved microwave conductivity (TRMC) to select the best-performing copolymer–fullerene bulk heterojunction to be incorporated into complete OPV devices. We validate our methodology by using a diverse group of 12 copolymers, including new and literature materials, to demonstrate good correlation between (a) theoretically determined energy levels of polymers and experimentally determined ionization energies and electron affinities and (b) photoconductance, measured by TRMC, and OPV device performance. The materials used here also allow us to explore whether further copolymer design rules need to be incorporated into our methodology for materials selection. For example, we explore the effect of the enthalpy change (ΔH) during exciton dissociation on the efficiency of free charge carrier generation and device efficiency and find that a ΔH of -0.4 eV is sufficient for efficient charge generation.

  2. A preferential design approach for energy-efficient and robust implantable neural signal processing hardware.

    PubMed

    Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup

    2009-01-01

    For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.

  3. Scalable fabrication of efficient organolead trihalide perovskite solar cells with doctor-bladed active layers

    DOE PAGES

    Deng, Yehao; Peng, Edwin; Shao, Yuchuan; ...

    2015-03-25

    Organolead trihalide perovskites (OTPs) are naturally abundant materials with prospects as future low-cost renewable energy sources, boosted by the solution-process capability of these materials. Here we report the fabrication of efficient OTP devices by a simple, high-throughput and low-cost doctor-blade coating process, which is compatible with roll-to-roll fabrication for the large-scale production of perovskite solar cell panels. The formulation of appropriate precursor inks by removing impurities is shown to be critical to the formation of continuous, pin-hole-free and phase-pure perovskite films on large-area substrates, which is assisted by a high deposition temperature to guide the nucleation and grain growth process. The domain size reached 80–250 μm in 1.5–2 μm thick bladed films. By controlling the stoichiometry and thickness of the OTP films, highest device efficiencies of 12.8% and 15.1% are achieved in devices fabricated on poly(3,4-ethylenedioxythiophene) polystyrene sulfonate and cross-linked N4,N4'-bis(4-(6-((3-ethyloxetan-3-yl)methoxy)hexyl)phenyl)–N4,N4'-diphenylbiphenyl-4,4'-diamine covered ITO substrates. Furthermore, the carrier diffusion length in doctor-bladed OTP films is beyond 3.5 μm, significantly larger than in spin-coated films, due to the formation of crystalline grains with very large size by the doctor-blade coating method.

  4. The wave-based substructuring approach for the efficient description of interface dynamics in substructuring

    NASA Astrophysics Data System (ADS)

    Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.

    2010-04-01

    In the vehicle design process, design decisions are more and more based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, to reduce time-to-market and to launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and allow parallel computing environments to be exploited. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface DOF. Since large interfaces are common in vehicles (e.g. the continuous line connections between the body and the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. There is therefore a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which reduces the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus enables efficient structural modification predictions of the global modes, so that efficient vibro-acoustic design modification, optimization and robust design become possible. The results show that wave-based substructuring offers a clear benefit for vehicle design modifications, by improving both the speed of component reduction processes and the efficiency and accuracy of design iteration predictions, compared to conventional substructuring approaches.
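
    In symbols, with notation assumed here for illustration rather than taken from the paper, the reduction replaces the interface DOFs by a small number of wave amplitudes:

      u_\Gamma \approx W\,q, \qquad W \in \mathbb{R}^{n_\Gamma \times m}, \qquad m \ll n_\Gamma,

      T = \begin{bmatrix} I & 0 \\ 0 & W \end{bmatrix}, \qquad
      \tilde{K} = T^{\mathsf{T}} K T, \qquad \tilde{M} = T^{\mathsf{T}} M T,

    so that a component analysis requires one enrichment vector per wave rather than one per interface DOF, which is the source of the speed-up.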

  5. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    NASA Astrophysics Data System (ADS)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
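
    For orientation, a minimal 1-D FDTD (Yee) leapfrog loop in Python is shown below; it illustrates the update structure that the DiamondTorre algorithm reorganizes for GPUs, not the LRnLA traversal itself. Units are normalized (Courant number 0.5) and the soft Gaussian source is an assumption.

      import numpy as np

      nx, nt = 400, 1000
      S = 0.5                          # Courant number c*dt/dx
      E = np.zeros(nx)                 # electric field at integer grid points
      H = np.zeros(nx - 1)             # magnetic field staggered half a cell

      for n in range(nt):
          H += S * np.diff(E)                        # update H from the curl of E
          E[1:-1] += S * np.diff(H)                  # update E from the curl of H
          E[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

      print("peak |E| =", np.abs(E).max())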

  6. On plate graphite supported sample processing for simultaneous lipid and protein identification by matrix assisted laser desorption ionization mass spectrometry.

    PubMed

    Calvano, Cosima Damiana; van der Werf, Inez Dorothé; Sabbatini, Luigia; Palmisano, Francesco

    2015-05-01

    The simultaneous identification of lipids and proteins by matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) after direct on-plate processing of micro-samples supported on colloidal graphite is demonstrated. Taking advantage of its large surface area and thermal conductivity, graphite provided an ideal substrate for on-plate proteolysis and lipid extraction. Indeed, proteins could be efficiently digested on-plate within 15 min, providing sequence coverages comparable to those obtained by conventional in-solution overnight digestion. Interestingly, detection of hydrophilic phosphorylated peptides could be easily achieved without any further enrichment step. Furthermore, lipids could be simultaneously extracted and identified without any additional treatment or processing step, as demonstrated for model complex samples such as milk and egg. The present approach is simple, efficient and broadly applicable, and offers great promise for protein and lipid identification in very small samples.

  7. GPU applications for data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. New approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us raise the scanning speed by a factor of nine.

  8. Process and apparatus for separating fine particles by microbubble flotation together with a process and apparatus for generation of microbubbles

    DOEpatents

    Yoon, R.H.; Adel, G.T.; Luttrell, G.H.

    1991-01-01

    A method and apparatus are disclosed for the microbubble flotation separation of very fine particles, especially coal, so as to efficiently produce high purity and large recovery. This is accomplished through the use of a high-aspect-ratio flotation column, microbubbles, and a countercurrent flow of wash water to gently wash the froth. Also disclosed are unique processes and apparatus for generating microbubbles for flotation in a highly efficient and inexpensive manner using either a porous tube or an in-line static generator. 23 figures.

  9. Efficient frequent pattern mining algorithm based on node sets in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.

    2017-11-01

    The ultimate goal of data mining is to uncover hidden information, useful for decision making, from the large databases collected by an organization. Data mining involves many tasks, and mining frequent itemsets is one of the most important for transactional databases. These databases hold data at very large scale, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is said to be efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, this thesis proposes a system which mines frequent itemsets in a way that is optimized in terms of memory and time, using cloud computing to parallelize the process and providing the application as a service. The complete framework uses a proven, efficient algorithm called the FIN algorithm, which works on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment on a real data set of traffic accidents. The results show that the memory consumption and execution time taken for the process in the proposed system are much lower than those of the standalone system.
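
    The Nodeset and POC-tree machinery of FIN is too involved for a short example, so the Python sketch below shows the task itself with a naive support-counting pass (Apriori-style, not FIN): itemsets are kept when they occur in at least min_support transactions. The toy transactions are invented.

      from itertools import combinations

      transactions = [
          {"bread", "milk"},
          {"bread", "diapers", "beer", "eggs"},
          {"milk", "diapers", "beer", "cola"},
          {"bread", "milk", "diapers", "beer"},
          {"bread", "milk", "diapers", "cola"},
      ]
      min_support = 3   # frequent = occurs in at least 3 transactions

      items = sorted(set().union(*transactions))
      frequent = {}
      for size in (1, 2, 3):
          for cand in combinations(items, size):
              support = sum(set(cand) <= t for t in transactions)
              if support >= min_support:
                  frequent[cand] = support

      print(frequent)   # e.g. ('beer', 'diapers') has support 3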

  10. An algebraic equation solution process formulated in anticipation of banded linear equations.

    DOT National Transportation Integrated Search

    1971-01-01

    A general method for the solution of large, sparsely banded, positive-definite coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...

  11. Efficient collective influence maximization in cascading processes with first-order transitions

    PubMed Central

    Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.

    2017-01-01

    In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988

  12. Efficient collective influence maximization in cascading processes with first-order transitions

    NASA Astrophysics Data System (ADS)

    Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.

    2017-03-01

    In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches.
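
    A toy illustration of the general threshold dynamics both records describe: a node activates once enough of its neighbours are active, and a small seed set can trigger a global cascade. The subcritical-path ranking that is the papers' actual contribution is not reproduced here; the graph, thresholds, and seed set are made up.

```python
def threshold_cascade(adj, thresholds, seeds):
    """Deterministic threshold cascade on an undirected graph.

    adj: node -> set of neighbours; thresholds: node -> number of active
    neighbours required for activation. Toy sketch of the model class only.
    """
    active = set(seeds)
    frontier = set(seeds)
    while frontier:
        candidates = {v for u in frontier for v in adj[u]} - active
        frontier = {v for v in candidates
                    if len(adj[v] & active) >= thresholds[v]}
        active |= frontier
    return active

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
thresholds = {1: 1, 2: 2, 3: 1, 4: 2}
print(sorted(threshold_cascade(adj, thresholds, seeds={1})))  # [1, 2, 3, 4]
```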

  13. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept for data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, which improves the efficiency of image reading and application and solves the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and numbers of concurrent users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.

  14. Development of high efficiency thin film polycrystalline silicon solar cells using VEST process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishihara, T.; Arimoto, S.; Morikawa, H.

    1998-12-31

    A thin-film Si solar cell has been developed using the Via-hole Etching for the Separation of Thin films (VEST) process. The process is based on SOI technology of zone-melting recrystallization (ZMR) followed by chemical vapor deposition (CVD), separation of the thin film, and screen printing. Key points for achieving high efficiency are (1) quality of the Si films, (2) back surface emitter (BSE), (3) front surface emitter etch-back process, (4) back surface field (BSF) layer thickness and its resistivity, and (5) defect passivation by hydrogen implantation. As a result of these experiments, the authors have achieved 16% efficiency (V_oc: 0.589 V, J_sc: 35.6 mA/cm², F.F.: 0.763) with a cell area of 95.8 cm² and a thickness of 77 μm. It is the highest efficiency ever reported for large-area thin-film Si solar cells.

  15. A method for predicting optimized processing parameters for surfacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupont, J.N.; Marder, A.R.

    1994-12-31

    Welding is used extensively for surfacing applications. To operate a surfacing process efficiently, the variables must be optimized to produce low levels of dilution with the substrate while maintaining high deposition rates. An equation for dilution in terms of the welding variables, thermal efficiency factors, and thermophysical properties of the overlay and substrate was developed by balancing energy and mass terms across the welding arc. To test the validity of the resultant dilution equation, the PAW, GTAW, GMAW, and SAW processes were used to deposit austenitic stainless steel onto carbon steel over a wide range of parameters. Arc efficiency measurements were conducted using a Seebeck arc welding calorimeter. Melting efficiency was determined based on knowledge of the arc efficiency. Dilution was determined for each set of processing parameters using a quantitative image analysis system. The pertinent equations indicate dilution is a function of arc power (corrected for arc efficiency), filler metal feed rate, melting efficiency, and thermophysical properties of the overlay and substrate. With the aid of the dilution equation, the effect of processing parameters on dilution is presented in a new processing diagram. A new method is proposed for determining dilution from welding variables. Dilution is shown to depend on the arc power, filler metal feed rate, arc and melting efficiency, and the thermophysical properties of the overlay and substrate. Calculated dilution levels were compared with measured values over a large range of processing parameters and good agreement was obtained. The results have been applied to generate a processing diagram which can be used to: (1) predict the maximum deposition rate for a given arc power while maintaining adequate fusion with the substrate, and (2) predict the resultant level of dilution with the substrate.
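
    The abstract describes the dilution equation but does not reproduce it. A plausible reconstruction from the quantities named, written as an energy balance across the arc with assumed symbols (a sketch consistent with the description, not necessarily the authors' exact formulation):

\[
\eta_a \eta_m P = V_f E_f + V_s E_s
\quad\Longrightarrow\quad
V_s = \frac{\eta_a \eta_m P - V_f E_f}{E_s},
\qquad
D = \frac{V_s}{V_s + V_f},
\]

    where \(P\) is arc power, \(\eta_a\) the arc efficiency, \(\eta_m\) the melting efficiency, \(V_f\) the volumetric filler-metal feed rate, \(V_s\) the resulting volumetric substrate melting rate, and \(E_f, E_s\) the volumetric enthalpies required to melt filler and substrate.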

  16. MilxXplore: a web-based system to explore large imaging datasets.

    PubMed

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open source visualization platform which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis.

  17. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both defining an expressive feature set and extracting topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
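
    A minimal sketch of the multi-scale neighborhood idea: for each point, collect neighbors within several radii and derive covariance-eigenvalue features (linearity, planarity, scattering). The radii and feature set here are illustrative choices, not the authors' configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

def multiscale_eigen_features(points, radii=(0.25, 0.5, 1.0)):
    """Per-point covariance eigenvalue features at several neighborhood scales."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3 * len(radii)))
    for j, r in enumerate(radii):
        for i, p in enumerate(points):
            nbrs = points[tree.query_ball_point(p, r)]
            if len(nbrs) < 3:
                continue  # too few neighbors at this scale
            l1, l2, l3 = np.maximum(
                np.linalg.eigvalsh(np.cov(nbrs.T))[::-1], 1e-12)
            # Linearity, planarity, scattering (one triple per scale).
            feats[i, 3 * j:3 * j + 3] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats

pts = np.random.default_rng(0).random((1000, 3))
print(multiscale_eigen_features(pts).shape)  # (1000, 9)
```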

  18. Improving the Efficiency and Effectiveness of Grading through the Use of Computer-Assisted Grading Rubrics

    ERIC Educational Resources Information Center

    Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A.

    2008-01-01

    This study tests the use of computer-assisted grading rubrics compared to other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed on a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…

  19. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Agarwal, Khushbu

    In this paper, we propose a work-stealing runtime, the Library for Work Stealing (LibWS), using the MPI one-sided model for designing scalable FP-Growth, the de facto frequent pattern mining algorithm, on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  20. Memoized Online Variational Inference for Dirichlet Process Mixture Models

    DTIC Science & Technology

    2014-06-27

    ...breaking process [7], which places artificially large mass on the final component. It is more efficient and broadly applicable than an alternative truncation...

  1. Hydrodynamic cavitation as a strategy to enhance the efficiency of lignocellulosic biomass pretreatment.

    PubMed

    Terán Hilares, Ruly; Ramos, Lucas; da Silva, Silvio Silvério; Dragone, Giuliano; Mussatto, Solange I; Santos, Júlio César Dos

    2018-06-01

    Hydrodynamic cavitation (HC) is a process technology with potential for application in different areas including environmental applications, food processing, and biofuels production. Although HC is an undesirable phenomenon for hydraulic equipment, the net energy released during this process is enough to accelerate certain chemical reactions. The application of cavitation energy to enhance the efficiency of lignocellulosic biomass pretreatment is an interesting strategy proposed for integration in biorefineries for the production of bio-based products. Moreover, the use of an HC-assisted process has been demonstrated to be an attractive alternative when compared to other conventional pretreatment technologies. This is due not only to its high pretreatment efficiency, resulting in high enzymatic digestibility of the carbohydrate fraction, but also to its high energy efficiency, simple system configuration and construction, and the possibility of use at large scale. This paper gives an overview of HC technology and its potential for application to the pretreatment of lignocellulosic biomass. The parameters affecting this process and the perspectives for future developments in this area are also presented and discussed.

  2. How to harvest efficient laser from solar light

    NASA Astrophysics Data System (ADS)

    Zhao, Changming; Guan, Zhe; Zhang, Haiyang

    2018-02-01

    Solar-pumped solid-state lasers (SPSSL) are solid-state lasers that transform sunlight directly into laser light, with the advantages of the fewest energy conversion steps, higher energy conversion efficiency, simpler structure, higher reliability, and longer lifetime. This makes them suitable for use in unmanned space systems, where sunlight is the only available form of energy. In order to increase the output power and improve the efficiency of SPSSL, we conducted intensive studies on suitable laser material selection for solar pumping, a high-efficiency/large-aperture focusing optical system, optimization of a concave cavity as the second-stage focusing system, laser material bonding, and surface processing. Using a bonded and grooved Nd:YAG rod as the laser material, a large-aperture Fresnel lens as the first-stage focusing element, and a concave cavity as the second-stage focusing element, we obtained a collection efficiency of 32.1 W/m², the highest collection efficiency reported to date.

  3. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
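
    For contrast with the paper's path-encoding approach, the classic recursive kinship computation (which the compact encodings are designed to outperform on large pedigrees) fits in a few lines. The pedigree below is hypothetical; an individual's inbreeding coefficient is the kinship of its parents.

```python
from functools import lru_cache

# Hypothetical pedigree: child -> (father, mother); founders have (None, None).
PED = {"A": (None, None), "B": (None, None),
       "C": ("A", "B"), "D": ("A", "B"), "E": ("C", "D")}
ORDER = {name: i for i, name in enumerate(PED)}  # parents listed before children

@lru_cache(maxsize=None)
def kinship(x, y):
    """Classic recursive kinship coefficient phi(x, y)."""
    if x is None or y is None:
        return 0.0
    if x == y:
        f, m = PED[x]
        return 0.5 * (1.0 + kinship(f, m))
    if ORDER[x] < ORDER[y]:   # always recurse on the younger individual
        x, y = y, x
    f, m = PED[x]
    return 0.5 * (kinship(f, y) + kinship(m, y))

# Inbreeding coefficient of E is the kinship of its parents C and D.
print(kinship("C", "D"))  # 0.25 for full siblings
```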

  4. Novel Binders and Methods for Agglomeration of Ore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. K. Kawatra; T. C. Eisele; K. A. Lewandowski

    2006-12-31

    Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. Primary examples of this are copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process, and advanced ironmaking processes, where binders must function satisfactorily over an extraordinarily large range of temperatures (from room temperature up to over 1200 C). As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching and advanced primary ironmaking. This project has identified several acid-resistant binders and agglomeration procedures that can be used for improving the energy efficiency of heap leaching, by preventing the "ponding" and "channeling" effects that currently cause reduced recovery and extended leaching cycle times. Methods have also been developed for iron ore processing which are intended to improve the performance of pellet binders, and have directly saved energy by increasing filtration rates of the pelletization feed by as much as 23%.

  5. A Process Management System for Networked Manufacturing

    NASA Astrophysics Data System (ADS)

    Liu, Tingting; Wang, Huifen; Liu, Linyan

    With the development of computers, communication and networks, networked manufacturing has become one of the main manufacturing paradigms in the 21st century. In a networked manufacturing environment, there exist a large number of cooperative tasks that are susceptible to alteration, conflicts over resources, and problems of cost and quality, which increases the complexity of administration. Process management is a technology used to design, enact, control, and analyze networked manufacturing processes. It supports efficient execution, effective management, conflict resolution, cost containment and quality control. In this paper we propose an integrated process management system for networked manufacturing. Requirements of process management are analyzed, the architecture of the system is presented, and a process model considering process cost and quality is developed. Finally, a case study is provided to explain how the system runs efficiently.

  6. A General-purpose Framework for Parallel Processing of Large-scale LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Z.; Hodgson, M.; Li, W.

    2016-12-01

    Light detection and ranging (LiDAR) technologies have proven efficient for quickly obtaining very detailed Earth surface data over a large spatial extent. Such data are important for scientific discoveries in the Earth and ecological sciences and for natural disaster and environmental applications. However, handling LiDAR data poses grand geoprocessing challenges due to data intensity and computational intensity. Previous studies achieved notable success in parallel processing of LiDAR data to address these challenges. However, these studies either relied on high performance computers and specialized hardware (GPUs) or focused mostly on finding customized solutions for some specific algorithms. We developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle big LiDAR data. Specifically, 1) a tile-based spatial index is proposed to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, 2) two spatial decomposition techniques are developed to enable efficient parallelization of different types of LiDAR processing tasks, and 3) by coupling existing LiDAR processing tools with Hadoop, this framework is able to conduct a variety of LiDAR data processing tasks in parallel in a highly scalable distributed computing environment. The performance and scalability of the framework are evaluated with a series of experiments conducted on a real LiDAR dataset using a proof-of-concept prototype system. The results show that the proposed framework 1) is able to handle massive LiDAR data more efficiently than standalone tools; and 2) provides almost linear scalability in terms of either increased workload (data volume) or increased computing nodes with both spatial decomposition strategies. We believe that the proposed framework provides a valuable reference for developing a collaborative cyberinfrastructure for processing big earth science data in a highly scalable environment.

  7. Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.

    PubMed

    Zhu, Zhiwei; To, Suet; Zhang, Shaojian

    2015-08-10

    Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLAs). However, it is still a challenge to process MLAs at large scale due to certain inherent limitations of this technique. In the present study, a novel ultra-precision diamond cutting method, the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLAs. After a detailed discussion of its characteristic advantages for processing MLAs, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, an extremely small volume of control points for the toolpath, and optimal usage of the system dynamics of the machine tool during cutting are simultaneously achieved.

  8. Producing Hydrogen With Sunlight

    NASA Technical Reports Server (NTRS)

    Biddle, J. R.; Peterson, D. B.; Fujita, T.

    1987-01-01

    Costs high but reduced by further research. Producing hydrogen fuel on large scale from water by solar energy practical if plant costs reduced, according to study. Sunlight attractive energy source because it is free and because photon energy converts directly to chemical energy when it breaks water molecules into diatomic hydrogen and oxygen. Conversion process low in efficiency and photochemical reactor must be spread over large area, requiring large investment in plant. Economic analysis pertains to generic photochemical processes. Does not delve into details of photochemical reactor design because detailed reactor designs do not exist at this early stage of development.

  9. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, ultra-high-speed reading from any direction, small printing size and efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
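
    A minimal sketch of Sauvola-style adaptive binarization, the kind of method the abstract builds on. The threshold is T = m * (1 + k * (s/R - 1)) with local mean m and local standard deviation s; the window size, k and R below are conventional defaults, not the authors' tuned values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Sauvola adaptive thresholding: T = m * (1 + k * (s / R - 1))."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, window)                       # local mean m
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))    # local std s
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (img > threshold).astype(np.uint8) * 255

# Synthetic unevenly lit image: a single global threshold would fail here.
img = (np.random.default_rng(0).random((64, 64)) * 50
       + np.linspace(0, 200, 64)).clip(0, 255)
print(sauvola_binarize(img).shape, sauvola_binarize(img).dtype)
```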

  10. Combined Brayton-JT cycles with refrigerants for natural gas liquefaction

    NASA Astrophysics Data System (ADS)

    Chang, Ho-Myung; Park, Jae Hoon; Lee, Sanggyu; Choe, Kun Hyung

    2012-06-01

    Thermodynamic cycles for natural gas liquefaction with single-component refrigerants are investigated under a governmental project in Korea, aiming at new processes that meet the requirements of high efficiency, large capacity, and simple equipment. Based upon the optimization theory recently published by the present authors, it is proposed to replace the methane-JT cycle in the conventional cascade process with a nitrogen-Brayton cycle. A variety of systems combining nitrogen-Brayton, ethane-JT and propane-JT cycles are simulated with Aspen HYSYS and quantitatively compared in terms of thermodynamic efficiency, flow rate of refrigerants, and estimated size of heat exchangers. A specific Brayton-JT cycle is suggested with detailed thermodynamic data for further process development. The suggested cycle is expected to be more efficient and simpler than the existing cascade process, while still taking advantage of easy and robust operation with single-component refrigerants.

  11. An effective and efficient assessment process

    Treesearch

    Russell T. Graham; Theresa B. Jain

    1999-01-01

    Depending on the agency, discipline, or audience, assessments supply data and information to address relevant policy questions and to help make decisions. If properly executed, assessment processes can draw conclusions and make recommendations on how to manage natural resources. Assessments, especially large ones, can be easily influenced by internal and external...

  12. Development of a large area space solar cell assembly

    NASA Technical Reports Server (NTRS)

    Spitzer, M. B.

    1982-01-01

    The development of a large-area high-efficiency solar cell assembly is described. The assembly consists of an ion-implanted silicon solar cell and a glass cover. The important attributes of fabrication are the use of a back surface field which is compatible with a back surface reflector, and the integration of coverglass application and cell fabrication. Cell development experiments concerned optimization of ion implantation processing of 2 ohm-cm boron-doped silicon. Process parameters were selected based on these experiments and cells with an area of 34.3 sq cm were fabricated. The average AM0 efficiency of the twenty-five best cells was 13.9% and the best cell had an efficiency of 14.4%. An important innovation in cell encapsulation was also developed. In this technique, the coverglass is applied before the cell is sawed to final size. The coverglass and cell are then sawed as a unit. In this way, the cost of the coverglass is reduced, since the tolerance on glass size is relaxed, and costly coverglass/cell alignment procedures are eliminated. Adhesives investigated were EVA, FEP-Teflon sheet and DC 93-500. Details of processing and results are reported.

  13. Enhanced out-coupling efficiency of organic light-emitting diodes using a nanostructure imprinted by an alumina nanohole array

    NASA Astrophysics Data System (ADS)

    Endo, Kuniaki; Adachi, Chihaya

    2014-03-01

    We demonstrate organic light-emitting diodes (OLEDs) with enhanced out-coupling efficiency containing nanostructures imprinted by an alumina nanohole array template that can be applied to large-emitting-area and flexible devices using a roll-to-roll process. The nanostructures are imprinted on a glass substrate by an ultraviolet nanoimprint process using an alumina nanohole array mold and then an OLED is fabricated on the nanostructures. The enhancement of out-coupling efficiency is proportional to the root-mean-square roughness of the nanostructures, and a maximum improvement of external electroluminescence quantum efficiency of 17% is achieved. The electroluminescence spectra of the OLEDs indicate that this improvement is caused by enhancement of the out-coupling of surface plasmon polaritons.

  14. PLATSIM: An efficient linear simulation and analysis package for large-order flexible systems

    NASA Technical Reports Server (NTRS)

    Maghami, Periman; Kenny, Sean P.; Giesy, Daniel P.

    1995-01-01

    PLATSIM is a software package designed to provide efficient time and frequency domain analysis of large-order generic space platforms implemented with any linear time-invariant control system. Time domain analysis provides simulations of the overall spacecraft response levels due to either onboard or external disturbances. The time domain results can then be processed by the jitter analysis module to assess the spacecraft's pointing performance in a computationally efficient manner. The resulting jitter analysis algorithms have produced an increase in speed of several orders of magnitude over the brute force approach of sweeping minima and maxima. Frequency domain analysis produces frequency response functions for uncontrolled and controlled platform configurations. The latter represents an enabling technology for large-order flexible systems. PLATSIM uses a sparse matrix formulation for the spacecraft dynamics model which makes both the time and frequency domain operations quite efficient, particularly when a large number of modes are required to capture the true dynamics of the spacecraft. The package is written in MATLAB script language. A graphical user interface (GUI) is included in the PLATSIM software package. This GUI uses MATLAB's Handle graphics to provide a convenient way for setting simulation and analysis parameters.

  15. Los Alamos Discovers Super Efficient Solar Using Perovskite Crystals

    ScienceCinema

    Mohite, Aditya; Nie, Wanyi

    2018-05-11

    State-of-the-art photovoltaics using high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high temperature crystal-growth processes offer promising routes for developing low-cost, solar-based clean global energy solutions for the future. Solar cells composed of the recently discovered material organic-inorganic perovskites offer the efficiency of silicon, yet suffer from a variety of deficiencies limiting the commercial viability of perovskite photovoltaic technology. In research to appear in Science, Los Alamos National Laboratory researchers reveal a new solution-based hot-casting technique that eliminates these limitations, one that allows for the growth of high-quality, large-area, millimeter-scale perovskite crystals and demonstrates that highly efficient and reproducible solar cells with reduced trap assisted recombination can be realized.

  16. SAR correlation technique - An algorithm for processing data with large range walk

    NASA Technical Reports Server (NTRS)

    Jin, M.; Wu, C.

    1983-01-01

    This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with an extremely large range migration effect which cannot be accommodated by the existing frequency domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm performs azimuth correlation in two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency domain range migration correction approach for azimuth compression. The secondary range compression can be performed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.

  17. NOVEL BINDERS AND METHODS FOR AGGLOMERATION OF ORE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.K. Kawatra; T.C. Eisele; J.A. Gurtler

    2004-04-01

    Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. Primary examples of this are copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process, and advanced ironmaking processes, where binders must function satisfactorily over an extraordinarily large range of temperatures (from room temperature up to over 1200 C). As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching and advanced primary ironmaking.

  18. Retail Buildings: Assessing and Reducing Plug and Process Loads in Retail Buildings (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-04-01

    Plug and process loads (PPLs) in commercial buildings account for almost 5% of U.S. primary energy consumption. Minimizing these loads is a primary challenge in the design and operation of an energy-efficient building. PPLs are not related to general lighting, heating, ventilation, cooling, and water heating, and typically do not provide comfort to the occupants. They use an increasingly large fraction of the building energy use pie because the number and variety of electrical devices have increased along with building system efficiency. Reducing PPLs is difficult because energy efficiency opportunities and the equipment needed to address PPL energy use in retail spaces are poorly understood.

  19. A framework for the direct evaluation of large deviations in non-Markovian processes

    NASA Astrophysics Data System (ADS)

    Cavallaro, Massimo; Harris, Rosemary J.

    2016-11-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated with time-extensive observables. This extends the ‘cloning’ procedure of Giardinà et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.
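
    For orientation, here is a minimal Markovian version of the cloning (population-dynamics) estimator for the scaled cumulant generating function of an additive observable; the paper's contribution is extending this idea to dynamics with memory. The chain, observable, and parameters below are made up.

```python
import numpy as np

def cloning_scgf(P, obs, s, n_clones=500, t_max=200, seed=0):
    """Estimate the SCGF theta(s) of an additive observable by cloning.

    P: row-stochastic transition matrix; obs[i, j]: increment of the
    observable on the jump i -> j; s: biasing (tilting) parameter.
    """
    rng = np.random.default_rng(seed)
    n = len(P)
    states = rng.integers(0, n, size=n_clones)
    log_growth = 0.0
    for _ in range(t_max):
        new_states = np.array([rng.choice(n, p=P[x]) for x in states])
        weights = np.exp(s * obs[states, new_states])
        log_growth += np.log(weights.mean())
        # Cloning step: resample the population proportionally to weights.
        idx = rng.choice(n_clones, size=n_clones, p=weights / weights.sum())
        states = new_states[idx]
    return log_growth / t_max

# Two-state chain; the observable counts 0 -> 1 jumps.
P = np.array([[0.7, 0.3], [0.4, 0.6]])
obs = np.array([[0.0, 1.0], [0.0, 0.0]])
print(cloning_scgf(P, obs, s=0.5))
```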

  20. Semantics-based distributed I/O with the ParaMEDIC framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balaji, P.; Feng, W.; Lin, H.

    2008-01-01

    Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are distributed over a wide-area network. Thus, we present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing' which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in data that needs to be transferred in distributed environments.

  1. An Exploratory Study of the Effects of Online Course Efficiency Perceptions on Student Evaluation of Teaching (SET) Measures

    ERIC Educational Resources Information Center

    Estelami, Hooman

    2016-01-01

    One of the fundamental drivers of the growing use of distance learning methods in modern business education has been the efficiency gains associated with this method of educational delivery. Distance methods benefit both students and educational institutions as they facilitate the processing of large volumes of learning material to overcome…

  2. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.

    PubMed

    McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.

  3. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
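
    Both records describe the same scheme. The en-bloc application of K accumulated rank-1 updates that replaces K Sherman-Morrison steps is, in matrix terms, one Woodbury identity evaluated with matrix-matrix products. A numpy sketch (which omits the acceptance/probability bookkeeping of the actual QMC algorithm):

```python
import numpy as np

def apply_delayed_updates(Ainv, U, V):
    """Apply K rank-1 updates at once: (A + U V^T)^{-1} via Woodbury.

    Ainv: current inverse (n x n); U, V: n x K matrices whose columns
    hold the K delayed rank-1 updates. One matrix-matrix evaluation
    replaces K sequential Sherman-Morrison (rank-1) updates.
    """
    K = U.shape[1]
    AinvU = Ainv @ U                             # n x K
    S = np.eye(K) + V.T @ AinvU                  # K x K capacitance matrix
    return Ainv - AinvU @ np.linalg.solve(S, V.T @ Ainv)

rng = np.random.default_rng(1)
n, K = 200, 16
A = rng.standard_normal((n, n)) + n * np.eye(n)  # keep A well conditioned
U = rng.standard_normal((n, K))
V = rng.standard_normal((n, K))
fast = apply_delayed_updates(np.linalg.inv(A), U, V)
print(np.allclose(fast, np.linalg.inv(A + U @ V.T)))  # True
```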

  4. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725

  5. Facile fabrication of large-grain CH3NH3PbI3−xBrx films for high-efficiency solar cells via CH3NH3Br-selective Ostwald ripening

    PubMed Central

    Yang, Mengjin; Zhang, Taiyang; Schulz, Philip; Li, Zhen; Li, Ge; Kim, Dong Hoe; Guo, Nanjie; Berry, Joseph J.; Zhu, Kai; Zhao, Yixin

    2016-01-01

    Organometallic halide perovskite solar cells (PSCs) have shown great promise as a low-cost, high-efficiency photovoltaic technology. Structural and electro-optical properties of the perovskite absorber layer are most critical to device operation characteristics. Here we present a facile fabrication of high-efficiency PSCs based on compact, large-grain, pinhole-free CH3NH3PbI3−xBrx (MAPbI3−xBrx) thin films with high reproducibility. A simple methylammonium bromide (MABr) treatment via spin-coating with a proper MABr concentration converts MAPbI3 thin films with different initial film qualities (for example, grain size and pinholes) to high-quality MAPbI3−xBrx thin films following an Ostwald ripening process, which is strongly affected by the MABr concentration and is ineffective when MABr is replaced with methylammonium iodide. A higher MABr concentration enhances the I–Br anion exchange reaction, yielding poorer device performance. This MABr-selective Ostwald ripening process not only improves cell efficiency but also enhances device stability, and thus represents a simple, promising strategy for further improving PSC performance with higher reproducibility and reliability. PMID:27477212

  6. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  7. Microbial desulfurization of coal

    NASA Technical Reports Server (NTRS)

    Dastoor, M. N.; Kalvinskas, J. J.

    1978-01-01

    Experiments indicate that several sulfur-oxidizing bacteria strains have been very efficient in desulfurizing coal. Process occurs at room temperature and does not require large capital investments or high energy inputs. Process may expand use of abundant reserves of high-sulfur bituminous coal, which is currently restricted due to environmental pollution. On practical scale, process may be integrated with modern coal-slurry transportation lines.

  8. Automated array assembly, phase 2

    NASA Technical Reports Server (NTRS)

    Daiello, R. V.

    1979-01-01

    A manufacturing process suitable for the large-scale production of silicon solar array modules at a cost of less than $500/peak kW is described. Factors which control the efficiency of ion implanted silicon solar cells, screen-printed thick film metallization, spray-on antireflection coating process, and panel assembly are discussed. Conclusions regarding technological readiness or cost effectiveness of individual process steps are presented.

  9. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored upper-triangular-diagonal (UD) factorized covariance arrays and vector-stored upper-triangular square-root information arrays is presented. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and only a one-dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive page faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.
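
    The effect being computed can be sketched densely: permuting the columns of a square-root information array R and retriangularizing leaves the information matrix R^T R unchanged up to the reordering. The sketch below uses a dense Householder QR for the retriangularization, where the paper's method achieves the same result more cheaply with cyclic permutations and Givens rotations on vector-stored arrays.

```python
import numpy as np
from scipy.linalg import qr

def permute_sqrt_info(R, perm):
    """Reorder the states of a square-root information array R.

    Columns are permuted, then the array is retriangularized by QR.
    Dense stand-in for the paper's Givens-based in-place procedure.
    """
    _, R_new = qr(R[:, perm], mode='economic')
    return R_new

rng = np.random.default_rng(0)
R = np.triu(rng.standard_normal((5, 5)))
perm = [2, 0, 1, 4, 3]
R_new = permute_sqrt_info(R, perm)
# The information matrix R^T R is preserved under the reordering:
P = np.eye(5)[:, perm]
print(np.allclose(R_new.T @ R_new, P.T @ (R.T @ R) @ P))  # True
```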

  10. A review of the promises and challenges of micro-concentrator photovoltaics

    NASA Astrophysics Data System (ADS)

    Domínguez, César; Jost, Norman; Askins, Steve; Victoria, Marta; Antón, Ignacio

    2017-09-01

    Micro-concentrator photovoltaics (micro-CPV) is an unconventional approach to developing high-efficiency, low-cost PV systems. Miniaturizing the cells and optics increases efficiency with respect to classical CPV, at the expense of some fundamental challenges in mass production. The large costs linked to miniaturization under conventional serial-assembly processes raise the need for the development of parallel manufacturing technologies. In return, the tiny sizes involved allow exploring unconventional optical architectures, or revisiting conventional concepts that were typically discarded because of large material consumption or high bulk absorption at classical CPV sizes.

  11. A new framework to increase the efficiency of large-scale solar power plants.

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Kleissl, Jan P.

    2015-11-01

    A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian Process regression (Kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. This framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements are observed in different scenarios.

  12. RabbitQR: fast and flexible big data processing at LSST data rates using existing, shared-use hardware

    NASA Astrophysics Data System (ADS)

    Kotulla, Ralf; Gopu, Arvind; Hayashi, Soichi

    2016-08-01

    Processing astronomical data to science readiness was and remains a challenge, in particular in the case of multi-detector instruments such as wide-field imagers. One such instrument, the WIYN One Degree Imager, is available to the astronomical community at large and, in order to be scientifically useful to its varied user community on a short timescale, provides its users fully calibrated data in addition to the underlying raw data. However, time-efficient re-processing of the often large datasets with improved calibration data and/or software requires more than just a large number of CPU cores and disk space. This is particularly relevant if all computing resources are general purpose and shared with a large number of users in a typical university setup. Our approach to address this challenge is a flexible framework combining the best of both high performance (large number of nodes, internal communication) and high throughput (flexible/variable number of nodes, no dedicated hardware) computing. Based on the Advanced Message Queuing Protocol, we developed a Server-Manager-Worker framework. In addition to the server directing the work flow and the workers executing the actual work, the manager maintains a list of available workers, adds and/or removes individual workers from the worker pool, and re-assigns workers to different tasks. This provides the flexibility of optimizing the worker pool for the current task and workload, improves load balancing, and makes the most efficient use of the available resources. We present performance benchmarks and scaling tests, showing that, today and using existing, commodity shared-use hardware, we can process data with data throughputs (including data reduction and calibration) approaching those expected in the early 2020s for future observatories such as the Large Synoptic Survey Telescope.
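
    The worker side of such a Server-Manager-Worker setup can be sketched with any AMQP client. The snippet below uses the Python pika library against a RabbitMQ broker; the broker address, queue name, and work() body are assumptions for illustration, not details from the paper.

```python
import pika  # AMQP 0-9-1 client; assumes a RabbitMQ broker on localhost

def work(payload: bytes) -> None:
    """Placeholder for the actual reduction/calibration step."""
    print("processing", payload[:40])

def main():
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="reduce.tasks", durable=True)  # hypothetical queue
    ch.basic_qos(prefetch_count=1)  # hand each idle worker one task at a time

    def on_message(channel, method, properties, body):
        work(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)  # ack on success

    ch.basic_consume(queue="reduce.tasks", on_message_callback=on_message)
    ch.start_consuming()  # workers can join or leave the pool at any time

if __name__ == "__main__":
    main()
```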

  13. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, W. R.; Moser, D. E.; Suggs, R. M.; Cooke, W. J.

    2011-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, eta, is a unique function of velocity with an extremely large variation in the laboratory range of under 6 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early-time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from both laboratory impacts and lunar impact observations. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors. Keywords: hypervelocity impact, impact flash, luminous efficiency, lunar impact, meteoroid

  14. Increase of efficiency of finishing-cleaning and hardening processing of details based on rotor-screw technological systems

    NASA Astrophysics Data System (ADS)

    Lebedev, V. A.; Serga, G. V.; Khandozhko, A. V.

    2018-03-01

    The article proposes technical solutions for increasing the efficiency of finishing-cleaning and hardening processing of parts on the basis of rotor-screw technological systems. The essence, design features and technological capabilities of the rotor-screw technological system with a rotating container are disclosed; this system expands the range of resulting displacement vectors of the abrasive-medium granules and the processed parts. Ways of intensifying the processing by vibration activation, providing a combined effect of large- and small-amplitude low-frequency oscillations on the loaded mass, are proposed. The results of experimental studies of the movement of bulk materials in a screw container are presented, which showed that Kv = 0.5-0.6 can be considered the optimal value of the container filling factor. An estimate of the application efficiency of screw containers, based on their design features, is given.

  15. Uncertainty quantification of seabed parameters for large data volumes along survey tracks with a tempered particle filter

    NASA Astrophysics Data System (ADS)

    Dettmer, J.; Quijano, J. E.; Dosso, S. E.; Holland, C. W.; Mandolesi, E.

    2016-12-01

    Geophysical seabed properties are important for the detection and classification of unexploded ordnance. However, current surveying methods such as vertical seismic profiling, coring, or inversion are of limited use when surveying large areas with high spatial sampling density. We consider surveys based on a source and receiver array towed by an autonomous vehicle, which produce large volumes of seabed reflectivity data that contain unprecedented and detailed seabed information. The data are analyzed with a particle filter, which requires efficient reflection-coefficient computation, efficient inversion algorithms and efficient use of computer resources. The filter quantifies the information content of multiple sequential data sets by considering results from previous data along the survey track to inform the importance sampling at the current point. Challenges arise from environmental changes along the track, where the number of sediment layers and their properties change. This is addressed by a trans-dimensional model in the filter, which allows layering complexity to change along a track. Efficiency is improved by likelihood tempering of various particle subsets and including exchange moves (parallel tempering). The filter is implemented on a hybrid computer that combines central processing units (CPUs) and graphics processing units (GPUs) to exploit three levels of parallelism: (1) fine-grained parallel computation of spherical reflection coefficients with a GPU implementation of Levin integration; (2) updating particles by concurrent CPU processes which exchange information using automatic load balancing (coarse-grained parallelism); (3) overlapping CPU-GPU communication (a major bottleneck) with GPU computation by staggering CPU access to the multiple GPUs. The algorithm is applied to spherical reflection coefficients for data sets along a 14-km track on the Malta Plateau, Mediterranean Sea. We demonstrate substantial efficiency gains over previous methods. [This research was supported in part by the U.S. Dept. of Defense, through the Strategic Environmental Research and Development Program (SERDP).]
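
    The sequential importance-resampling core of such a filter is compact; everything that makes the paper's filter powerful (trans-dimensional seabed models, likelihood tempering, exchange moves, GPU reflection-coefficient kernels) is layered on top of it. A minimal bootstrap particle filter for a made-up 1-D random-walk state:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=1000, q=0.1, r=0.5, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state.

    q: process-noise std; r: measurement-noise std. Sketch of the SIR
    core only; no tempering, exchange moves, or model selection.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)
    means = []
    for y in observations:
        x = x + q * rng.standard_normal(n_particles)      # propagate
        logw = -0.5 * ((y - x) / r) ** 2                  # Gaussian likelihood
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(np.sum(w * x))                       # posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(means)

rng = np.random.default_rng(1)
truth = np.cumsum(0.1 * rng.standard_normal(50))
obs = truth + 0.5 * rng.standard_normal(50)
print(np.abs(bootstrap_pf(obs) - truth).mean())  # tracks the hidden state
```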

  16. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    NASA Astrophysics Data System (ADS)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, a factorial design of experiments with missing points). In such cases the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. We also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
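
    The core of such approaches is that a product kernel on a Cartesian grid yields a Kronecker-structured Gram matrix, so the usual cubic-cost solve factorizes into per-axis eigendecompositions. Below is a minimal sketch of this standard Kronecker identity (not the authors' exact algorithm, which also handles missing points and regularization); grid sizes and kernel length scales are illustrative.

        import numpy as np

        def rbf(x, y, ls):
            """Squared-exponential kernel on 1D inputs."""
            d = x[:, None] - y[None, :]
            return np.exp(-0.5 * (d / ls) ** 2)

        # Factor grids of a Cartesian product design (n1 * n2 points total).
        x1 = np.linspace(0, 1, 40)
        x2 = np.linspace(0, 1, 50)
        X1, X2 = np.meshgrid(x1, x2, indexing="ij")
        Y = np.sin(3 * X1) * np.cos(2 * X2)      # observations on the grid
        noise = 1e-4

        # Per-factor Gram matrices; the full K = kron(K1, K2) is never formed.
        K1, K2 = rbf(x1, x1, 0.2), rbf(x2, x2, 0.2)
        l1, Q1 = np.linalg.eigh(K1)
        l2, Q2 = np.linalg.eigh(K2)

        # (K + noise*I)^-1 y via the Kronecker eigen-identity:
        # kron(K1, K2) = (Q1 x Q2) diag(kron(l1, l2)) (Q1 x Q2)^T
        Yt = Q1.T @ Y @ Q2
        Alpha = Q1 @ (Yt / (np.outer(l1, l2) + noise)) @ Q2.T

        # Posterior mean at a new point (xs1, xs2) with the product kernel.
        xs1, xs2 = 0.3, 0.7
        k1 = rbf(np.array([xs1]), x1, 0.2)       # shape (1, n1)
        k2 = rbf(np.array([xs2]), x2, 0.2)       # shape (1, n2)
        mean = (k1 @ Alpha @ k2.T).item()
        print("posterior mean:", mean, "true:", np.sin(3*xs1)*np.cos(2*xs2))

    The cost is dominated by the two per-axis eigendecompositions, roughly O(n1^3 + n2^3) instead of O((n1*n2)^3) for the naive solve.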

  17. Los Alamos National Security, LLC Request for Information on how industry may partner with the Laboratory on KIVA software.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcdonald, Kathleen Herrera

    2016-02-29

    KIVA is a family of Fortran-based computational fluid dynamics software developed by LANL. The software predicts complex fuel and air flows as well as ignition, combustion, and pollutant-formation processes in engines. The KIVA models have been used to understand combustion chemistry processes, such as auto-ignition of fuels, and to optimize diesel engines for high efficiency and low emissions. Fuel economy is heavily dependent upon engine efficiency, which in turn depends to a large degree on how fuel is burned within the cylinders of the engine. Higher in-cylinder pressures and temperatures lead to increased fuel economy, but they also create more difficulty in controlling the combustion process. Poorly controlled and incomplete combustion can cause higher levels of emissions and lower engine efficiencies.

  18. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    ERIC Educational Resources Information Center

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  19. Inkjet-Printed Small-Molecule Organic Light-Emitting Diodes: Halogen-Free Inks, Printing Optimization, and Large-Area Patterning.

    PubMed

    Zhou, Lu; Yang, Lei; Yu, Mengjie; Jiang, Yi; Liu, Cheng-Fang; Lai, Wen-Yong; Huang, Wei

    2017-11-22

    Manufacturing small-molecule organic light-emitting diodes (OLEDs) via inkjet printing is attractive for realizing high-efficiency and long-lifespan devices, yet it is challenging. In this paper, we present a systematic investigation and optimization of the ink properties and the printing process to enable facile inkjet printing of conjugated light-emitting small molecules. Various factors influencing the inkjet-printed film quality during droplet generation, ink spreading on the substrate, and solidification have been systematically investigated and optimized. Consequently, halogen-free inks have been developed, and large-area patterned inkjet printing on flexible substrates with efficient blue emission has been successfully demonstrated. Moreover, OLEDs manufactured by inkjet printing the light-emitting small molecules showed superior performance compared with their spin-cast counterparts.

  20. Graphene oxide hole transport layers for large area, high efficiency organic solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Chris T. G.; Rhodes, Rhys W.; Beliatis, Michail J.

    2014-08-18

    Graphene oxide (GO) is becoming increasingly popular for organic electronic applications. We present large active area (0.64 cm²), solution-processable poly[[9-(1-octylnonyl)-9H-carbazole-2,7-diyl]-2,5-thiophenediyl-2,1,3-benzothiadiazole-4,7-diyl-2,5-thiophenediyl]:[6,6]-phenyl C₇₁ butyric acid methyl ester (PCDTBT:PC₇₀BM) organic photovoltaic (OPV) solar cells, incorporating GO hole transport layers (HTL). The power conversion efficiency (PCE) of ∼5% is the highest reported for OPV using this architecture. A comparative study of solution-processable devices has been undertaken to benchmark GO OPV performance against poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate) (PEDOT:PSS) HTL devices, confirming the viability of GO devices, with comparable PCEs, suitable as high chemical and thermal stability replacements for PEDOT:PSS in OPV.

  1. Facile and Scalable Fabrication of Highly Efficient Lead Iodide Perovskite Thin-Film Solar Cells in Air Using Gas Pump Method.

    PubMed

    Ding, Bin; Gao, Lili; Liang, Lusheng; Chu, Qianqian; Song, Xiaoxuan; Li, Yan; Yang, Guanjun; Fan, Bin; Wang, Mingkui; Li, Chengxin; Li, Changjiu

    2016-08-10

    Control of the perovskite film formation process to produce high-quality organic-inorganic metal halide perovskite thin films with uniform morphology, high surface coverage, and minimal pinholes is of great importance for highly efficient solar cells. Herein, we report on large-area light-absorbing perovskite film fabrication with a new, facile and scalable gas pump method. By decreasing the total pressure in the evaporation environment, the gas pump method can increase the solvent evaporation rate by a factor of eight and thereby produce extremely dense, uniform, full-coverage perovskite thin films. The resulting planar perovskite solar cells achieve an impressive power conversion efficiency of up to 19.00%, with an average efficiency of 17.38 ± 0.70% for 32 devices with an area of 5 × 2 mm, and 13.91% for devices with a large area of up to 1.13 cm². The perovskite films can be easily fabricated in air at a relative humidity of 45-55%, which makes the method a promising prospect for industrial production of large-area perovskite solar panels.

  2. MilxXplore: a web-based system to explore large imaging datasets

    PubMed Central

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    Objective As large-scale medical imaging studies become more common, there is increasing reliance on automated software to extract quantitative information from these images. As cohort sizes keep increasing with larger studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods MilxXplore is an open source visualization platform which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Discussion Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. Conclusions MilxXplore is fast and flexible, allows remote quality checks of processed imaging data, facilitates data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing results of imaging analysis. PMID:23775173

  3. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research are to formulate and validate efficient parallel algorithms, and to design and implement efficient computer software, for solving large-scale acoustic problems arising from the unified framework of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple-processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective interprocessor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures is evaluated by solving a series of structural and acoustic (symmetrical and unsymmetrical) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included whenever possible.

  4. Experimental Methods for Investigation of Shape Memory Based Elastocaloric Cooling Processes and Model Validation

    PubMed Central

    Schmidt, Marvin; Ullrich, Johannes; Wieczorek, André; Frenzel, Jan; Eggeler, Gunther; Schütze, Andreas; Seelecke, Stefan

    2016-01-01

    Shape Memory Alloys (SMAs) using elastocaloric cooling processes have the potential to be an environmentally friendly alternative to the conventional vapor-compression-based cooling process. Nickel-titanium (Ni-Ti) based alloy systems in particular show large elastocaloric effects. Furthermore, they exhibit large latent heats, a necessary material property for the development of an efficient solid-state-based cooling process. A scientific test rig has been designed to investigate these processes and the elastocaloric effects in SMAs. The realized test rig enables independent control of an SMA's mechanical loading and unloading cycles, as well as conductive heat transfer between SMA cooling elements and a heat source/sink. The test rig is equipped with a comprehensive monitoring system capable of synchronized measurements of mechanical and thermal parameters. In addition to determining the process-dependent mechanical work, the system also enables measurement of the thermal caloric aspects of the elastocaloric cooling effect through use of a high-performance infrared camera. This combination is of particular interest because it allows visualization of localization and rate effects, both important for efficient heat transfer from the medium to be cooled. The work presented describes an experimental method to identify elastocaloric material properties in different materials and sample geometries. Furthermore, the test rig is used to investigate different cooling process variations. The introduced analysis methods enable a differentiated consideration of the influences of material, process and related boundary conditions on process efficiency. The comparison of the experimental data with simulation results from a thermomechanically coupled finite element model allows for a better understanding of the underlying physics of the elastocaloric effect. In addition, the experimental results, as well as the findings based on the simulation results, are used to improve the material properties. PMID:27168093

  5. Experimental Methods for Investigation of Shape Memory Based Elastocaloric Cooling Processes and Model Validation.

    PubMed

    Schmidt, Marvin; Ullrich, Johannes; Wieczorek, André; Frenzel, Jan; Eggeler, Gunther; Schütze, Andreas; Seelecke, Stefan

    2016-05-02

    Shape Memory Alloys (SMAs) using elastocaloric cooling processes have the potential to be an environmentally friendly alternative to the conventional vapor-compression-based cooling process. Nickel-titanium (Ni-Ti) based alloy systems in particular show large elastocaloric effects. Furthermore, they exhibit large latent heats, a necessary material property for the development of an efficient solid-state-based cooling process. A scientific test rig has been designed to investigate these processes and the elastocaloric effects in SMAs. The realized test rig enables independent control of an SMA's mechanical loading and unloading cycles, as well as conductive heat transfer between SMA cooling elements and a heat source/sink. The test rig is equipped with a comprehensive monitoring system capable of synchronized measurements of mechanical and thermal parameters. In addition to determining the process-dependent mechanical work, the system also enables measurement of the thermal caloric aspects of the elastocaloric cooling effect through use of a high-performance infrared camera. This combination is of particular interest because it allows visualization of localization and rate effects, both important for efficient heat transfer from the medium to be cooled. The work presented describes an experimental method to identify elastocaloric material properties in different materials and sample geometries. Furthermore, the test rig is used to investigate different cooling process variations. The introduced analysis methods enable a differentiated consideration of the influences of material, process and related boundary conditions on process efficiency. The comparison of the experimental data with simulation results from a thermomechanically coupled finite element model allows for a better understanding of the underlying physics of the elastocaloric effect. In addition, the experimental results, as well as the findings based on the simulation results, are used to improve the material properties.

  6. Transformational electronics: a powerful way to revolutionize our information world

    NASA Astrophysics Data System (ADS)

    Rojas, Jhonathan P.; Torres Sevilla, Galo A.; Ghoneim, Mohamed T.; Hussain, Aftab M.; Ahmed, Sally M.; Nassar, Joanna M.; Bahabry, Rabab R.; Nour, Maha; Kutbee, Arwa T.; Byas, Ernesto; Al-Saif, Bidoor; Alamri, Amal M.; Hussain, Muhammad M.

    2014-06-01

    With the emergence of cloud computation, we are facing the rising waves of big data. It is time to leverage this opportunity by increasing data usage both by man and machine. We need ultra-mobile computation with high data-processing speed, ultra-large memory, energy efficiency and multi-functionality. Additionally, we have to deploy energy-efficient multi-functional 3D ICs for robust cyber-physical system establishment. To achieve such lofty goals we have to mimic the human brain, which is inarguably the world's most powerful and energy-efficient computer. The brain's cortex has a folded architecture that increases surface area in an ultra-compact space to contain its neurons and synapses. Therefore, it is imperative to overcome two integration challenges: (i) finding a low-cost 3D IC fabrication process and (ii) creating foldable substrates with ultra-large-scale integration of high-performance, energy-efficient electronics. Hence, we show a low-cost generic batch process based on trench-protect-peel-recycle to fabricate rigid and flexible 3D ICs as well as high-performance flexible electronics. As of today we have made every single component needed for a fully flexible computer, including non-planar state-of-the-art FinFETs. Additionally, we have demonstrated various solid-state memories, movable MEMS devices, and energy harvesting and storage components. To show the versatility of our process, we have extended it to other inorganic semiconductor substrates such as silicon germanium and III-V materials. Finally, we report the first fully flexible programmable silicon-based microprocessor, toward foldable brain computation, and a wirelessly programmable stretchable and flexible thermal patch for pain management in smart bionics.

  7. C 3, A Command-line Catalog Cross-match Tool for Large Astrophysical Catalogs

    NASA Astrophysics Data System (ADS)

    Riccio, Giuseppe; Brescia, Massimo; Cavuoti, Stefano; Mercurio, Amata; di Giorgio, Anna Maria; Molinari, Sergio

    2017-02-01

    Modern astrophysics is based on multi-wavelength data organized into large and heterogeneous catalogs. Hence, efficient, reliable and scalable catalog cross-matching methods play a crucial role in the era of the petabyte scale. Furthermore, multi-band data often have very different angular resolutions, requiring the highest generality of cross-matching features, mainly in terms of region shape and resolution. In this work we present C 3 (Command-line Catalog Cross-match), a multi-platform application designed to efficiently cross-match massive catalogs. It is based on a multi-core parallel processing paradigm and conceived to be executed as a stand-alone command-line process or integrated within any generic data reduction/analysis pipeline, providing maximum flexibility to the end user in terms of portability, parameter configuration, catalog formats, angular resolution, region shapes, coordinate units and cross-matching types. Using real data extracted from public surveys, we discuss the cross-matching capabilities and computing-time efficiency, also through a direct comparison with some publicly available tools chosen among the most used within the community and representative of different interface paradigms. We verified that the C 3 tool has excellent capabilities for performing efficient and reliable cross-matching between large data sets. Although elliptical cross-matching and the parametric handling of angular orientation and offset are known concepts in the astrophysical context, their availability in the presented command-line tool makes C 3 competitive in the context of public astronomical tools.
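
    As a point of reference for what a basic cross-match looks like, the sketch below matches two mock catalogs with astropy's nearest-neighbour search; tools like C 3 scale this up and generalize it to elliptical regions, mixed resolutions and multi-core execution. The catalog arrays here are random mock data.

        import numpy as np
        import astropy.units as u
        from astropy.coordinates import SkyCoord

        rng = np.random.default_rng(1)
        cat1 = SkyCoord(ra=rng.uniform(0, 1, 1000) * u.deg,
                        dec=rng.uniform(0, 1, 1000) * u.deg)
        cat2 = SkyCoord(ra=rng.uniform(0, 1, 800) * u.deg,
                        dec=rng.uniform(0, 1, 800) * u.deg)

        # For each source in cat1, find its nearest neighbour in cat2.
        idx, d2d, _ = cat1.match_to_catalog_sky(cat2)

        # Keep only pairs closer than a matching radius (e.g. 2 arcsec).
        matched = d2d < 2 * u.arcsec
        print(f"{matched.sum()} matches within 2 arcsec")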

  8. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information in recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data are simulated by solving the wave equation over the entire globe using a spectral-element method. In order to ensure inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and a very large volume of data must be both read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on intelligent ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.
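
    As a hedged illustration of the window-classification idea, the sketch below trains a support vector machine on simple per-window summary features (cross-correlation, signal-to-noise ratio, time shift); the features, labeling rule and data are mock stand-ins, not the project's actual processing chain.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 2000
        # Per-window features (all mock values).
        X = np.column_stack([
            rng.uniform(-1, 1, n),    # cross-correlation with synthetics
            rng.uniform(0, 20, n),    # signal-to-noise ratio
            rng.uniform(-10, 10, n),  # time shift in seconds
        ])
        # Mock labeling rule standing in for analyst-curated labels.
        y = ((X[:, 0] > 0.7) & (X[:, 1] > 3) & (np.abs(X[:, 2]) < 5)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))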

  9. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in producing 3D, high-resolution images from real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multiple streams to perform GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared-memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
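
    The asynchronous double-buffering idea can be illustrated independently of CUDA: while the current block is being processed, the next block is already being fetched on a separate thread. The sketch below is a CPU-side analogue with placeholder load and migration kernels; it is not the authors' GPU code.

        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        def load_chunk(i):
            """Stand-in for reading one seismic-data block from disk."""
            rng = np.random.default_rng(i)
            return rng.standard_normal((1024, 1024))

        def migrate_chunk(block):
            """Stand-in for the migration kernel applied to one block."""
            return float(np.abs(np.fft.fft2(block)).sum())

        n_chunks = 8
        results = []
        with ThreadPoolExecutor(max_workers=1) as io:
            future = io.submit(load_chunk, 0)          # prefetch first chunk
            for i in range(n_chunks):
                block = future.result()                # wait for current chunk
                if i + 1 < n_chunks:
                    future = io.submit(load_chunk, i + 1)  # overlap next load
                results.append(migrate_chunk(block))   # compute while loading
        print(len(results), "chunks processed")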

  10. Theoretical study of cut area of reduction of large surfaces of rotation parts on machines with rotary cutters “Extra”

    NASA Astrophysics Data System (ADS)

    Bondarenko, J. A.; Fedorenko, M. A.; Pogonin, A. A.

    2018-03-01

    Large parts can be machined without disassembling the machines by using "Extra" rotary cutters, which involves technological and design challenges that differ from those of processing the same components on a stationary machine. Extension machines are used to restore large parts to a condition that allows their use in a production environment. Achieving the desired accuracy and surface-roughness parameters requires recovering the surface after rotary grinding, which greatly increases the complexity. In order to improve production efficiency and the productivity of the process, qualitative rotary processing of the machined surface is applied. The rotary cutting process involves a continuous change of the cutting-edge surfaces. The kinematic parameters of rotary cutting define its main features and patterns, as well as the cutting action of the rotary cutting tool.

  11. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions for providing base data for large-scale topographic base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme for accelerating the provision of large-scale topographic base maps by the Geospatial Information Agency in Indonesia. As a progressively advancing technology, Geographic Information Systems (GIS) open possibilities for automatic processing and analysis of geospatial data. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.

  12. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    NASA Astrophysics Data System (ADS)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers and have very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the candidate position that is most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency when measuring large-area structured surfaces.
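
    A minimal sketch of the described sampling loop is given below using scikit-learn's Gaussian process regression: at each step the candidate most likely to lie outside a tolerance zone around the current prediction is measured and added to the model. The surface function, tolerance and kernel settings are illustrative assumptions, not the paper's values.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def surface(x):
            """Stand-in for the measured nano-structure profile."""
            return np.sin(8 * x) + 0.3 * np.sin(25 * x)

        tol = 0.05                                  # half-width of tolerance zone
        candidates = np.linspace(0, 1, 400)[:, None]
        X = np.array([[0.0], [0.5], [1.0]])         # initial seed measurements
        y = surface(X.ravel())

        for _ in range(30):
            gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(X, y)
            mu, sd = gp.predict(candidates, return_std=True)
            # Probability that the true surface deviates from the GP mean
            # by more than the tolerance: 2 * (1 - Phi(tol / sd)).
            p_out = 2 * (1 - norm.cdf(tol / np.maximum(sd, 1e-12)))
            nxt = candidates[np.argmax(p_out)]
            X = np.vstack([X, nxt])
            y = np.append(y, surface(nxt[0]))

        print("sampled", len(X), "points adaptively")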

  13. In-Process Metrology And Control Of Large Optical Grinders

    NASA Astrophysics Data System (ADS)

    Anderson, D. S.; Ketelsen, D.; Kittrell, W. Cary; Kuhn, Wm; Parks, R. E.; Stahl, P.

    1987-01-01

    The advent of rapid figure generation at the University of Arizona has prompted the development of rapid metrology techniques. The success and efficiency of the generating process are highly dependent on timely and accurate measurements to update the feedback loop between machine and optician. We describe the advantages and problems associated with the in-process metrology and control systems used at the Optical Sciences Center.

  14. Event-driven processing for hardware-efficient neural spike sorting

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means of hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. Considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
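
    The level-crossing idea itself is simple to state in code: an event is emitted only when the signal crosses the next quantization level, so a quiet channel produces almost no data. The following is a minimal software sketch with an illustrative toy spike, not the paper's hardware implementation.

        import numpy as np

        def level_crossing_encode(signal, fs, delta):
            """Encode a uniformly sampled signal into (time, +/-1) events."""
            events = []
            ref = signal[0]
            for n, v in enumerate(signal):
                while v >= ref + delta:               # upward crossing(s)
                    ref += delta
                    events.append((n / fs, +1))
                while v <= ref - delta:               # downward crossing(s)
                    ref -= delta
                    events.append((n / fs, -1))
            return events

        fs = 24000.0
        t = np.arange(0, 0.02, 1 / fs)
        # Toy "spike" riding on a quiet baseline.
        x = np.exp(-((t - 0.01) / 0.0005) ** 2) * np.sin(2 * np.pi * 3000 * t)
        events = level_crossing_encode(x, fs, delta=0.05)
        print(f"{len(events)} events instead of {len(t)} uniform samples")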

  15. Belt-MRF for large aperture mirrors.

    PubMed

    Ren, Kai; Luo, Xiao; Zheng, Ligong; Bai, Yang; Li, Longxiang; Hu, Haixiang; Zhang, Xuejun

    2014-08-11

    With high determinacy and no subsurface damage, Magnetorheological Finishing (MRF) has become an important tool in fabricating high-precision optics. For large mirrors, however, the application of MRF is restricted by its small removal function and low material removal rate. In order to improve the material removal rate and shorten the processing cycle, we propose a new MRF concept, named Belt-MRF, to extend the application of MRF to large mirrors, and we built a prototype with a large removal function that uses a belt instead of a very large polishing wheel to extend the polishing length. A series of experimental results on silicon carbide (SiC) and BK7 specimens, together with fabrication simulations, verified that the Belt-MRF has a high material removal rate, a stable removal function and high convergence efficiency, which makes it a promising technology for processing large-aperture optical elements.

  16. Abnormal crystal growth in CH₃NH₃PbI₃₋ₓClₓ using a multi-cycle solution coating process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Qingfeng; Yuan, Yongbo; Shao, Yuchuan

    2015-06-23

    Recently, the efficiency of organolead trihalide perovskite solar cells has improved greatly because of improved material qualities with longer carrier diffusion lengths. Mixing chlorine into the precursor for mixed halide films has been reported to dramatically enhance the diffusion lengths of mixed halide perovskite films, mainly as a result of a much longer carrier recombination lifetime. Here we report that adding a Cl-containing precursor for mixed halide perovskite formation can induce abnormal grain growth behavior that yields well-oriented grains accompanied by some very large grains. The abnormal grain growth becomes prominent only after multi-cycle coating of the MAI:MACl blend precursor. The large grain size is found mainly to contribute to a longer carrier recombination lifetime, and thus increases the device efficiency to 18.9%, without significantly impacting the carrier transport properties. The strong correlation identified between material processing and morphology provides guidelines for future material optimization and device efficiency enhancement.

  17. Laser-zone growth in a Ribbon-To-Ribbon, RTR, process silicon sheet growth development for the large area silicon sheet task of the low cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Gurtler, R. W.; Baghdadi, A.

    1977-01-01

    A ribbon-to-ribbon process was used for routine growth of samples for analysis and fabrication into solar cells. One lot of solar cells was completely evaluated: ribbon solar cell efficiencies averaged 9.23%, with a highest efficiency of 11.7%. Spherical reflectors have demonstrated significant improvements in laser-silicon coupling efficiency. Material analyses were performed, including silicon photovoltage and open-circuit photovoltage diffusion-length measurements, crystal morphology studies, modulus-of-rupture measurements, and annealing/gettering studies. An initial economic analysis indicated that ribbon-to-ribbon add-on costs of $0.10/watt might be expected in the early 1980s.

  18. Relationships between Lead Halide Perovskite Thin-Film Fabrication, Morphology, and Performance in Solar Cells.

    PubMed

    Sharenko, Alexander; Toney, Michael F

    2016-01-20

    Solution-processed lead halide perovskite thin-film solar cells have achieved power conversion efficiencies comparable to those obtained with several commercial photovoltaic technologies in a remarkably short period of time. This rapid rise in device efficiency is largely the result of the development of fabrication protocols capable of producing continuous, smooth perovskite films with micrometer-sized grains. Further developments in film fabrication and morphological control are necessary, however, in order for perovskite solar cells to reliably and reproducibly approach their thermodynamic efficiency limit. This Perspective discusses the fabrication of lead halide perovskite thin films, while highlighting the processing-property-performance relationships that have emerged from the literature, and from this knowledge, suggests future research directions.

  19. Perspective: Maintaining surface-phase purity is key to efficient open air fabricated cuprous oxide solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoye, Robert L. Z.; Ievskaya, Yulia; MacManus-Driscoll, Judith L.

    2015-02-01

    Electrochemically deposited Cu₂O solar cells are receiving growing attention owing to a recent doubling in efficiency. This was enabled by the controlled chemical environment used in depositing doped ZnO layers by atomic layer deposition, which is not well suited to large-scale industrial production. While open-air fabrication with atmospheric pressure spatial atomic layer deposition overcomes this limitation, we find that this approach is limited by an inability to remove the detrimental CuO layer that forms on the Cu₂O surface. Herein, we propose strategies for achieving efficiencies in atmospherically processed cells that are equivalent to the high values achieved in vacuum processed cells.

  20. Efficiency of the neighbor-joining method in reconstructing deep and shallow evolutionary relationships in large phylogenies.

    PubMed

    Kumar, S; Gadagkar, S R

    2000-12-01

    The neighbor-joining (NJ) method is widely used in reconstructing large phylogenies because of its computational speed and the high accuracy in phylogenetic inference as revealed in computer simulation studies. However, most computer simulation studies have quantified the overall performance of the NJ method in terms of the percentage of branches inferred correctly or the percentage of replications in which the correct tree is recovered. We have examined other aspects of its performance, such as the relative efficiency in correctly reconstructing shallow (close to the external branches of the tree) and deep branches in large phylogenies; the contribution of zero-length branches to topological errors in the inferred trees; and the influence of increasing the tree size (number of sequences), evolutionary rate, and sequence length on the efficiency of the NJ method. Results show that the correct reconstruction of deep branches is no more difficult than that of shallower branches. The presence of zero-length branches in realized trees contributes significantly to the overall error observed in the NJ tree, especially in large phylogenies or slowly evolving genes. Furthermore, the tree size does not influence the efficiency of NJ in reconstructing shallow and deep branches in our simulation study, in which the evolutionary process is assumed to be homogeneous in all lineages.
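
    For readers unfamiliar with the method, the sketch below is a compact textbook implementation of neighbor joining (repeatedly join the pair minimizing Q_ij = (n-2)*d_ij - r_i - r_j, then recompute distances); it is illustrative only and far from the optimized implementations used for large phylogenies.

        import numpy as np

        def neighbor_joining(D, names):
            """Return a nested-tuple tree with branch lengths."""
            D = [list(map(float, row)) for row in D]
            nodes = list(names)
            while len(nodes) > 2:
                n = len(nodes)
                r = [sum(row) for row in D]
                # Find the pair minimizing the NJ criterion Q_ij.
                best, bi, bj = None, 0, 1
                for i in range(n):
                    for j in range(i + 1, n):
                        q = (n - 2) * D[i][j] - r[i] - r[j]
                        if best is None or q < best:
                            best, bi, bj = q, i, j
                # Branch lengths from the joined pair to the new node.
                li = 0.5 * D[bi][bj] + (r[bi] - r[bj]) / (2 * (n - 2))
                lj = D[bi][bj] - li
                # Distances from the new node to all remaining nodes.
                du = [0.5 * (D[bi][k] + D[bj][k] - D[bi][bj])
                      for k in range(n) if k not in (bi, bj)]
                new_node = ((nodes[bi], li), (nodes[bj], lj))
                keep = [k for k in range(n) if k not in (bi, bj)]
                D = [[D[a][b] for b in keep] + [du[ai]]
                     for ai, a in enumerate(keep)]
                D.append(du + [0.0])
                nodes = [nodes[k] for k in keep] + [new_node]
            return ((nodes[0], D[0][1]), (nodes[1], 0.0))

        # Classic 4-taxon example with additive distances.
        names = ["A", "B", "C", "D"]
        D = [[0, 5, 9, 9],
             [5, 0, 10, 10],
             [9, 10, 0, 8],
             [9, 10, 8, 0]]
        print(neighbor_joining(D, names))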

  1. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    PubMed

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale, high-quality perovskite thin films are crucial for producing high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be destroyed by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, in which the thermal-convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous perovskite films free of convection-induced defects are obtained on an area of 12 cm², which enables a power conversion efficiency (PCE) of 15.5% in a solar cell with an area of 5 cm², the highest efficiency reported at this large cell area. A PCE of 15.3% is also obtained for a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the advantage of the presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices.

  2. A very efficient RCS data compression and reconstruction technique, volume 4

    NASA Technical Reports Server (NTRS)

    Tseng, N. Y.; Burnside, W. D.

    1992-01-01

    A very efficient compression and reconstruction scheme for RCS measurement data was developed. The compression is performed by isolating the scattering mechanisms on the target and recording their individual responses in the frequency and azimuth scans, respectively. The reconstruction, the inverse of the compression, is guaranteed by the sampling theorem. Two sets of data, from corner reflectors and an F-117 fighter model, were processed, and the results were shown to be convincing. The compression ratio can be as large as several hundred, depending on the target's geometry and scattering characteristics.

  3. Efficient fuzzy C-means architecture for image segmentation.

    PubMed

    Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen

    2011-01-01

    This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to avoid the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process to accelerate the computational speed. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and low misclassification rate.
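
    For reference, a plain software model of fuzzy c-means with the usual alternating updates is sketched below; the paper's contribution is precisely to merge these two updates into a single hardware process, which this sketch does not attempt to reproduce. Data and parameters are illustrative.

        import numpy as np

        def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            n = len(X)
            U = rng.dirichlet(np.ones(c), size=n)     # membership matrix (n x c)
            for _ in range(iters):
                W = U ** m
                # Centroid update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
                centroids = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
                d = np.maximum(d, 1e-12)
                # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
                U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                                 axis=2)
            return U, centroids

        # Three clearly separated blobs as a toy feature set.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(mu, 0.2, (100, 2)) for mu in (0, 2, 4)])
        U, C = fuzzy_c_means(X)
        print("centroids:\n", C)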

  4. Method and apparatus for energy efficient self-aeration in chemical, biochemical, and wastewater treatment processes

    DOEpatents

    Gao, Johnway [Richland, WA; Skeen, Rodney S [Pendleton, OR

    2002-05-28

    The present invention is a pulse spilling self-aerator (PSSA) that has the potential to greatly lower the installation, operation, and maintenance costs associated with aerating and mixing aqueous solutions. Currently, large quantities of low-pressure air are required in aeration systems to support many biochemical production processes and wastewater treatment plants. Oxygen is traditionally supplied and mixed by a compressor or blower and a mechanical agitator. These systems have high energy requirements and high installation and maintenance costs. The PSSA provides a mixing and aeration capability that can increase operational efficiency and reduce overall cost.

  5. Offset Initial Sodium Loss To Improve Coulombic Efficiency and Stability of Sodium Dual-Ion Batteries.

    PubMed

    Ma, Ruifang; Fan, Ling; Chen, Suhua; Wei, Zengxi; Yang, Yuhua; Yang, Hongguan; Qin, Yong; Lu, Bingan

    2018-05-09

    Sodium dual-ion batteries (NDIBs) have recently attracted extensive attention because of their low cost and the abundance of sodium resources. However, the low capacity of the carbonaceous anode reduces the energy density, and the formation of the solid-electrolyte interphase (SEI) on the anode during the initial cycles consumes a large amount of Na⁺ from the electrolyte, which results in low Coulombic efficiency and inferior stability of the NDIBs. To address these issues, a phosphorus-doped soft carbon (P-SC) anode combined with a presodiation process is developed to enhance the performance of the NDIBs. The phosphorus doping enhances the electric conductivity and further improves the sodium storage properties. On the other hand, an SEI can be preformed on the anode during the presodiation process, so the anode does not need to consume large amounts of Na⁺ to form the SEI during cycling of the NDIBs. Consequently, the NDIBs with the P-SC anode after the presodiation process exhibit high Coulombic efficiency (over 90%) and long cycle stability (81 mA h g⁻¹ at 1000 mA g⁻¹ after 900 cycles with capacity retention of 81.8%), far superior to the unsodiated NDIBs. This work may provide guidance for developing high-performance NDIBs in the future.

  6. Biorefinery process for protein extraction from oriental mustard (Brassica juncea (L.) Czern.) using ethanol stillage.

    PubMed

    Ratanapariyanuch, Kornsulee; Tyler, Robert T; Shim, Youn Young; Reaney, Martin Jt

    2012-01-12

    Large volumes of treated process water are required for protein extraction, and evaporation of this water contributes greatly to the energy consumed in enriching protein products. Thin stillage remaining from ethanol production is available in large volumes and may be suitable for extracting protein-rich materials. In this work protein was extracted from ground, defatted oriental mustard (Brassica juncea (L.) Czern.) meal using thin stillage. Protein extraction efficiency was studied at pH values between 7.6 and 10.4 and salt concentrations between 3.4 × 10⁻² M and 1.2 M. The optimum extraction conditions were pH 10.0 and 1.0 M NaCl. Napin and cruciferin were the most prevalent proteins in the isolate. The isolate exhibited high in vitro digestibility (74.9 ± 0.80%) and lysine content (5.2 ± 0.2 g/100 g of protein). No differences in extraction efficiency, SDS-PAGE profile, digestibility, lysine availability, or amino acid composition were observed between protein extracted with thin stillage and that extracted with NaCl solution. The use of thin stillage, in lieu of water, for protein extraction would decrease the energy requirements and waste disposal costs of the protein isolation and biofuel production processes.

  7. Biorefinery process for protein extraction from oriental mustard (Brassica juncea (L.) Czern.) using ethanol stillage

    PubMed Central

    2012-01-01

    Large volumes of treated process water are required for protein extraction, and evaporation of this water contributes greatly to the energy consumed in enriching protein products. Thin stillage remaining from ethanol production is available in large volumes and may be suitable for extracting protein-rich materials. In this work protein was extracted from ground, defatted oriental mustard (Brassica juncea (L.) Czern.) meal using thin stillage. Protein extraction efficiency was studied at pH values between 7.6 and 10.4 and salt concentrations between 3.4 × 10⁻² M and 1.2 M. The optimum extraction conditions were pH 10.0 and 1.0 M NaCl. Napin and cruciferin were the most prevalent proteins in the isolate. The isolate exhibited high in vitro digestibility (74.9 ± 0.80%) and lysine content (5.2 ± 0.2 g/100 g of protein). No differences in extraction efficiency, SDS-PAGE profile, digestibility, lysine availability, or amino acid composition were observed between protein extracted with thin stillage and that extracted with NaCl solution. The use of thin stillage, in lieu of water, for protein extraction would decrease the energy requirements and waste disposal costs of the protein isolation and biofuel production processes. PMID:22239856

  8. Development of numerical processing in children with typical and dyscalculic arithmetic skills—a longitudinal study

    PubMed Central

    Landerl, Karin

    2013-01-01

    Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge of the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period, from the beginning of Grade 2, when children were 7;6 (years;months) old, to the beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development, while within-task effects remained largely constant and showed low long-term stability before the middle of Grade 3. Children with dyscalculia showed less efficient numerical processing, reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an atypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for the identification of children who struggle in their numerical development. PMID:23898310

  9. Rapid Geometry Creation for Computer-Aided Engineering Parametric Analyses: A Case Study Using ComGeom2 for Launch Abort System Design

    NASA Technical Reports Server (NTRS)

    Hawke, Veronica; Gage, Peter; Manning, Ted

    2007-01-01

    ComGeom2, a tool developed to generate Common Geometry representation for multidisciplinary analysis, has been used to create a large set of geometries for use in a design study requiring analysis by two computational codes. This paper describes the process used to generate the large number of configurations and suggests ways to further automate the process and make it more efficient for future studies. The design geometry for this study is the launch abort system of the NASA Crew Launch Vehicle.

  10. Smith predictor with sliding mode control for processes with large dead times

    NASA Astrophysics Data System (ADS)

    Mehta, Utkal; Kaya, İbrahim

    2017-11-01

    The paper discusses the Smith Predictor scheme with a Sliding Mode Controller (SP-SMC) for processes with large dead times. This technique gives improved load-disturbance rejection with optimal variations of the control input signal. A power-rate reaching law is incorporated in the discontinuous part of the sliding mode control, such that the overall performance improves meaningfully. The proposed scheme obtains parameter values by satisfying a new performance index based on a bi-objective constraint. In a simulation study, the efficiency of the method is evaluated for robustness and transient performance against previously reported techniques.
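
    The structure of a Smith predictor is easy to state in a discrete-time simulation: the controller is fed the real output plus the undelayed-model output minus the delayed-model output, so it effectively acts on a delay-free plant. The sketch below uses a first-order-plus-dead-time process and substitutes a simple PI controller for the paper's sliding mode controller; all gains and process parameters are illustrative.

        import numpy as np

        dt, T_end = 0.1, 60.0
        K, tau, theta = 1.0, 5.0, 10.0   # FOPDT process: K e^{-theta s}/(tau s + 1)
        d = int(theta / dt)              # dead time in samples
        kp, ki = 1.2, 0.25               # PI gains (illustrative)

        y = 0.0                          # real process output
        ym = 0.0                         # model output without delay
        buf = [0.0] * d                  # delay line for the real plant input
        buf_m = [0.0] * d                # delay line for the model output
        integ = 0.0
        setpoint = 1.0

        for k in range(int(T_end / dt)):
            # Smith predictor feedback: real output plus undelayed-model
            # output minus delayed-model output, so the controller "sees"
            # a plant without dead time.
            ym_delayed = buf_m[0]
            feedback = y + ym - ym_delayed
            e = setpoint - feedback
            integ += e * dt
            u = kp * e + ki * integ

            # Propagate the delay-free model and the real delayed plant.
            ym += dt * (-ym + K * u) / tau
            u_delayed = buf.pop(0)
            buf.append(u)
            buf_m.pop(0)
            buf_m.append(ym)
            y += dt * (-y + K * u_delayed) / tau

        print("final output:", round(y, 3), "(setpoint 1.0)")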

  11. Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach

    PubMed Central

    Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.

    2007-01-01

    Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408

  12. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Doğa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.

  13. TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.

    PubMed

    Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan

    2017-01-01

    Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data-reformatting strategy based on reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully exploit the data-transmission capability of computers. We applied TDat to large-volume rigid registration and to neuron tracing in whole-brain data at single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.

  14. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doğa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167
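
    Usage of the integration follows the pattern documented for TomoPy, where the ASTRA toolbox is selected as the reconstruction algorithm. The sketch below uses a synthetic phantom, and the option values (method, iteration count) are placeholders to be tuned for a real dataset; it assumes both TomoPy and ASTRA are installed with GPU support.

        import tomopy

        obj = tomopy.shepp2d(size=256)            # synthetic test phantom
        theta = tomopy.angles(180)                # projection angles
        proj = tomopy.project(obj, theta)         # simulated sinogram

        # Reconstruct with an ASTRA GPU algorithm through TomoPy's
        # interface; TomoPy keeps its usual API, ASTRA supplies the
        # SIRT_CUDA kernel.
        options = {"proj_type": "cuda", "method": "SIRT_CUDA", "num_iter": 150}
        rec = tomopy.recon(proj, theta,
                           algorithm=tomopy.astra,
                           options=options)
        print(rec.shape)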

  15. Large area tunnel oxide passivated rear contact n-type Si solar cells with 21.2% efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Yuguo; Upadhyaya, Vijaykumar; Chen, Chia-Wei

    This paper reports on the implementation of a carrier-selective tunnel oxide passivated rear contact for high-efficiency screen-printed large-area n-type front-junction crystalline Si solar cells. It is shown that the tunnel oxide grown in nitric acid at room temperature (25°C) and capped with an n⁺ polysilicon layer provides excellent rear-contact passivation, with an implied open-circuit voltage iVoc of 714 mV and a saturation current density J0b of 10.3 fA/cm² for the back surface field region. The durability of this passivation scheme is also investigated for a back-end high-temperature process. In combination with an ion-implanted Al₂O₃-passivated boron emitter and screen-printed front metal grids, this passivated rear contact enabled 21.2% efficient front-junction Si solar cells on 239 cm² commercial-grade n-type Czochralski wafers.

  16. Efficient Surface Enhanced Raman Scattering substrates from femtosecond laser based fabrication

    NASA Astrophysics Data System (ADS)

    Parmar, Vinod; Kanaujia, Pawan K.; Bommali, Ravi Kumar; Vijaya Prakash, G.

    2017-10-01

    A fast and simple femtosecond-laser-based methodology for fabricating efficient Surface Enhanced Raman Scattering (SERS) substrates is proposed. Both nano-scaffold silicon (black silicon) and gold nanoparticles (Au-NPs) are fabricated by femtosecond-laser-based techniques suitable for mass production. The nano-rough silicon scaffold enables large electromagnetic fields from the localized surface plasmons of the decorating metallic nanoparticles. This yields a giant enhancement (on the order of 10⁴) of the Raman signal, arising from the mixed effects of electron-photon-phonon coupling, even at nanomolar concentrations of the test organic species (Rhodamine 6G). The proposed process demonstrates the low-cost, label-free applicability of these large-area SERS substrates.

  17. Implementing an Enterprise Information System to Reengineer and Streamline Administrative Processes in a Distance Learning Unit

    ERIC Educational Resources Information Center

    Abdous, M'hammed; He, Wu

    2009-01-01

    During the past three years, we have developed and implemented an enterprise information system (EIS) to reengineer and facilitate the administrative process for preparing and teaching distance learning courses in a midsized-to-large university (with 23,000 students). The outcome of the implementation has been a streamlined and efficient process…

  18. Extraction and recovery of pectic fragments from citrus processing waste for co-production with ethanol

    USDA-ARS?s Scientific Manuscript database

    Steam treatment of citrus processing waste (CPW) at 160°C followed by a rapid decompression (steam explosion) at either pH 2.8 or 4.5 provides an efficient and rapid fragmentation of protopectin in CPW and renders a large fraction of fragmented pectins, arabinans, galactans and arabinogalactans solu...

  19. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO₂ capture. Due to the computational expense, a reduced-order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  20. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency, that is, performance relative to power or energy, is one of the most important concerns when designing radar processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations in CUDA and OpenMP on two computationally efficient architectures, the Intel Haswell Core i7-4770TE and the NVIDIA Kayla platform with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirement. The use of shared memory has a significant impact on the power requirement of the GPU. A balance between the use of shared memory and main memory access leads to improved performance in a typical STAP application.

  1. Effect of drug physico-chemical properties on the efficiency of top-down process and characterization of nanosuspension.

    PubMed

    Liu, Tao; Müller, Rainer H; Möschwitzer, Jan P

    2015-01-01

    The top-down approach is frequently used for drug nanocrystal production. A large number of review papers have covered the top-down approach in terms of process parameters such as stabilizer selection. However, a very important factor, the influence of drug properties, has not been addressed so far. This review will first discuss the different nanocrystal technologies in brief. The focus will be on reviewing the influence of drug properties, such as solid state and particle morphology, on the efficiency of particle size reduction during top-down processes. Furthermore, the drug properties in the final nanosuspension are critical for drug dissolution velocity. Therefore, another focus is the characterization of drugs in the obtained nanosuspension. Drug physical properties play an important role in production efficiency. Combinative technologies using modified drugs could significantly improve the performance of top-down processes. However, further understanding of drug millability and homogenization is still needed. In addition, a carefully established characterization system for nanosuspensions is essential.

  2. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach, where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced-order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallarno, George; Rogers, James H; Maxwell, Don E

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  4. Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.

    PubMed

    Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E

    2016-01-01

    Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
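
    To make the sparsity mechanism concrete, here is a minimal sketch (not the authors' code) of the defining NNGP computation: taking the locations in a fixed ordering, each one is conditioned on at most m previously ordered nearest neighbors, so the implied lower-triangular factor has at most m off-diagonal nonzeros per row and the whole pass costs O(n m^3) flops, linear in n for fixed m. The exponential covariance and its hyperparameters are illustrative choices.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def exp_cov(d, sigma2=1.0, phi=0.5):
        """Exponential covariance; sigma2 and phi are illustrative values."""
        return sigma2 * np.exp(-np.asarray(d, dtype=float) / phi)

    def nngp_factors(coords, m=10):
        """Sparse factors of the NNGP precision: location i is conditioned on
        at most m nearest previously ordered neighbors."""
        n = coords.shape[0]
        neighbors, weights, F = [], [], np.empty(n)
        for i in range(n):
            if i == 0:
                neighbors.append(np.empty(0, dtype=int))
                weights.append(np.empty(0))
                F[0] = exp_cov(0.0)
                continue
            d = np.linalg.norm(coords[:i] - coords[i], axis=1)
            idx = np.argsort(d)[: min(m, i)]                 # neighbor set N(i)
            C_nn = exp_cov(cdist(coords[idx], coords[idx]))  # neighbor covariance
            c_in = exp_cov(d[idx])                           # cross-covariance
            b = np.linalg.solve(C_nn, c_in)                  # kriging weights
            neighbors.append(idx)
            weights.append(b)
            F[i] = exp_cov(0.0) - c_in @ b                   # conditional variance
        return neighbors, weights, F
    ```

    The joint density then factors as a product of univariate normals, w_i | w_{N(i)} ~ N(b_i' w_{N(i)}, F_i), which is exactly the sparse-precision representation the abstract refers to.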

  5. Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets

    PubMed Central

    Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O.; Gelfand, Alan E.

    2018-01-01

    Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online. PMID:29720777

  6. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (~28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required for large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier using a neural network approach developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (north of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.

  7. Evaluation of counting methods for oceanic radium-228

    NASA Astrophysics Data System (ADS)

    Orr, James C.

    1988-07-01

    Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting methods, conventional 228Ra counting methods have been compared with some promising techniques currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~$30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (~$12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. By allowing for ingrowth and then counting the 224Ra great-granddaughter, 228Ra could be back-calculated, yielding a method with high efficiency in which no sample processing is required. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
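
    The abstract compares counters by efficiency E and background B without quoting a formula. A standard relative figure of merit in low-level counting (assumed here; the paper's exact definition is not quoted) is

    \[
    \mathrm{FOM} = \frac{E^{2}}{B},
    \]

    which rewards efficiency quadratically because the signal-to-noise ratio of a background-dominated measurement scales as \(E/\sqrt{B}\). On the same assumption, a Currie-style detection limit for counting time \(t\) is \(L_D \approx 2.71 + 4.65\sqrt{Bt}\) counts, which is the kind of calculation behind the projected 30 L sample volumes.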

  8. Assessing Stress Responses in Beaked and Sperm Whales in the Bahamas

    DTIC Science & Technology

    2012-09-30

    acceptable extraction efficiency for steroids (Hayward et al. 2010; Wasser et al. 2010). The "small sample size" effect on hormone concentration was...efficiency (Wasser pers. comm., Hunt et al. unpub. data). 4) Pilot test of hormone content in seawater removed from samples. The large volume of...2006), and Wasser et al. (2010), with extraction modifications discussed above. RESULTS Sample processing Using a consistent fecal:solvent

  9. Facile fabrication of large-grain CH3NH3PbI3-xBrx films for high-efficiency solar cells via CH3NH3Br-selective Ostwald ripening

    DOE PAGES

    Yang, Mengjin; Zhang, Taiyang; Schulz, Philip; ...

    2016-08-01

    Organometallic halide perovskite solar cells (PSCs) have shown great promise as a low-cost, high-efficiency photovoltaic technology. Structural and electro-optical properties of the perovskite absorber layer are most critical to device operation characteristics. Here we present a facile fabrication of high-efficiency PSCs based on compact, large-grain, pinhole-free CH3NH3PbI3-xBrx (MAPbI3-xBrx) thin films with high reproducibility. A simple methylammonium bromide (MABr) treatment via spin-coating with a proper MABr concentration converts MAPbI3 thin films with different initial film qualities (for example, grain size and pinholes) to high-quality MAPbI3-xBrx thin films following an Ostwald ripening process, which is strongly affected by MABr concentration and is ineffective when replacing MABr with methylammonium iodide. A higher MABr concentration enhances the I-Br anion exchange reaction, yielding poorer device performance. Lastly, this MABr-selective Ostwald ripening process not only improves cell efficiency but also enhances device stability, and thus represents a simple, promising strategy for further improving PSC performance with higher reproducibility and reliability.

  10. Conservation of quantum efficiency in quantum well intermixing by stress engineering with dielectric bilayers

    NASA Astrophysics Data System (ADS)

    Arslan, Seval; Demir, Abdullah; Şahin, Seval; Aydınlı, Atilla

    2018-02-01

    In semiconductor lasers, quantum well intermixing (QWI) with high selectivity using dielectrics often results in lower quantum efficiency. In this paper, we report on an investigation of the effect of thermally induced dielectric stress on the quantum efficiency of quantum well structures in the impurity-free vacancy disordering (IFVD) process, using photoluminescence and device characterization in conjunction with microscopy. SiO2 and SixO2/SrF2 (versus SrF2) films were employed for the enhancement and suppression of QWI, respectively. Large intermixing selectivity of 75 nm (125 meV), consistent with the theoretical modeling results, with negligible effect on the suppression-region characteristics, was obtained. The SixO2 layer compensates for the large thermal expansion coefficient mismatch of SrF2 with the semiconductor and mitigates the detrimental effects of SrF2 without sacrificing its QWI benefits. The bilayer dielectric approach dramatically improved the dielectric-semiconductor interface quality. Fabricated high-power semiconductor lasers demonstrated high quantum efficiency in the lasing region using the bilayer dielectric film during the intermixing process. Our results reveal that stress engineering in IFVD is essential and that the thermal stress can be controlled by engineering the dielectric strain, opening new perspectives for QWI of photonic devices.

  11. A highly efficient approach to protein interactome mapping based on collaborative filtering framework.

    PubMed

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-09

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-tuned small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, which drives the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.
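
    The abstract names a "rescaled cosine coefficient" without giving its form, so the sketch below is only illustrative: it computes an ordinary cosine similarity over commonly observed entries of two protein feature vectors and shrinks it by the number of co-observations. The shrinkage constant `gamma` and the exact rescaling are assumptions, not the authors' definition.

    ```python
    import numpy as np

    def rescaled_cosine(u, v, gamma=5.0):
        """Cosine similarity over co-observed entries, shrunk toward zero
        when the two proteins share few observations (hypothetical form)."""
        common = (u != 0) & (v != 0)
        n = int(common.sum())
        if n == 0:
            return 0.0
        uu, vv = u[common], v[common]
        cos = float(uu @ vv) / (np.linalg.norm(uu) * np.linalg.norm(vv) + 1e-12)
        return (n / (n + gamma)) * cos   # more co-observations -> less shrinkage
    ```

    The shrinkage term is the point: on sparse interactome matrices, a raw cosine computed from one or two shared entries is unreliable, which matches the abstract's motivation for rescaling the coefficient.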

  12. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    PubMed Central

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-tuned small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, which drives the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly. PMID:25572661

  13. A Highly Efficient Approach to Protein Interactome Mapping Based on Collaborative Filtering Framework

    NASA Astrophysics Data System (ADS)

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-01

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-tuned small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, which drives the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.

  14. A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.

    PubMed

    Peng, Chao; Sahani, Sandip; Rushing, John

    2017-10-01

    We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
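
    The batch-merging step is, at its core, a union-find over labels that touch across partition boundaries. The paper performs this merge in parallel on the GPU; the sketch below is a minimal sequential CPU version of the same idea, with hypothetical `(batch_id, local_label)` keys.

    ```python
    class DisjointSet:
        """Union-find over (batch_id, local_label) pairs, used to reconnect
        feature regions split by out-of-core batch partitioning."""
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[rb] = ra

    def merge_batches(boundary_pairs):
        """boundary_pairs: iterable of ((batch, label), (batch, label)) tuples
        recording regions that touch across a partition boundary."""
        ds = DisjointSet()
        for a, b in boundary_pairs:
            ds.union(a, b)
        # map every seen local label to a canonical global region id
        return {x: ds.find(x) for x in ds.parent}
    ```

    For example, `merge_batches([((0, 3), (1, 7)), ((1, 7), (2, 2))])` maps all three local labels to a single global region id, reconnecting a feature that spans three batches.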

  15. Can cooperative behaviors promote evacuation efficiency?

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan; Zheng, Xiaoping

    2018-02-01

    This study aims to gain insight into whether cooperative behaviors can promote evacuation efficiency during an evacuation process. In this work, cooperative behaviors and evacuation efficiency are examined in detail using a cellular automata model with a behavioral extension. The simulation results show that moderate cooperative behaviors can result in the highest evacuation efficiency. It is found that in a mixture of cooperative and competitive individuals, more cooperative people will lead to relatively high evacuation efficiency, and the larger subgroup will play a leading role. This work can also provide new insights for the study of cooperative behaviors and evacuation efficiency, which can serve as a scientific decision-making basis for emergency response involving large-scale crowd evacuation in emergencies.

  16. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Moser, D.E.; Suggs, Robb M.; Cooke, W.J.

    2010-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, η, is a unique function of velocity, with an extremely large variation in the laboratory range of under 8 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the Moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early-time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from laboratory impacts and from astronomical determinations, and scaling factors are estimated. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors.

  17. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625

  18. Evolutionary process development towards next generation crystalline silicon solar cells : a semiconductor process toolbox application

    NASA Astrophysics Data System (ADS)

    John, J.; Prajapati, V.; Vermang, B.; Lorenz, A.; Allebe, C.; Rothschild, A.; Tous, L.; Uruena, A.; Baert, K.; Poortmans, J.

    2012-08-01

    Bulk crystalline silicon solar cells accounted for more than 85% of the world's rooftop module installations in 2010. With a growth rate of over 30% during the last 10 years, this technology remains the workhorse of the solar cell industry. The full aluminum back-side field (Al BSF) technology was developed in the 1990s and has followed a production learning curve of a constant 20% on average on module price. The main reason for the decrease of module prices with increasing production capacity is the effect of scaling up industrial production. To further decrease the price per watt-peak, silicon consumption has to be reduced and efficiency has to be improved. In this paper we describe a successive, efficiency-improving process development starting from the existing full Al BSF cell concept. The proposed evolutionary development includes all parts of the solar cell process: optical enhancement (texturing, polishing, anti-reflection coating), junction formation, and contacting. Novel processes are benchmarked against industrial-like baseline flows using high-efficiency cell concepts like i-PERC (Passivated Emitter and Rear Cell). While the full Al BSF crystalline silicon solar cell technology provides efficiencies of up to 18% (on Cz-Si) in production, we are achieving up to 19.4% conversion efficiency for industrially fabricated, large-area solar cells with copper-based front-side metallization and a local Al BSF, applying the semiconductor toolbox.

  19. Development of high efficiency (14 percent) solar cell array module

    NASA Technical Reports Server (NTRS)

    Iles, P. A.; Khemthong, S.; Olah, S.; Sampson, W. J.; Ling, K. S.

    1980-01-01

    Most effort was concentrated on the development of procedures to provide large-area (3 in. diameter), high-efficiency (16.5 percent AM1, 28 C) P+NN+ solar cells. Intensive tests with 3 in. slices gave consistently lower efficiency (13.5 percent). The problems were identified as incomplete formation of an optimum back surface field (BSF) and interaction of the BSF process with the shallow P+ junction. The problem was shown not to be caused by reduced quality of silicon near the edges of the larger slices.

  20. NOVEL BINDERS AND METHODS FOR AGGLOMERATION OF ORE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.K. Kawatra; T.C. Eisele; J.A. Gurtler

    2005-04-01

    Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. A primary example is copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process. As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching.

  1. Novel Binders and Methods for Agglomeration of Ore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. K. Kawatra; T. C. Eisele; J. A. Gurtler

    2004-03-31

    Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily. A primary example of this is copper heap leaching, where there are no binders that will work in the acidic environment encountered in this process. As a result, operators of acidic heap-leach facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of other agglomeration applications, particularly advanced primary ironmaking.

  2. Controlling CH3NH3PbI(3-x)Cl(x) Film Morphology with Two-Step Annealing Method for Efficient Hybrid Perovskite Solar Cells.

    PubMed

    Liu, Dong; Wu, Lili; Li, Chunxiu; Ren, Shengqiang; Zhang, Jingquan; Li, Wei; Feng, Lianghuan

    2015-08-05

    Methylammonium lead halide perovskite solar cells have become very attractive because they can be prepared with low-cost, solution-processable technology, and their power conversion efficiency has increased from 3.9% to 20% in recent years. However, the high performance of perovskite photovoltaic devices depends on a complicated process to prepare compact perovskite films with large grain size. Herein, a new method is developed to achieve excellent CH3NH3PbI3-xClx films with fine morphology and crystallization, based on one-step deposition and a two-step annealing process. The method includes spin-coating deposition of the perovskite films from a precursor solution of PbI2, PbCl2, and CH3NH3I at the molar ratio 1:1:4 in dimethylformamide (DMF), followed by two-step annealing (TSA). The first annealing is a solvent-induced process in DMF that promotes migration and interdiffusion of the solvent-assisted precursor ions and molecules and enables large grain growth. The second annealing is a thermal-induced process that further improves the morphology and crystallization of the films. Compact perovskite films were successfully prepared with grain sizes up to 1.1 μm according to SEM observation. The PL decay lifetime and the optical energy gap for the film with two-step annealing are 460 ns and 1.575 eV, respectively, while they are 307 and 327 ns and 1.577 and 1.582 eV for the films annealed by one-step thermal and one-step solvent processes. Based on the TSA process, the photovoltaic devices exhibit a best efficiency of 14% under AM 1.5G irradiation (100 mW·cm⁻²).

  3. Evaluation of grid generation technologies from an applied perspective

    NASA Technical Reports Server (NTRS)

    Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.

    1995-01-01

    An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation, and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. It was concluded that a single grid generation methodology is not universally suited for all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.

  4. Large-area, lightweight and thick biomimetic composites with superior material properties via fast, economic, and green pathways.

    PubMed

    Walther, Andreas; Bjurhager, Ingela; Malho, Jani-Markus; Pere, Jaakko; Ruokolainen, Janne; Berglund, Lars A; Ikkala, Olli

    2010-08-11

    Although remarkable success has been achieved to mimic the mechanically excellent structure of nacre in laboratory-scale models, it remains difficult to foresee mainstream applications due to time-consuming sequential depositions or energy-intensive processes. Here, we introduce a surprisingly simple and rapid methodology for large-area, lightweight, and thick nacre-mimetic films and laminates with superior material properties. Nanoclay sheets with soft polymer coatings are used as ideal building blocks with intrinsic hard/soft character. They are forced to rapidly self-assemble into aligned nacre-mimetic films via paper-making, doctor-blading or simple painting, giving rise to strong and thick films with tensile modulus of 45 GPa and strength of 250 MPa, that is, partly exceeding nacre. The concepts are environmentally friendly, energy-efficient, and economic and are ready for scale-up via continuous roll-to-roll processes. Excellent gas barrier properties, optical translucency, and extraordinary shape-persistent fire-resistance are demonstrated. We foresee advanced large-scale biomimetic materials, relevant for lightweight sustainable construction and energy-efficient transportation.

  5. Low-temperature, Low-Energy, and High-Efficiency Pretreatment Technology for Large Wood Chips with a Redox Couple Catalyst.

    PubMed

    Gogoi, Parikshit; Zhang, Zhe; Geng, Zhishuai; Liu, Wei; Hu, Weize; Deng, Yulin

    2018-03-22

    The pretreatment of lignocellulosic biomass plays a vital role in the conversion of cellulosic biomass to bioethanol, especially for softwoods and hardwoods. Although many pretreatment technologies have been reported so far, only a few pretreatment methods can handle large wood chips directly. To improve the efficiency of pretreatment, existing technologies require grinding the wood into small particles, which is an energy-consuming process. Herein, for the first time, we report a simple, effective, low-temperature (≈100 °C) process for the direct pretreatment of hardwood (HW) and softwood (SW) chips using a catalytic system of FeCl3/NaNO3 (FCSNRC). The pretreatment experiments were conducted systematically, and cellulose-to-sugar conversions of 71.53% and 70.66% were obtained for large HW and SW chips, respectively. The new method reported here overcomes one of the critical barriers in biomass-to-biofuel conversion, and both grinding and thermal energy requirements can be reduced significantly.

  6. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    NASA Astrophysics Data System (ADS)

    Minaudo, Camille; Curie, Florence; Jullian, Yann; Gassama, Nathalie; Moatar, Florentina

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  7. Simulation research on the process of large scale ship plane segmentation intelligent workshop

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Liao, Liangchuang; Zhou, Chao; Xue, Rui; Fu, Wei

    2017-04-01

    The large-scale ship plane segmentation intelligent workshop is a new development, and there has been no prior research in this field domestically or abroad. The mode of production must be transformed from the existing Industry 2.0 (or partially Industry 3.0) pattern of "human-brain analysis and judgment + machine manufacturing" to "machine analysis and judgment + machine manufacturing". In this transformation, a great number of issues in management and technology need to be settled, such as the evolution of the workshop structure, the development of intelligent equipment, and changes in the business model; together these amount to a reformation of the whole workshop. Process simulation in this project verifies the general layout and process flow of the large-scale ship plane segmentation intelligent workshop and analyzes the workshop's working efficiency, which is significant for the next step of the transformation of the plane segmentation intelligent workshop.

  8. Ultrafast laser direct hard-mask writing for high efficiency c-Si texture designs

    NASA Astrophysics Data System (ADS)

    Kumar, Kitty; Lee, Kenneth K. C.; Nogami, Jun; Herman, Peter R.; Kherani, Nazir P.

    2013-03-01

    This study reports a high-resolution hard-mask laser writing technique to facilitate the selective etching of crystalline silicon (c-Si) into an inverted-pyramidal texture with feature size and periodicity on the order of the wavelength, which thus provides both anti-reflection and effective light-trapping of infrared and visible light. The process also enables engineered positional placement of the inverted pyramids, thereby providing another parameter for the optimal design of an optically efficient pattern. The proposed technique, a non-cleanroom process, is scalable for large-area micro-fabrication of high-efficiency thin c-Si photovoltaics. Optical wave simulations suggest the fabricated textured surface with 1.3 μm inverted pyramids and a single anti-reflective coating increases the relative energy conversion efficiency by 11% compared to the PERL-cell texture with 9 μm inverted pyramids on a 400 μm thick wafer. This efficiency gain is anticipated to improve further for thinner wafers due to enhanced diffractive light-trapping effects.

  9. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using 'loopless constraints'. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm, Fast-SNP, inspired by recent results on sparse null-space pursuit (SNP). By finding a reduced feasible 'loop-law' matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
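
    For context, the loop-law constraints that turn the LP into a MILP are, in the standard loopless-FBA formulation of Schellenberger et al. (which Fast-SNP leaves intact; it only shrinks the loop-law matrix):

    \[
    \begin{aligned}
    -M\,(1-a_j) \;\le\; v_j \;&\le\; M\,a_j, \\
    -M\,a_j + (1-a_j) \;\le\; g_j \;&\le\; -a_j + M\,(1-a_j), \\
    N_{\mathrm{int}}^{\top} g = 0, \qquad a_j \;&\in\; \{0,1\},
    \end{aligned}
    \]

    where \(v_j\) is the flux through internal reaction \(j\), the binary \(a_j\) encodes its direction, \(g_j\) is a pseudo chemical-potential difference forced to oppose the flux sign, and the columns of \(N_{\mathrm{int}}\) span the internal null space (the "loop-law" matrix). The size of \(N_{\mathrm{int}}\) that must be enforced is what the pre-processing reduces.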

  10. Characteristics and adaptability of iron- and sulfur-oxidizing microorganisms used for the recovery of metals from minerals and their concentrates

    PubMed Central

    Rawlings, Douglas E

    2005-01-01

    Microorganisms are used in large-scale heap or tank aeration processes for the commercial extraction of a variety of metals from their ores or concentrates. These include copper, cobalt, gold and, in the past, uranium. The metal solubilization processes are considered to be largely chemical, with the microorganisms providing the chemicals and the space (exopolysaccharide layer) where the mineral dissolution reactions occur. Temperatures at which these processes are carried out can vary from ambient to 80°C, and the types of organisms present depend to a large extent on the process temperature used. Irrespective of the operation temperature, biomining microbes have several characteristics in common. One shared characteristic is their ability to produce the ferric iron and sulfuric acid required to degrade the mineral and facilitate metal recovery. Other characteristics are their ability to grow autotrophically, their acid tolerance and their inherent metal resistance or ability to acquire metal resistance. Although the microorganisms that drive the process have the above properties in common, biomining microbes usually occur in consortia in which cross-feeding may occur, such that a combination of microbes, including some with heterotrophic tendencies, may contribute to the efficiency of the process. The remarkable adaptability of these organisms is assisted by several of the processes being continuous-flow systems that enable the continual selection of microorganisms that are more efficient at mineral degradation. Adaptability is also assisted by the processes being open and non-sterile, thereby permitting new organisms to enter. This openness allows for the possibility of new genes that improve cell fitness being selected from the horizontal gene pool. Characteristics that biomining microorganisms have in common and examples of their remarkable adaptability are described. PMID:15877814

  11. Characteristics and adaptability of iron- and sulfur-oxidizing microorganisms used for the recovery of metals from minerals and their concentrates.

    PubMed

    Rawlings, Douglas E

    2005-05-06

    Microorganisms are used in large-scale heap or tank aeration processes for the commercial extraction of a variety of metals from their ores or concentrates. These include copper, cobalt, gold and, in the past, uranium. The metal solubilization processes are considered to be largely chemical, with the microorganisms providing the chemicals and the space (exopolysaccharide layer) where the mineral dissolution reactions occur. Temperatures at which these processes are carried out can vary from ambient to 80 degrees C, and the types of organisms present depend to a large extent on the process temperature used. Irrespective of the operation temperature, biomining microbes have several characteristics in common. One shared characteristic is their ability to produce the ferric iron and sulfuric acid required to degrade the mineral and facilitate metal recovery. Other characteristics are their ability to grow autotrophically, their acid tolerance and their inherent metal resistance or ability to acquire metal resistance. Although the microorganisms that drive the process have the above properties in common, biomining microbes usually occur in consortia in which cross-feeding may occur, such that a combination of microbes, including some with heterotrophic tendencies, may contribute to the efficiency of the process. The remarkable adaptability of these organisms is assisted by several of the processes being continuous-flow systems that enable the continual selection of microorganisms that are more efficient at mineral degradation. Adaptability is also assisted by the processes being open and non-sterile, thereby permitting new organisms to enter. This openness allows for the possibility of new genes that improve cell fitness being selected from the horizontal gene pool. Characteristics that biomining microorganisms have in common and examples of their remarkable adaptability are described.

  12. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the super-spectral Copernicus Sentinels, EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, COSMO-SkyMed, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection, using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation for temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
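
    As an illustration of the first speed-up strategy named above, here is a minimal random Fourier features sketch (the standard Rahimi-Recht construction, not the authors' code): an RBF-kernel GP is approximated by Bayesian linear regression on D random cosine features, cutting training cost from O(n^3) to O(nD^2) while retaining predictive variances.

    ```python
    import numpy as np

    def rff(X, n_features=500, lengthscale=1.0, seed=0):
        """Random cosine features: z(x) . z(y) approximates an RBF kernel.
        A fixed seed keeps train and test features consistent."""
        rng = np.random.default_rng(seed)
        W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    def fit_predict(Xtr, ytr, Xte, noise=0.1, **kw):
        """Bayesian linear regression on the features (unit-variance weight
        prior): O(n D^2) instead of the O(n^3) of an exact GP."""
        Ztr, Zte = rff(Xtr, **kw), rff(Xte, **kw)
        A = Ztr.T @ Ztr + noise**2 * np.eye(Ztr.shape[1])
        w = np.linalg.solve(A, Ztr.T @ ytr)
        mean = Zte @ w
        # model (epistemic) part of the predictive variance
        var = noise**2 * np.einsum('ij,jk,ik->i', Zte, np.linalg.inv(A), Zte)
        return mean, var
    ```

    The same feature map drops into classification (for the cloud-screening case) by replacing the linear-Gaussian solve with a logistic model on the features.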

  13. Maximizing root/rhizosphere efficiency to improve crop productivity and nutrient use efficiency in intensive agriculture of China.

    PubMed

    Shen, Jianbo; Li, Chunjian; Mi, Guohua; Li, Long; Yuan, Lixing; Jiang, Rongfeng; Zhang, Fusuo

    2013-03-01

    Root and rhizosphere research has been conducted for many decades, but the underlying strategy of root/rhizosphere processes and management in intensive cropping systems remains largely to be determined. Improved grain production to meet the food demand of an increasing population has been highly dependent on chemical fertilizer input based on the traditionally assumed notion of 'high input, high output', which results in overuse of fertilizers but ignores the biological potential of roots or the rhizosphere for efficient mobilization and acquisition of soil nutrients. Root exploration of soil nutrient resources and root-induced rhizosphere processes play an important role in controlling nutrient transformation, efficient nutrient acquisition and use, and thus crop productivity. The efficiency of the root/rhizosphere in terms of improved nutrient mobilization, acquisition, and use can be fully exploited by: (1) manipulating root growth (i.e. root development and size, root system architecture, and distribution); (2) regulating rhizosphere processes (i.e. rhizosphere acidification, organic anion and acid phosphatase exudation, localized application of nutrients, rhizosphere interactions, and use of efficient crop genotypes); and (3) optimizing root zone management to synchronize root growth and soil nutrient supply with the demand of nutrients in cropping systems. Experiments have shown that root/rhizosphere management is an effective approach to increase both nutrient use efficiency and crop productivity for sustainable crop production. The objectives of this paper are to summarize the principles of root/rhizosphere management and provide an overview of some successful case studies on how to exploit the biological potential of the root system and rhizosphere processes to improve crop productivity and nutrient use efficiency.

  14. It is more efficient to type: innovative self-registration and appointment self-arrival system improves the patient reception process.

    PubMed

    Knight, Vickie; Guy, Rebecca J; Wand, Handan; Lu, Heng; McNulty, Anna

    2014-06-01

    In 2010, we introduced an express sexually transmitted infection/HIV testing service at a large metropolitan sexual health clinic, which significantly increased clinical service capacity. However, it also increased reception staff workload and caused backlogs of patients waiting to register or check in for appointments. We therefore implemented a new electronic self-registration and appointment self-arrival system in March 2012 to increase administrative efficiency and reduce waiting time for patients. We compared the median processing time overall and for each step of the registration and arrival process, as well as the completeness of patient contact information recorded, in a 1-week period before and after the redesign of the registration system. χ² tests and rank-sum tests were used. Before the redesign, the median processing time was 8.33 minutes (interquartile range [IQR], 6.82-15.43); this decreased by 30% to 5.83 minutes (IQR, 4.75-7.42) when the new electronic self-registration and appointment self-arrival system was introduced (P < 0.001). The largest gain in efficiency was in the time taken to prepare the medical record for the clinician, which fell from a median of 5.31 minutes (IQR, 4.02-8.29) to 0.57 minutes (IQR, 0.38-1) across the two periods. Before implementation, 20% of patients provided a postal address and 31% an e-mail address, increasing to 60% and 70%, respectively, after the redesign (P < 0.001). Our evaluation shows that an electronic patient self-registration and appointment self-arrival system can improve clinic efficiency and save patient time. Systems like this one could be used by any outpatient service with large patient volumes, either as an integrated part of the electronic patient management system or as a standalone feature.

  15. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
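
    In symbols (notation ours, not quoted from the paper): writing \(C_{\mathrm{cl}}\) and \(C_{q}\) for the classical and quantum memory required to sample a given rare-event class, the first metric is the ratio

    \[
    r = \frac{C_{\mathrm{cl}}}{C_{q}},
    \]

    and an extreme advantage for a process family indexed by \(N\) means \(\lim_{N\to\infty} C_{\mathrm{cl}}(N) = \infty\) while \(\sup_{N} C_{q}(N) < \infty\).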

  16. Streamers and their applications

    NASA Astrophysics Data System (ADS)

    Pemen, A. J. M.

    2011-10-01

    In this invited lecture we give an overview of our 15 years of experience in streamer plasma research. Efforts are directed at integrating the competence areas of plasma physics, pulsed power technology, and chemical processing. The current status is the development of a large-scale pulsed corona system for gas treatment. Applications to biogas conditioning, VOC removal, odor abatement, and control of traffic emissions have been demonstrated. Detailed research on electrical and chemical processes resulted in a substantial boost in efficiencies. Energy transfer efficiency to the plasma was raised to above 90%. Simultaneous improvement of the plasma chemistry resulted in highly efficient radical generation: O-radical production up to 50% of the theoretical maximum has been achieved. A major challenge in pulsed-power-driven streamers is to unravel, understand, and ultimately control the complex interactions between the transient plasma, the electrical circuits, and the process. An even greater challenge is to produce electron energies that match the activation energies of the process. We will discuss our ideas on adjusting pulsed power waveforms and plasma reactor settings to obtain more controlled catalytic processing: the ``Chemical Transistor'' concept.

  17. Distance-limited perpendicular distance sampling for coarse woody debris: theory and field results

    Treesearch

    Mark J. Ducey; Micheal S. Williams; Jeffrey H. Gove; Steven Roberge; Robert S. Kenning

    2013-01-01

    Coarse woody debris (CWD) has been identified as an important component in many forest ecosystem processes. Perpendicular distance sampling (PDS) is one of several efficient new methods that have been proposed for CWD inventory. One drawback of PDS is that the maximum search distance can be very large, especially if CWD diameters are large or the volume factor...

  18. Low joining efficiency and non-conservative repair of two distant double-strand breaks in mouse embryonic stem cells.

    PubMed

    Boubakour-Azzouz, Imenne; Ricchetti, Miria

    2008-02-01

    Efficient and faithful repair of DNA double-strand breaks (DSBs) is critical for genome stability. To understand whether cells carrying a functional repair apparatus are able to efficiently heal two distant chromosome ends, and whether this DNA lesion might result in genome rearrangements, we induced DSBs in genetically modified mouse embryonic stem cells carrying two I-SceI sites in cis separated by a distance of 9 kbp. We show that in this context non-homologous end-joining (NHEJ) can repair using standard DNA pairing of the broken ends, but it also joins 3' non-complementary overhangs that require unusual joining intermediates. The repair efficiency of this lesion was dramatically low, and the extent of genome alterations was high, in striking contrast with the spectra of repair events reported for two collinear DSBs in other experimental systems. The dramatic decline in accuracy suggests that significant constraints operate in the repair of these distant DSBs, which may also account for the low efficiency of this process. These findings provide important insights into the mechanism of repair by NHEJ and how this process may protect the genome from large rearrangements.

  19. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, as it has important implications for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA and then developed a step-wise BA method for integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, so that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie-point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and computation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
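    The conjugate gradient step mentioned above can be illustrated with a small Python sketch: solving a large, sparse, symmetric positive-definite system without factorizing the matrix. The system below is randomly generated and stands in for the adjustment's high-order equations; the sizes and construction are assumptions for illustration only.

      import numpy as np
      from scipy.sparse import identity, random as sparse_random
      from scipy.sparse.linalg import cg

      # Hypothetical sparse symmetric positive-definite system A x = b,
      # standing in for the high-order equations of a large block adjustment.
      n = 5000
      M = sparse_random(n, n, density=1e-3, format="csr", random_state=0)
      A = M @ M.T + identity(n) * (n * 1e-3)  # SPD by construction
      b = np.ones(n)

      # Conjugate gradient needs only sparse matrix-vector products per
      # iteration, so the matrix is never factorized or densified.
      x, info = cg(A, b, maxiter=2000)
      print("converged" if info == 0 else f"cg info = {info}",
            "| residual norm:", np.linalg.norm(A @ x - b))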

  20. Energy Efficiency and Universal Design in Home Renovations - A Comparative Review.

    PubMed

    Kapedani, Ermal; Herssens, Jasmien; Verbeeck, Griet

    2016-01-01

    Policy and societal objectives indicate a large need for housing renovations that both accommodate lifelong living and significantly increase energy efficiency. However, these two areas of research have not yet been examined in conjunction, and this paper hypothesizes that this is a missed opportunity to create better renovation concepts. The paper presents a comparative review of research in Energy Efficiency and Universal Design in order to find the similarities and differences in both depth and breadth of knowledge. Scientific literature in the two fields reveals a disparate depth of knowledge in areas of theory, research approach, and degree of implementation in society. Universal Design and Energy Efficiency are part of a trajectory of expanding scope towards greater sustainability, and although social urgency has been a driver of the research intensity and approach in both fields, energy efficiency takes an engineering, problem-solving approach while Universal Design has a more sociological, user-focused one. These different approaches are reflected in the way home owners in Energy Efficiency research are viewed as consumers and decision makers whose drivers are studied, while Universal Design treats home owners as informants in the design process and studies their needs. There is an inherent difficulty in directly merging Universal Design and Energy Efficiency at a conceptual level, because Energy Efficiency is understood as a set of measures, i.e., a product, while Universal Design is part of a (design) process. The conceptual difference is apparent in their implementation as well. Internationally, energy efficiency in housing has been largely imposed through legislation, while legislation directly mandating Universal Design is either non-existent or has an explicit focus on accessibility. However, Energy Efficiency and Universal Design can be complementary concepts, and even though it is more complex than expected, the combination offers possibilities to advance knowledge in both fields.

  1. An exergy approach to efficiency evaluation of desalination

    NASA Astrophysics Data System (ADS)

    Ng, Kim Choon; Shahzad, Muhammad Wakil; Son, Hyuk Soo; Hamed, Osman A.

    2017-05-01

    This paper presents an evaluation of process efficiency based on the consumption of primary energy for all types of practical desalination methods available hitherto. The conventional performance ratio has, thus far, been defined with respect to the consumption of derived energy, such as electricity or steam, which is subject to the conversion losses of the power plants and boilers that burn the input primary fuels. As derived energies are usually expressed in units of kWh or Joules, these units cannot accurately differentiate the grade of energy supplied to the processes. In this paper, the specific energy consumption is revisited for all large-scale desalination plants. In today's combined production of electricity and desalinated water, accomplished with the advanced cogeneration concept, the input exergy of fuels is utilized optimally and efficiently in a temperature-cascaded manner. By discerning the exergy destruction successively in the turbines and desalination processes, the relative contribution of primary energy to the processes can be accurately apportioned. Although efficiency is not itself a law of thermodynamics, a common platform for expressing figures of merit for the efficacy of desalination processes can be developed meaningfully, with thermodynamic rigor up to the ideal or thermodynamic limit of seawater desalination, for all scientists and engineers to aspire to.
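    To make the distinction between derived and primary energy concrete, here is a minimal Python sketch that apportions a plant's electricity and steam consumption back to primary fuel exergy. The conversion factors and plant figures are invented for illustration and are not values from the paper.

      # Illustrative conversion of derived-energy use (electricity, steam)
      # into primary fuel exergy; all factors and plant figures below are
      # assumptions for illustration, not values from the paper.
      PLANT_ELECTRIC_EFF = 0.45    # assumed fuel-exergy-to-electricity efficiency
      STEAM_EXERGY_FACTOR = 0.18   # assumed exergy per unit of low-grade steam heat

      def primary_exergy_kwh(electric_kwh, steam_heat_kwh):
          """Apportion derived-energy consumption back to primary fuel exergy."""
          return electric_kwh / PLANT_ELECTRIC_EFF + steam_heat_kwh * STEAM_EXERGY_FACTOR

      # Example: a membrane plant drawing 4 kWh(e)/m^3 versus a thermal plant
      # drawing 1.5 kWh(e)/m^3 plus 60 kWh/m^3 of low-grade steam heat.
      print("Membrane:", primary_exergy_kwh(4.0, 0.0), "kWh primary exergy per m^3")
      print("Thermal :", primary_exergy_kwh(1.5, 60.0), "kWh primary exergy per m^3")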

  2. Large improvement of phosphorus incorporation efficiency in n-type chemical vapor deposition of diamond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohtani, Ryota; Yamamoto, Takashi; Janssens, Stoffel D.

    2014-12-08

    Microwave plasma enhanced chemical vapor deposition is a promising way to generate n-type, e.g., phosphorus-doped, diamond layers for the fabrication of electronic components, which can operate at extreme conditions. However, a deeper understanding of the doping process is lacking and low phosphorus incorporation efficiencies are generally observed. In this work, it is shown that systematically changing the internal design of a non-commercial chemical vapor deposition chamber, used to grow diamond layers, leads to a large increase of the phosphorus doping efficiency in diamond, produced in this device, without compromising its electronic properties. Compared to the initial reactor design, the doping efficiency is about 100 times higher, reaching 10%, and for a very broad doping range, the doping efficiency remains highly constant. It is hypothesized that redesigning the deposition chamber generates a higher flow of active phosphorus species towards the substrate, thereby increasing phosphorus incorporation in diamond and reducing deposition of phosphorus species at reactor walls, which additionally reduces undesirable memory effects.

  3. Establishment of an efficient virus-induced gene silencing (VIGS) assay in Arabidopsis by Agrobacterium-mediated rubbing infection.

    PubMed

    Manhães, Ana Marcia E de A; de Oliveira, Marcos V V; Shan, Libo

    2015-01-01

    Several VIGS protocols have been established for high-throughput functional genomic screens, as the approach bypasses the time-consuming and laborious process of generating transgenic plants. The silencing efficiency of this approach is largely hindered by a technically demanding step in which the first pair of newly emerged true leaves at the 2-week-old stage is infiltrated with a needleless syringe. To further optimize VIGS efficiency and achieve rapid inoculation for large-scale functional genomic studies, here we describe a protocol for an efficient VIGS assay in Arabidopsis using Agrobacterium-mediated rubbing infection. The Agrobacterium inoculation is performed by simply rubbing the leaves with Filter Agent Celite® 545. The highly efficient and uniform silencing effect was indicated by the development of a visible albino phenotype due to silencing of the Cloroplastos alterados 1 (CLA1) gene in the newly emerged leaves. In addition, the albino phenotype could be observed in stems and flowers, indicating the method's potential application for gene functional studies in the late vegetative development and flowering stages.

  4. Optimizing biomass feedstock logistics for forest residue processing and transportation on a tree-shaped road network

    Treesearch

    Hee Han; Woodam Chung; Lucas Wells; Nathaniel Anderson

    2018-01-01

    An important task in forest residue recovery operations is to select the most cost-efficient feedstock logistics system for a given distribution of residue piles, road access, and available machinery. Notable considerations include inaccessibility of treatment units to large chip vans and frequent, long-distance mobilization of forestry equipment required to process...

  5. Efficient processing of fluorescence images using directional multiscale representations.

    PubMed

    Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M

    2014-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the vast majority handled manually, slowing significantly data processing and limiting often the information gained to a descriptive level. Thus, there is an urgent need for developing highly efficient automated analysis and processing tools for fluorescent images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescent image analysis of biomedical data.

  7. Integration of the stratigraphic aspects of very large sea-floor databases using information processing

    USGS Publications Warehouse

    Jenkins, Clinton N.; Flocks, J.; Kulp, M.; ,

    2006-01-01

    Information-processing methods are described that integrate the stratigraphic aspects of large and diverse collections of sea-floor sample data. They efficiently convert common types of sea-floor data into database and GIS (geographical information system) tables, visual core logs, stratigraphic fence diagrams and sophisticated stratigraphic statistics. The input data are held in structured documents, essentially written core logs that are particularly efficient to create from raw input datasets. Techniques are described that permit efficient construction of regional databases consisting of hundreds of cores. The sedimentological observations in each core are located by their downhole depths (metres below sea floor - mbsf) and also by a verbal term that describes the sample 'situation' - a special fraction of the sediment or position in the core. The main processing creates a separate output event for each instance of top, bottom and situation, assigning top-base mbsf values from numeric or, where possible, from word-based relative locational information such as 'core catcher' in reference to sampler device, and recovery or penetration length. The processing outputs represent the sub-bottom as a sparse matrix of over 20 sediment properties of interest, such as grain size, porosity and colour. They can be plotted in a range of core-log programs including an in-built facility that better suits the requirements of sea-floor data. Finally, a suite of stratigraphic statistics are computed, including volumetric grades, overburdens, thicknesses and degrees of layering. © The Geological Society of London 2006.

  8. Energy extraction of a spinning particle via the super Penrose process from an extremal Kerr black hole

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Wen-Biao

    2018-03-01

    The energy extraction of the collisional Penrose process has been investigated in recent years. Previous researchers mainly concentrated on the case of non-spinning massive or massless particles, and they discovered that when the collision occurs near the horizon of an extremal rotating black hole, arbitrarily large efficiency can be achieved with the particle's angular momentum below the critical value, L1 < 2. In this paper, the energy extraction of spinning massive particles is calculated via the super Penrose process. We obtain the dependence of the impact factor and the turning points on the particle's spin s. The super Penrose process can occur only when s ≤ 1 and J1 < 2, where J1 is the spinning particle's angular momentum. It is found that the efficiency of the energy extraction increases monotonically with the particle's spin s for s < 1, and it can become arbitrarily high when the collision occurs close to the horizon. We compare the maximum extracted energy of spinning particles with that of the non-spinning case and find a significant increase in the extracted energy. When s → 1, the maximum extracted energy can be orders of magnitude larger than in the non-spinning case. For astrophysical black holes, large efficiency is also obtained. Naturally, when the particle's spin s ≪ 1, the result reduces to the non-spinning case.

  9. Tuning Chemical Potential Difference across Alternately Doped Graphene p-n Junctions for High-Efficiency Photodetection.

    PubMed

    Lin, Li; Xu, Xiang; Yin, Jianbo; Sun, Jingyu; Tan, Zhenjun; Koh, Ai Leen; Wang, Huan; Peng, Hailin; Chen, Yulin; Liu, Zhongfan

    2016-07-13

    Being atomically thin, graphene-based p-n junctions hold great promise for applications in ultrasmall high-efficiency photodetectors. It is well-known that the efficiency of such photodetectors can be improved by optimizing the chemical potential difference of the graphene p-n junction. However, to date, such tuning has been limited to a few hundred millielectronvolts. To improve this critical parameter, here we report that using a temperature-controlled chemical vapor deposition process, we successfully achieved modulation-doped growth of an alternately nitrogen- and boron-doped graphene p-n junction with a tunable chemical potential difference up to 1 eV. Furthermore, such p-n junction structure can be prepared on a large scale with stable, uniform, and substitutional doping and exhibits a single-crystalline nature. This work provides a feasible method for synthesizing low-cost, large-scale, high efficiency graphene p-n junctions, thus facilitating their applications in optoelectronic and energy conversion devices.

  10. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    PubMed

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM), which trades space for speed of processing to create an intragroup communication approach that is firing-rate independent and offers more flexibility in connectivity than cross-bar architectures; and 2) wired multiple-input multiple-output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating a large-scale spiking neural network architecture. The analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. By combining STM with MIMO-OFDM techniques, the resulting system offers flexible and scalable connectivity as well as a power- and area-efficient solution for the implementation of very large-scale spiking neural architectures in hardware.

  11. A robust embedded vision system feasible white balance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. To meet the needs of efficiency and accuracy in embedded machine vision systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G, and B components of the raw data is used to initialize the subsequent iterative method. Next, the bilinear interpolation algorithm is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291, and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm effectively avoids the color deviation problem, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision systems.
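    The overall shape of such an iterative, adaptive-step approach can be sketched in a few lines of Python. This is a generic gray-world iteration with a halving step rule, offered only as a hedged illustration; the paper's initialization statistics, demosaicing step, and exact adjustment rule are not reproduced here.

      import numpy as np

      # Generic iterative gray-world white balance with a halving step rule;
      # a stand-in sketch, not the paper's exact adaptive-step scheme.
      def white_balance(img, max_iter=20, tol=1e-3):
          """img: float RGB array in [0, 1], shape (H, W, 3)."""
          gains = np.ones(3)
          step = 0.5
          for _ in range(max_iter):
              means = (img * gains).reshape(-1, 3).mean(axis=0)
              err = means - means.mean()        # deviation from the gray-world target
              if np.abs(err).max() < tol:
                  break
              new_gains = gains * (1.0 - step * err / means.mean())
              if np.any(new_gains <= 0):        # adaptive step: shrink on overshoot
                  step *= 0.5
                  continue
              gains = new_gains
          return np.clip(img * gains, 0.0, 1.0), gains

      cast = np.array([1.2, 1.0, 0.7])          # simulated color cast
      img = np.clip(np.random.rand(120, 160, 3) * cast, 0.0, 1.0)
      balanced, g = white_balance(img)
      print("channel gains:", g.round(3))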

  12. Efficient and scalable ionization of neutral atoms by an orderly array of gold-doped silicon nanowires

    NASA Astrophysics Data System (ADS)

    Bucay, Igal; Helal, Ahmed; Dunsky, David; Leviyev, Alex; Mallavarapu, Akhila; Sreenivasan, S. V.; Raizen, Mark

    2017-04-01

    Ionization of atoms and molecules is an important process in many applications and processes such as mass spectrometry. Ionization is typically accomplished by electron bombardment, and while it is scalable to large volumes, is also very inefficient due to the small cross section of electron-atom collisions. Photoionization methods can be highly efficient, but are not scalable due to the small ionization volume. Electric field ionization is accomplished using ultra-sharp conducting tips biased to a few kilovolts, but suffers from a low ionization volume and tip fabrication limitations. We report on our progress towards an efficient, robust, and scalable method of atomic and molecular ionization using orderly arrays of sharp, gold-doped silicon nanowires. As demonstrated in earlier work, the presence of the gold greatly enhances the ionization probability, which was attributed to an increase in available acceptor surface states. We present here a novel process used to fabricate the nanowire array, results of simulations aimed at optimizing the configuration of the array, and our progress towards demonstrating efficient and scalable ionization.

  13. Process in manufacturing high efficiency AlGaAs/GaAs solar cells by MO-CVD

    NASA Technical Reports Server (NTRS)

    Yeh, Y. C. M.; Chang, K. I.; Tandon, J.

    1984-01-01

    Manufacturing technology for mass producing high efficiency GaAs solar cells is discussed. Progress in using a high-throughput MO-CVD reactor to produce high efficiency GaAs solar cells is described. Thickness and doping concentration uniformity of metal-organic chemical vapor deposition (MO-CVD) GaAs and AlGaAs layer growth are discussed. In addition, new tooling designs are given which increase the throughput of solar cell processing. To date, 2 cm x 2 cm AlGaAs/GaAs solar cells with efficiencies up to 16.5% have been produced. In order to meet throughput goals for mass producing GaAs solar cells, a large MO-CVD system (Cambridge Instrument Model MR-200) was installed, with a susceptor initially capable of processing 20 wafers (up to 75 mm diameter) in a single growth run. In the MR-200, the sequencing of the gases and the heating power are controlled by a microprocessor-based programmable control console. Hence, operator errors can be reduced, leading to a more reproducible production sequence.

  14. Three essays on U.S. electricity restructuring

    NASA Astrophysics Data System (ADS)

    Sergici, Sanem I.

    2008-04-01

    The traditional structure of the electricity sector in the U.S. has been that of large vertically integrated companies with sole responsibility for distributing power to end users within a franchise area. The restructuring of this sector over the past 10-20 years has profoundly altered this picture. This dissertation examines three aspects of that restructuring process. The first chapter investigates the impacts of divestitures of generation, an important part of the restructuring process, on the efficiency of distribution systems. We find that while divestitures as a group do not significantly affect distribution efficiency, those mandated by state public utility commissions have had large and statistically significant adverse effects on distribution efficiency. The second chapter explores whether independent system operator (ISO) formation in New York has led to operating efficiencies at the unit and system levels. ISOs oversee the centralized management of the grid and the energy market and are expected to promote more efficient power generation. We test these efficiencies using the generation units in the New York ISO region from 1998 to 2004 and find that NYISO formation has introduced limited efficiencies at the unit and system levels. Restructuring in the electricity industry has spawned a new wave of mergers, both raising questions and providing opportunities to examine these mergers. The third chapter investigates the drivers of electric utility mergers consummated between 1992 and 2004. My results provide support for the disturbance theory of mergers, the size hypothesis, and the inefficient management hypothesis as drivers of electric utility mergers. I also find that adjacency of service territories is the most noteworthy determinant of pairings between IOUs.

  15. Bioprocessing of Cryopreservation for Large-Scale Banking of Human Pluripotent Stem Cells

    PubMed Central

    Ma, Teng

    2012-01-01

    Human pluripotent stem cell (hPSC)-derived cell therapy requires production of therapeutic cells in large quantity, which starts from thawing the cryopreserved cells from a working cell bank or a master cell bank. An optimal cryopreservation and thaw process determines the efficiency of hPSC expansion and plays a significant role in the subsequent lineage-specific differentiation. However, cryopreservation in hPSC bioprocessing has been a challenge due to the unique growth requirements of hPSC, the sensitivity to cryoinjury, and the unscalable cryopreservation procedures commonly used in the laboratory. Tremendous progress has been made to identify the regulatory pathways regulating hPSC responses during cryopreservation and the development of small molecule interventions that effectively improves the efficiency of cryopreservation. The adaption of these methods in current good manufacturing practices (cGMP)-compliant cryopreservation processes not only improves cell survival, but also their therapeutic potency. This review summarizes the advances in these areas and discusses the technical requirements in the development of cGMP-compliant hPSC cryopreservation process. PMID:23515461

  16. One-step assembly and targeted integration of multigene constructs assisted by the I-SceI meganuclease in Saccharomyces cerevisiae

    PubMed Central

    Kuijpers, Niels GA; Chroumpi, Soultana; Vos, Tim; Solis-Escalante, Daniel; Bosman, Lizanne; Pronk, Jack T; Daran, Jean-Marc; Daran-Lapujade, Pascale

    2013-01-01

    In vivo assembly of overlapping fragments by homologous recombination in Saccharomyces cerevisiae is a powerful method to engineer large DNA constructs. Whereas most in vivo assembly methods reported to date result in circular vectors, stably integrated constructs are often preferred for metabolic engineering as they are required for large-scale industrial application. The present study explores the potential of combining in vivo assembly of large, multigene expression constructs with their targeted chromosomal integration in S. cerevisiae. Combined assembly and targeted integration of a ten-fragment 22-kb construct to a single chromosomal locus was successfully achieved in a single transformation process, but with low efficiency (5% of the analyzed transformants contained the correctly assembled construct). The meganuclease I-SceI was therefore used to introduce a double-strand break at the targeted chromosomal locus, to facilitate integration of the assembled construct. I-SceI-assisted integration dramatically increased the efficiency of assembly and integration of the same construct to 95%. This study paves the way for the fast, efficient, and stable integration of large DNA constructs in S. cerevisiae chromosomes. PMID:24028550

  17. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.

  18. Giant nonlinear response at a plasmonic nanofocus drives efficient four-wave mixing

    NASA Astrophysics Data System (ADS)

    Nielsen, Michael P.; Shi, Xingyuan; Dichtl, Paul; Maier, Stefan A.; Oulton, Rupert F.

    2017-12-01

    Efficient optical frequency mixing typically must accumulate over large interaction lengths because nonlinear responses in natural materials are inherently weak. This limits the efficiency of mixing processes owing to the requirement of phase matching. Here, we report efficient four-wave mixing (FWM) over micrometer-scale interaction lengths at telecommunications wavelengths on silicon. We used an integrated plasmonic gap waveguide that strongly confines light within a nonlinear organic polymer. The gap waveguide intensifies light by nanofocusing it to a mode cross-section of a few tens of nanometers, thus generating a nonlinear response so strong that efficient FWM accumulates over wavelength-scale distances. This technique opens up nonlinear optics to a regime of relaxed phase matching, with the possibility of compact, broadband, and efficient frequency mixing integrated with silicon photonics.

  19. Nano-optomechanical transducer

    DOEpatents

    Rakich, Peter T; El-Kady, Ihab F; Olsson, Roy H; Su, Mehmet Fatih; Reinke, Charles; Camacho, Ryan; Wang, Zheng; Davids, Paul

    2013-12-03

    A nano-optomechanical transducer provides ultrabroadband coherent optomechanical transduction based on Mach-wave emission that uses enhanced photon-phonon coupling efficiencies by low impedance effective phononic medium, both electrostriction and radiation pressure to boost and tailor optomechanical forces, and highly dispersive electromagnetic modes that amplify both electrostriction and radiation pressure. The optomechanical transducer provides a large operating bandwidth and high efficiency while simultaneously having a small size and minimal power consumption, enabling a host of transformative phonon and signal processing capabilities. These capabilities include optomechanical transduction via pulsed phonon emission and up-conversion, broadband stimulated phonon emission and amplification, picosecond pulsed phonon lasers, broadband phononic modulators, and ultrahigh bandwidth true time delay and signal processing technologies.

  1. An index-based algorithm for fast on-line query processing of latent semantic analysis.

    PubMed

    Zhang, Mingxi; Li, Pohan; Wang, Wei

    2017-01-01

    Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in time cost and cannot efficiently respond to query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA, towards efficiently searching for the documents similar to a given query. We rewrite the similarity equation of LSA using an intermediate value called the partial similarity, which is stored in a designed index called the partial index. To reduce the search space, we give an approximate form of the similarity equation and then develop an efficient algorithm for building the partial index, which skips partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo-document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes corresponding to non-zero entries in the pseudo-document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that contribute little to similarity scores. Extensive experiments comparing against LSA demonstrate the efficiency and effectiveness of our proposed algorithm.
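    The core idea, precomputing per-term partial similarities and keeping only those above θ so that a query reduces to accumulating a few stored values, can be sketched in Python as follows. The matrix shapes, the threshold value, and the pruning rule are illustrative assumptions; the paper's exact ILSA index layout may differ.

      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(0)
      n_terms, n_docs, k = 500, 2000, 50
      T = rng.normal(size=(n_terms, k))          # term vectors in the LSA space
      D = rng.normal(size=(n_docs, k))           # document vectors in the LSA space
      D /= np.linalg.norm(D, axis=1, keepdims=True)

      theta = 0.5
      # Off-line: for each term, store only the (document, partial similarity)
      # pairs whose magnitude reaches theta -- the "partial index".
      partial_index = {}
      for t in range(n_terms):
          sims = D @ T[t]                        # partial similarities for term t
          keep = np.abs(sims) >= theta
          partial_index[t] = (np.nonzero(keep)[0], sims[keep])

      def query(term_ids, top=10):
          """On-line: accumulate stored partial similarities for the query terms."""
          scores = defaultdict(float)
          for t in term_ids:
              docs, sims = partial_index[t]
              for d, s in zip(docs, sims):
                  scores[d] += s
          return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

      print(query([3, 17, 256]))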

  2. 50.4% slope efficiency thulium-doped large-mode-area fiber laser fabricated by powder technology.

    PubMed

    Darwich, Dia; Dauliat, Romain; Jamier, Raphaël; Benoit, Aurélien; Auguste, Jean-Louis; Grimm, Stephan; Kobelke, Jens; Schwuchow, Anka; Schuster, Kay; Roy, Philippe

    2016-01-15

    We report on a triple-clad large-mode-area Tm-doped fiber laser with an 18 μm core diameter, manufactured for the first time by an alternative process named REPUSIL. This reactive powder sinter material enables properties similar to conventional CVD-made fiber lasers, while offering the potential to produce larger and more uniform material. Characterization of the fiber in a laser configuration yields a slope efficiency of 47.7% at 20°C and 50.4% at 0°C with 8 W output power, with a laser peak emission at 1970 nm. Finally, a beam quality near the diffraction limit (M² < 1.1 in both axes) is demonstrated.

  3. GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil

    2015-11-15

    Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.

  4. Implementation of a modular software system for multiphysical processes in porous media

    NASA Astrophysics Data System (ADS)

    Naumov, Dmitri; Watanabe, Norihiro; Bilke, Lars; Fischer, Thomas; Lehmann, Christoph; Rink, Karsten; Walther, Marc; Wang, Wenqing; Kolditz, Olaf

    2016-04-01

    Subsurface georeservoirs are a candidate technology for the large-scale energy storage required as part of the transition to renewable energy sources. The increased use of the subsurface results in competing interests and possible impacts on protected entities. To optimize and plan the use of the subsurface in large-scale scenario analyses, powerful numerical frameworks are required that aid process understanding and can capture the coupled thermal (T), hydraulic (H), mechanical (M), and chemical (C) processes with high computational efficiency. Because of the multitude of different couplings between the basic T, H, M, and C processes and the need to implement new numerical schemes, the development focus has moved to the software's modularity. The decreased coupling between components yields two major advantages: easier addition of specialized processes, and improved testability and therefore quality of the code. The idea of modularization is implemented on several levels: in addition to the library-based separation of the previous code version, by using generalized algorithms available in the Standard Template Library and the Boost library, relying on efficient implementations of linear algebra solvers, using concepts when designing new types, and localizing frequently accessed data structures. This procedure shows clear benefits for a flexible high-performance framework applied to the analysis of multipurpose georeservoirs.

  5. Kinetic isotopic fractionation and the origin of HDO and CH3D in the solar system

    NASA Technical Reports Server (NTRS)

    Yung, Yuk L.; Wen, Jun-Shan; Friedl, Randall R.; Pinto, Joseph P.; Bayes, Kyle D.

    1988-01-01

    It is suggested that photochemical enrichment processes driven by stellar UV emissions could result in a large deuterium fractionation of water and methane relative to H2 in the primitive solar nebula. These enrichment processes could have profoundly influenced the isotopic content of water in the terrestrial planets, if a large fraction of their volatiles had been added by impacts of meteorites and comets formed in the outer parts of the solar nebula. Efficient mixing could have exposed the material in the interior of the solar nebula to starlight.

  6. Parameterization of Photon Tunneling with Application to Ice Cloud Optical Properties at Terrestrial Wavelengths

    NASA Astrophysics Data System (ADS)

    Mitchell, D. L.

    2006-12-01

    Sometimes deep physical insights can be gained through the comparison of two theories of light scattering. Comparing van de Hulst's anomalous diffraction approximation (ADA) with Mie theory yielded insights on the behavior of the photon tunneling process that resulted in the modified anomalous diffraction approximation (MADA). (Tunneling is the process by which radiation just beyond a particle's physical cross-section may undergo large angle diffraction or absorption, contributing up to 40% of the absorption when wavelength and particle size are comparable.) Although this provided a means of parameterizing the tunneling process in terms of the real index of refraction and size parameter, it did not predict the efficiency of the tunneling process, where an efficiency of 100% is predicted for spheres by Mie theory. This tunneling efficiency, Tf, depends on particle shape and ranges from 0 to 1.0, with 1.0 corresponding to spheres. Similarly, by comparing absorption efficiencies predicted by the Finite Difference Time Domain method (FDTD) with efficiencies predicted by MADA, Tf was determined for nine different ice particle shapes, including aggregates. This comparison confirmed that Tf is a strong function of ice crystal shape, including the aspect ratio when applicable. Tf was lowest (< 0.36) for aggregates and plates, and largest (> 0.9) for quasi-spherical shapes. A parameterization of Tf was developed in terms of (1) ice particle shape and (2) mean particle size regarding the large mode (D > 70 μm) of the ice particle size distribution. For the small mode, Tf is only a function of ice particle shape. When this Tf parameterization is used in MADA, absorption and extinction efficiency differences between MADA and FDTD are within 14% over the terrestrial wavelength range 3-100 μm for all size distributions and most crystal shapes likely to be found in cirrus clouds. Using hyperspectral radiances, it is demonstrated that Tf can be retrieved from ice clouds. Since Tf is a function of ice particle shape, this may provide a means of retrieving qualitative information on ice particle shape.

  7. Highly Efficient Erythritol Recovery from Waste Erythritol Mother Liquor by a Yeast-Mediated Biorefinery Process.

    PubMed

    Wang, Siqi; Wang, Hengwei; Lv, Jiyang; Deng, Zixin; Cheng, Hairong

    2017-12-20

    Erythritol, a natural sugar alcohol, is produced industrially by fermentation and crystallization, but this process leaves a large amount of waste erythritol mother liquor (WEML) which contains more than 200 g/L erythritol as well as other polyol byproducts. These impurities make it very difficult to crystallize more erythritol. In our study, an efficient process for the recovery of erythritol from the WEML is described. The polyol impurities were first identified by high-performance liquid chromatography and gas chromatography-mass spectrometry, and a yeast strain Candida maltosa CGMCC 7323 was then isolated to metabolize those impurities to purify erythritol. Our results demonstrated that the process could remarkably improve the purity of erythritol and thus make the subsequent crystallization easier. This newly developed strategy is expected to have advantages in WEML treatment and provide helpful information with regard to green cell factories and zero-waste processing.

  8. Analysis and evaluation in the production process and equipment area of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Goldman, H.; Wolf, M.

    1979-01-01

    Analyses of slicing processes and junction formation processes are presented. A simple method for evaluating the relative economic merits of competing process options with respect to the cost of energy produced by the system is described. An energy consumption analysis was developed and applied to determine the energy consumption of the solar module fabrication process sequence, from the mining of the SiO2 to shipping. The analysis shows that current technology practice involves inordinate energy use in the purification step and large wastage of the invested energy through losses, particularly poor conversion in slicing, as well as inadequate yields throughout. Cell process energy expenditures already show a downward trend based on increased throughput rates. The largest improvement, however, depends on the introduction of a more efficient purification process and of acceptable ribbon growing techniques.

  9. Perovskite Solar Cells with Near 100% Internal Quantum Efficiency Based on Large Single Crystalline Grains and Vertical Bulk Heterojunctions

    DOE PAGES

    Yang, Bin; Dyck, Ondrej; Poplawsky, Jonathan; ...

    2015-07-09

    Grain boundaries (GBs) as defects in the crystal lattice detrimentally impact the power conversion efficiency (PCE) of polycrystalline solar cells, particularly in recently emerging hybrid perovskites, where non-radiative recombination processes lead to significant carrier losses. Here, the beneficial effects of activated vertical GBs are demonstrated by first growing large, vertically-oriented methylammonium lead tri-iodide (CH3NH3PbI3) single-crystalline grains. We show that infiltration of p-type doped 2,2′,7,7′-tetrakis(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (Spiro-OMeTAD) into CH3NH3PbI3 films along the GBs creates space charge regions that suppress non-radiative recombination and enhance carrier collection efficiency. Solar cells with such activated GBs yielded an average PCE of 16.3 ± 0.9%, which is among the best for solution-processed perovskite devices. As an important alternative to growing ideal CH3NH3PbI3 single-crystal films, which is difficult to achieve for such fast-crystallizing perovskites, activating GBs paves the way to designing a new type of bulk-heterojunction hybrid perovskite photovoltaics toward the theoretical maximum PCE.

  10. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    PubMed

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.

  11. How to achieve optimal organization of primary care service delivery at system level: lessons from Europe.

    PubMed

    Pelone, Ferruccio; Kringos, Dionne S; Spreeuwenberg, Peter; De Belvis, Antonio G; Groenewegen, Peter P

    2013-09-01

    To measure the relative efficiency of primary care (PC) systems in turning their structures into service delivery, and their service delivery into quality outcomes. Cross-sectional study based on the dataset of the Primary Healthcare Activity Monitor for Europe project. Two Data Envelopment Analysis (DEA) models were run to compare relative technical efficiency. A sensitivity analysis of the resulting efficiency scores was performed. PC systems in 22 European countries in 2009/2010. Model 1 included data on PC governance, workforce development, and economic conditions as inputs, and access, coordination, continuity, and comprehensiveness of care as outputs. Model 2 included the previous process dimensions as inputs and quality indicators as outputs. There is reasonable relative efficiency in all countries at delivering as many PC processes as possible at a given level of PC structure. It is particularly important to invest in economic conditions to achieve an efficient structure-process balance. Only five countries have fully efficient PC systems in turning their service delivery into high-quality outcomes, using a similar combination of access, continuity, and comprehensiveness, although they differ in the adoption of coordination of services. There is a large variation in the efficiency levels obtained by countries whose PC is inefficient in turning service delivery into quality outcomes. Maximizing the individual functions of PC without taking into account coherence within the health-care system is not sufficient from a policymaker's point of view when aiming to achieve efficiency.
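    For readers unfamiliar with the technique, the multiplier form of a basic input-oriented CCR Data Envelopment model can be set up as a small linear program. The sketch below uses invented inputs and outputs purely for illustration; the study's real variables are the structure, process, and quality indicators listed above.

      import numpy as np
      from scipy.optimize import linprog

      # Invented inputs X and outputs Y for four decision-making units.
      X = np.array([[3.0, 2.0], [4.0, 1.0], [5.0, 3.0], [2.0, 4.0]])
      Y = np.array([[10.0], [8.0], [12.0], [9.0]])
      n, m = X.shape
      s = Y.shape[1]

      def ccr_efficiency(j0):
          # Variables: output weights u (length s), then input weights v (length m).
          # Maximize u.y0 subject to v.x0 = 1 and u.yj - v.xj <= 0 for every unit j.
          c = np.concatenate([-Y[j0], np.zeros(m)])        # linprog minimizes
          A_ub = np.hstack([Y, -X])
          b_ub = np.zeros(n)
          A_eq = np.concatenate([np.zeros(s), X[j0]]).reshape(1, -1)
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                        bounds=[(0, None)] * (s + m))
          return -res.fun                                  # efficiency score in (0, 1]

      for j in range(n):
          print(f"unit {j}: efficiency {ccr_efficiency(j):.3f}")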

  12. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimizing key sections of code. When moving to parallel computation, not only does code execution need to be considered but also the communication between processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of these communication delays on performance. For large-scale parallel applications, it is critical to understand how communication impacts performance in order to make the code more efficient. Several tools are available for visualizing program execution and communication on parallel systems. These tools generally provide either views that statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and demonstrate it on systems running with up to 16,384 processes.
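    To make the scalability criticism concrete, the traditional row-per-process Gantt view can be reproduced in a few lines of Python; with thousands of rows it quickly becomes unreadable. The event data below are invented, and this sketch shows the conventional view being criticized, not the authors' new approach.

      import matplotlib.pyplot as plt

      # Invented events: compute spans per process and point-to-point messages.
      compute = {0: [(0.0, 2.0), (3.0, 1.5)],    # rank -> list of (start, duration)
                 1: [(0.5, 1.0), (2.0, 2.5)],
                 2: [(0.0, 3.0), (3.5, 1.0)]}
      messages = [(0, 1, 2.1), (1, 2, 3.2), (2, 0, 4.6)]  # (src, dst, send time)

      fig, ax = plt.subplots()
      for rank, spans in compute.items():
          ax.broken_barh(spans, (rank - 0.3, 0.6))        # one row per process
      for src, dst, t in messages:
          ax.annotate("", xy=(t + 0.2, dst), xytext=(t, src),
                      arrowprops=dict(arrowstyle="->"))   # message arrows
      ax.set_xlabel("time (s)")
      ax.set_ylabel("process rank")
      ax.set_yticks(list(compute))
      fig.savefig("gantt_ipc.png")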

  13. Designing a place for automation.

    PubMed

    Bazzoli, F

    1995-05-01

    Re-engineering is a hot topic in health care as market forces increase pressure to cut costs. Providers and payers that are redesigning their business processes are counting on information systems to help achieve simplification and make large gains in efficiency. But these same organizations say they're reluctant to make large upfront investments in information systems until they know exactly what role technology will play in the re-engineered entity.

  14. The Management Challenge: Handling Exams Involving Large Quantities of Students, on and off Campus--A Design Concept

    ERIC Educational Resources Information Center

    Larsson, Ken

    2014-01-01

    This paper looks at the process of managing large numbers of exams efficiently and securely with the use of dedicated IT support. The system integrates regulations on different levels, from national to local (even down to departments), and ensures that the rules are applied at all stages of handling the exams. The system has a proven record of…

  15. Large-Eddy/Lattice Boltzmann Simulations of Micro-blowing Strategies for Subsonic and Supersonic Drag Control

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    2003-01-01

    This report summarizes the progress made in the first 8 to 9 months of this research. The Lattice Boltzmann Equation (LBE) methodology for Large-Eddy Simulations (LES) of micro-blowing has been validated using a jet-in-crossflow test configuration. In this study, the flow intake is also simulated to allow the interaction to occur naturally. The Lattice Boltzmann Equation Large-Eddy Simulation (LBELES) approach captures not only the flow features associated with the jet, such as hairpin vortices and recirculation behind the jet, but also shows better agreement with experiments than previous RANS predictions. LBELES is shown to be computationally very efficient and, therefore, a viable method for simulating the injection process. Two strategies have been developed to simulate the multi-hole injection process as in the experiment. In order to allow natural interaction between the injected fluid and the primary stream, the flow intakes for all the holes have to be simulated. The LBE method is computationally efficient but is still 3D in nature, and therefore there may be some computational penalty. In order to study a large number of holes, a new 1D subgrid model has been developed that simulates a reduced form of the Navier-Stokes equations in these holes.
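    For orientation, the basic LBE cycle referred to above is a local collision step followed by streaming to neighbor nodes. The following minimal single-relaxation-time D2Q9 kernel in Python is a generic textbook sketch under assumed parameters (grid size, relaxation time), not the LBELES solver developed in this research.

      import numpy as np

      # Single-relaxation-time D2Q9 lattice Boltzmann kernel on a periodic grid.
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)          # lattice weights
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])    # lattice velocities
      nx, ny, tau = 64, 64, 0.6                             # assumed grid and relaxation time

      f = np.ones((9, nx, ny)) * w[:, None, None]           # start from a rest state
      for step in range(100):
          rho = f.sum(axis=0)                               # density
          u = np.einsum('iab,ij->jab', f, c) / rho          # velocity field (2, nx, ny)
          cu = np.einsum('ij,jab->iab', c, u)               # c_i . u per direction
          usq = (u ** 2).sum(axis=0)
          feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
          f += -(f - feq) / tau                             # BGK collision
          for i in range(9):                                # periodic streaming
              f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
      print("mass conserved:", np.isclose(f.sum(), nx * ny))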

  16. Edge control in a computer controlled optical surfacing process using a heterocercal tool influence function.

    PubMed

    Hu, Haixiang; Zhang, Xin; Ford, Virginia; Luo, Xiao; Qi, Erhui; Zeng, Xuefeng; Zhang, Xuejun

    2016-11-14

    Edge effect is regarded as one of the most difficult technical issues in a computer controlled optical surfacing (CCOS) process. Traditional opticians have to weigh the consequences of the following two cases: operating CCOS with a large tool overhang affects the accuracy of material removal, while a small overhang achieves more accurate removal but leaves a narrow rolled-up edge, which takes time and effort to remove. In order to control the edge residuals in the latter case, we present a new concept, the 'heterocercal' tool influence function (TIF). Generated from compound-motion equipment, this type of TIF can 'transfer' material removal from the inner area to the edge, meanwhile maintaining the high accuracy and efficiency of CCOS. We call it the 'heterocercal' TIF because of the inspiration from the heterocercal tails of sharks, whose upper lobe provides most of the propulsive power. The heterocercal TIF was theoretically analyzed and physically realized in CCOS facilities, and experimental and simulation results showed good agreement. It enables significant control of the edge effect and convergence of entire-surface errors in large tool-to-mirror size-ratio conditions. This improvement will greatly help manufacturing efficiency in some extremely large optical system projects, such as the tertiary mirror of the Thirty Meter Telescope.

  17. Enhancing performance and uniformity of CH3NH3PbI3-xClx perovskite solar cells by air-heated-oven assisted annealing under various humidities

    NASA Astrophysics Data System (ADS)

    Zhou, Qing; Jin, Zhiwen; Li, Hui; Wang, Jizheng

    2016-02-01

    To fabricate high-performance metal-halide perovskite solar cells, a thermal annealing step is indispensable for preparing high-quality perovskite films, and usually such annealing is performed on a hot plate. However, hot-plate annealing can cause problems such as inhomogeneous heating (induced by non-tight contact between the sample and the plate), and it is not suited to large-scale manufacturing. In this paper, we conduct the annealing process in an air-heated oven under various humidity environments and compare the resulting films (CH3NH3PbI3-xClx) and devices (Al/PC61BM/CH3NH3PbI3-xClx/PEDOT:PSS/ITO/glass) with those obtained via hot-plate annealing. We find that air-heated-oven annealing is superior to hot-plate annealing: the annealing time is shorter, the films are more uniform, and the devices exhibit higher power conversion efficiency and better uniformity. The highest efficiencies achieved for the oven and hot-plate annealing processes are 14.9% and 13.5%, with corresponding standard deviations of 0.5% and 0.8%, respectively. Our work indicates that air-heated-oven annealing could be a more reliable and more efficient approach for both lab research and large-scale production.

  18. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  19. Large-scale modular biofiltration system for effective odor removal in a composting facility.

    PubMed

    Lin, Yueh-Hsien; Chen, Yu-Pei; Ho, Kuo-Ling; Lee, Tsung-Yih; Tseng, Ching-Ping

    2013-01-01

    Several foul odors, including nitrogen-containing compounds, sulfur-containing compounds, and short-chain fatty acids, are commonly emitted from composting facilities. In this study, an experimental laboratory-scale bioreactor was scaled up to build a large-scale modular biofiltration system that can process 34 m3/min of waste gases. This modular reactor system proved effective in eliminating odors, with a 97% removal efficiency for 96 ppm ammonia, a 98% removal efficiency for 220 ppm amines, and a 100% removal efficiency for other odorous substances. The operational parameters indicate that this modular biofiltration system offers long-term operational stability. Specifically, a low pressure drop (<45 mm H2O/m) was observed, indicating that the packing carrier in the bioreactor units does not require frequent replacement. Thus, this modular biofiltration system can be used in field applications to eliminate various odors within a compact working volume.

  20. Large scale nanoparticle screening for small molecule analysis in laser desorption ionization mass spectrometry

    DOE PAGES

    Yagnik, Gargey B.; Hansen, Rebecca L.; Korte, Andrew R.; ...

    2016-08-30

    Nanoparticles (NPs) have been suggested as efficient matrixes for small molecule profiling and imaging by laser-desorption ionization mass spectrometry (LDI-MS), but so far there has been no systematic study comparing different NPs in the analysis of various classes of small molecules. Here, we present a large scale screening of 13 NPs for the analysis of two dozen small metabolite molecules. Many NPs showed much higher LDI efficiency than organic matrixes in positive mode and some NPs showed comparable efficiencies for selected analytes in negative mode. Our results suggest that a thermally driven desorption process is a key factor for metal oxide NPs, but chemical interactions are also very important, especially for other NPs. Furthermore, the screening results provide a useful guideline for the selection of NPs in the LDI-MS analysis of small molecules.

  2. A European mobile satellite system concept exploiting CDMA and OBP

    NASA Technical Reports Server (NTRS)

    Vernucci, A.; Craig, A. D.

    1993-01-01

    This paper describes a novel Land Mobile Satellite System (LMSS) concept applicable to networks allowing access to a large number of gateway stations ('Hubs'), utilizing low-cost Very Small Aperture Terminals (VSATs). Efficient operation of the Forward-Link (FL) repeater can be achieved by adopting a synchronous Code Division Multiple Access (CDMA) technique, whereby inter-code interference (self-noise) is virtually eliminated by synchronizing orthogonal codes. However, with a transparent FL repeater, the requirements imposed by the highly decentralized ground segment can lead to significant efficiency losses. The adoption of a FL On-Board Processing (OBP) repeater is proposed as a means of largely recovering this efficiency impairment. The paper describes the network architecture, the system design and performance, and the OBP functions and their impact on implementation. The proposed concept, applicable to a future generation of the European LMSS, was developed in the context of a European Space Agency (ESA) study contract.

  3. An energy-efficient data gathering protocol in large wireless sensor network

    NASA Astrophysics Data System (ADS)

    Wang, Yamin; Zhang, Ruihua; Tao, Shizhong

    2006-11-01

    A wireless sensor network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The collected data must be transmitted to the base station for further processing. Since the network consists of sensors with limited battery energy, the method for data gathering and routing must be energy efficient in order to prolong the lifetime of the network. In this paper, we present an energy-efficient data-gathering protocol for wireless sensor networks. The new protocol uses data fusion, clusters nodes into groups, and builds a chain among the cluster heads according to a hybrid of residual energy and distance to the base station. Results from stochastic geometry are used to derive the optimal parameter of our algorithm that minimizes the total energy spent in the network. Simulation results show the performance superiority of the new protocol.
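    A toy version of the head-selection and chaining idea is sketched below; the scoring weights and geometry are illustrative assumptions, not the optimum the paper derives from stochastic geometry.

      import numpy as np

      rng = np.random.default_rng(0)
      nodes = rng.uniform(0, 100, (50, 2))     # sensor positions (m)
      energy = rng.uniform(0.5, 1.0, 50)       # residual battery energy (J)
      base = np.array([50.0, 175.0])           # base station location

      # hybrid score: favour high residual energy, penalize distance to base
      d_base = np.linalg.norm(nodes - base, axis=1)
      score = 0.7 * energy / energy.max() - 0.3 * d_base / d_base.max()
      heads = np.argsort(score)[-5:]           # top-5 nodes become cluster heads

      # chain the heads, farthest from the base first, so fused data hops
      # head-to-head before a single long-haul transmission to the base
      chain = list(heads[np.argsort(-d_base[heads])])
      print("relay chain toward base station:", chain)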

  4. Research on key technologies of data processing in internet of things

    NASA Astrophysics Data System (ADS)

    Zhu, Yangqing; Liang, Peiying

    2017-08-01

    Internet of Things (IoT) data are polymorphic, heterogeneous, large in volume, and must be processed in real time. Traditional structured, static batch processing methods no longer meet these requirements. This paper studies a middleware that integrates heterogeneous IoT data and converts different data formats into a unified format. It designs an IoT data processing model based on the Storm stream-computing architecture and integrates existing Internet security technology to build a security system for IoT data processing, providing a reference for the efficient transmission and processing of IoT data.

  5. Color in the corners: ITO-free white OLEDs with angular color stability.

    PubMed

    Gaynor, Whitney; Hofmann, Simone; Christoforo, M Greyson; Sachse, Christoph; Mehra, Saahil; Salleo, Alberto; McGehee, Michael D; Gather, Malte C; Lüssem, Björn; Müller-Meskamp, Lars; Peumans, Peter; Leo, Karl

    2013-08-07

    High-efficiency white OLEDs fabricated on silver nanowire-based composite transparent electrodes show almost perfectly Lambertian emission and superior angular color stability, imparted by electrode light scattering. The OLED efficiencies are comparable to those fabricated using indium tin oxide. The transparent electrodes are fully solution-processable, thin-film compatible, and have a figure of merit suitable for large-area devices. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Techniques for Efficiently Managing Large Geosciences Data Sets

    NASA Astrophysics Data System (ADS)

    Kruger, A.; Krajewski, W. F.; Bradley, A. A.; Smith, J. A.; Baeck, M. L.; Steiner, M.; Lawrence, R. E.; Ramamurthy, M. K.; Weber, J.; Delgreco, S. A.; Domaszczynski, P.; Seo, B.; Gunyon, C. A.

    2007-12-01

    We have developed techniques and software tools for efficiently managing large geosciences data sets. While the techniques were developed as part of an NSF-funded ITR project that focuses on making NEXRAD weather data and rainfall products available to hydrologists and other scientists, they are relevant to other geosciences disciplines that deal with large data sets. Metadata, relational databases, data compression, and networking are central to our methodology. Data and derived products are stored on file servers in a compressed format. URLs to, and metadata about, the data and derived products are managed in a PostgreSQL database, and virtually all access to the data and products is through this database. Geosciences data normally require a number of processing steps to transform the raw data into useful products: data quality assurance, coordinate transformations and georeferencing, applying calibration information, and many more. We have developed the concept of crawlers that manage this scientific workflow. Crawlers are unattended processes that run indefinitely and at set intervals query the database for their next assignment. A database table functions as a roster for the crawlers. Crawlers perform well-defined tasks that are, except perhaps for sequencing, largely independent of other crawlers. Once a crawler is done with its current assignment, it updates the database roster table and gets its next assignment by querying the database. We have developed a library that enables one to quickly add crawlers. The library provides hooks to external (i.e., C-language) compiled codes, so that developers can work and contribute independently. Processes called ingesters inject data into the system. The bulk of the data comes from a real-time feed using UCAR/Unidata's IDD/LDM software. An exciting recent development is the establishment of a Unidata HYDRO feed that carries value-added metadata over the IDD/LDM; ingesters grab the metadata and populate the PostgreSQL tables. These and other concepts have enabled us to efficiently manage a 70 Tb (and growing) weather radar data set.
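    The crawler pattern is easy to sketch; the snippet below uses SQLite as a stand-in for PostgreSQL, and the tasks table schema and process() step are hypothetical.

      import sqlite3, time

      def process(payload):
          # stand-in for the real work, e.g. a hook into compiled C code
          print("processing", payload)

      def crawl(db_path="roster.db", task_type="georeference", poll_s=30):
          # claim pending tasks of our type from the roster table, run them,
          # and mark them done; sleep whenever the roster is empty
          con = sqlite3.connect(db_path)
          while True:
              row = con.execute(
                  "SELECT id, payload FROM tasks "
                  "WHERE task_type = ? AND status = 'pending' LIMIT 1",
                  (task_type,)).fetchone()
              if row is None:
                  time.sleep(poll_s)
                  continue
              task_id, payload = row
              con.execute("UPDATE tasks SET status='running' WHERE id=?", (task_id,))
              con.commit()
              process(payload)
              con.execute("UPDATE tasks SET status='done' WHERE id=?", (task_id,))
              con.commit()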

  7. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    NASA Astrophysics Data System (ADS)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates, as occur for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single-particle diffusion propagator is not known analytically, we present an algorithm that efficiently generates either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
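    To see how a spatially varying annihilation rate enters event selection, here is a plain Gillespie-style kinetic Monte Carlo sketch for a single walker on a 1D periodic lattice. It deliberately omits the protective-domain acceleration that is the paper's contribution; the rates and lattice are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def first_annihilation(x0, a, hop=1.0, L=200, tmax=1e6):
          # a[x] is the site-dependent annihilation rate; hop is the rate of
          # each nearest-neighbour move; returns (time, site) of annihilation
          x, t = x0, 0.0
          while t < tmax:
              rates = np.array([hop, hop, a[x]])   # left, right, annihilate
              R = rates.sum()
              t += rng.exponential(1.0 / R)        # waiting time to next event
              event = rng.choice(3, p=rates / R)
              if event == 0:
                  x = (x - 1) % L
              elif event == 1:
                  x = (x + 1) % L
              else:
                  return t, x                      # annihilated here
          return t, x

      a = 0.01 * (1 + np.sin(np.linspace(0, 2*np.pi, 200))**2)
      print(first_annihilation(100, a))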

  8. Index Compression and Efficient Query Processing in Large Web Search Engines

    ERIC Educational Resources Information Center

    Ding, Shuai

    2013-01-01

    The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
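    As an illustration of the posting-list compression at issue, the snippet below implements standard variable-byte coding of docID gaps, a common textbook scheme for shrinking inverted lists (not necessarily the one studied in this dissertation).

      def vb_encode(gaps):
          # variable-byte encode positive ints; high bit marks a value's last byte
          out = bytearray()
          for g in gaps:
              chunk = []
              while True:
                  chunk.insert(0, g & 0x7F)
                  g >>= 7
                  if not g:
                      break
              chunk[-1] |= 0x80
              out.extend(chunk)
          return bytes(out)

      def vb_decode(data):
          gaps, g = [], 0
          for b in data:
              g = (g << 7) | (b & 0x7F)
              if b & 0x80:
                  gaps.append(g)
                  g = 0
          return gaps

      postings = [3, 7, 21, 150000]                       # sorted docIDs
      gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
      assert vb_decode(vb_encode(gaps)) == gaps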

  9. Pu Anion Exchange Process Intensification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, Kathryn M. L.

    This research is focused on improving the efficiency of the anion exchange process for purifying plutonium. While initially focused on plutonium, the technology could also be applied to other ion-exchange processes. Work in FY17 focused on the improvement and optimization of porous foam columns that were initially developed in FY16. These foam columns were surface functionalized with poly(4-vinylpyridine) (PVP) to provide the Pu-specific anion-exchange sites. Two different polymerization methods were explored for maximizing the surface functionalization with the PVP. The open-celled polymeric foams have large open pores and large surface areas available for sorption. The fluid passes through the large open pores of this material, allowing convection to be the dominant mechanism by which mass transport takes place. These materials generally have very low densities, open-celled structures with high cell interconnectivity, small cell sizes, uniform cell size distributions, and high structural integrity. These porous foam columns provide advantages over typical porous resin beads by eliminating the slow diffusion through resin beads, making the anion-exchange sites easily accessible on the foam surfaces. The best performing samples exceeded the Pu capacity of the commercially available resin and also offered the advantage of sharper elution profiles, resulting in a more concentrated product with less loss of material to the dilute heads and tails cuts. An alternate approach to improving the efficiency of this process was also explored through the development of a microchannel array system for performing the anion exchange.

  10. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which entails a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, exploiting the different characteristics of the different memory types, we develop an improved scheme that uses shared memory on the GPU instead of global memory, which further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
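    The per-pixel independence that makes this filter a natural GPU fit is visible even in a vectorized CPU reference; the sketch below applies 4-neighbour Laplacian sharpening to a grayscale image (sign convention and strength are illustrative).

      import numpy as np

      def laplacian_sharpen(img, strength=1.0):
          # out = img - strength * laplacian(img); every output pixel depends
          # only on a 3x3 neighbourhood, so pixels can be computed in parallel
          p = np.pad(img.astype(float), 1, mode="edge")
          lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                 - 4.0 * p[1:-1, 1:-1])
          return np.clip(img - strength * lap, 0, 255)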

  11. Modulation-doped growth of mosaic graphene with single-crystalline p–n junctions for efficient photocurrent generation

    PubMed Central

    Yan, Kai; Wu, Di; Peng, Hailin; Jin, Li; Fu, Qiang; Bao, Xinhe; Liu, Zhongfan

    2012-01-01

    Device applications of graphene such as ultrafast transistors and photodetectors benefit from the combination of both high-quality p- and n-doped components prepared in a large-scale manner with spatial control and seamless connection. Here we develop a well-controlled chemical vapour deposition process for direct growth of mosaic graphene. Mosaic graphene is produced in large-area monolayers with spatially modulated, stable and uniform doping, and shows considerably high room temperature carrier mobility of ~5,000 cm2 V−1 s−1 in intrinsic portion and ~2,500 cm2 V−1 s−1 in nitrogen-doped portion. The unchanged crystalline registry during modulation doping indicates the single-crystalline nature of p–n junctions. Efficient hot carrier-assisted photocurrent was generated by laser excitation at the junction under ambient conditions. This study provides a facile avenue for large-scale synthesis of single-crystalline graphene p–n junctions, allowing for batch fabrication and integration of high-efficiency optoelectronic and electronic devices within the atomically thin film. PMID:23232410

  12. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, as well as from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. The method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  13. Induction Consolidation of Thermoplastic Composites Using Smart Susceptors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsen, Marc R

    2012-06-14

    This project has focused on energy-efficient consolidation and molding of fiber-reinforced thermoplastic composite components as an alternative to conventional processing methods such as autoclave processing. The expanding application of composite materials in wind energy, automotive, and aerospace provides an attractive energy-efficiency target for process development. The intent is to have this efficient processing, along with the recyclable thermoplastic materials, ready for large-scale application before these high production volumes are reached, so that the process can be implemented in a timely manner to realize the maximum economic, energy, and environmental efficiencies. Under this project, an increased understanding of the use of induction heating with smart susceptors for consolidation of thermoplastics has been achieved. This was done by establishing processing equipment and tooling and subsequently demonstrating the fabrication technology by consolidating/molding entry-level components for each of the participating industrial segments: wind energy, aerospace, and automotive. This understanding adds to the nation's capability to affordably manufacture high-quality, lightweight, high-performance components from advanced recyclable composite materials in a lean and energy-efficient manner. The use of induction heating with smart susceptors is a precisely controlled, low-energy method for the consolidation and molding of thermoplastic composites. The smart susceptor provides intrinsic thermal control based on its interaction with the magnetic field from the induction coil, thereby producing highly repeatable processing. The low energy usage is enabled by the fact that only the smart-susceptor surface of the tool is heated, not the entire tool; much less mass is heated, resulting in significantly less energy required to consolidate/mold the desired composite components. This energy efficiency results in potential energy savings of ~75% compared to autoclave processing in aerospace, ~63% compared to compression molding in automotive, and ~42% compared to convectively heated tools in wind energy. The ability to make parts in a rapid and controlled manner provides significant economic advantages for each of the industrial segments. These attributes were demonstrated during the processing of the demonstration components in this project.

  14. An efficient ASIC implementation of 16-channel on-line recursive ICA processor for real-time EEG system.

    PubMed

    Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This paper proposes an efficient very-large-scale integration (VLSI) design: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to real-time EEG systems for separating artifacts because of its highly efficient, real-time processing. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed processor has enhanced effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen channels of random signals, containing 8 super-Gaussian and 8 sub-Gaussian components, were used to analyze the dependence of the source components; the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC, implemented in TSMC 40 nm CMOS technology, consumes 15.72 mW at a 100 MHz operating frequency.

  15. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With increasing data resolution and the growing application of USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the Message Passing Interface (MPI) is presented for calculating the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length, and the LS factor. According to the existence of data dependencies, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the characteristics of the algorithms, including a decomposition method that maintains the integrity of the results, an optimized workflow that reduces the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy that improves communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
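    A minimal mpi4py sketch of the row-wise decomposition with halo exchange that such local algorithms need is given below; the grid, cell size, and slope formula are placeholders rather than the paper's full LS-factor workflow.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      ny, nx, cell = 1024, 1024, 30.0            # hypothetical DEM, 30 m cells
      local = np.random.rand(ny // size, nx)     # this rank's strip of elevations

      up   = rank - 1 if rank > 0 else MPI.PROC_NULL
      down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # halo rows default to edge replication; overwritten where a neighbour exists
      halo_top, halo_bot = local[0].copy(), local[-1].copy()
      comm.Sendrecv(local[-1], dest=down, recvbuf=halo_top, source=up)
      comm.Sendrecv(local[0],  dest=up,   recvbuf=halo_bot, source=down)

      padded = np.vstack([halo_top, local, halo_bot])
      dzdy, dzdx = np.gradient(padded, cell)     # finite differences need halos
      slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))[1:-1]
      print(rank, float(slope_deg.mean()))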

  16. Does input influence uptake? Links between maternal talk, processing speed and vocabulary size in Spanish-learning children

    PubMed Central

    Hurtado, Nereyda; Marchman, Virginia A.; Fernald, Anne

    2010-01-01

    It is well established that variation in caregivers' speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish-learning children's early experiences with language predict efficiency in real-time comprehension and vocabulary learning. Measures of mothers' speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner. PMID:19046145

  17. Hot Charge Carrier Transmission from Plasmonic Nanostructures

    NASA Astrophysics Data System (ADS)

    Christopher, Phillip; Moskovits, Martin

    2017-05-01

    Surface plasmons have recently been harnessed to carry out processes such as photovoltaic current generation, redox photochemistry, photocatalysis, and photodetection, all of which are enabled by separating energetic (hot) electrons and holes—processes that, previously, were the domain of semiconductor junctions. Currently, the power conversion efficiencies of systems using plasmon excitation are low. However, the very large electron/hole per photon quantum efficiencies observed for plasmonic devices fan the hope of future improvements through a deeper understanding of the processes involved and through better device engineering, especially of critical interfaces such as those between metallic and semiconducting nanophases (or adsorbed molecules). In this review, we focus on the physics and dynamics governing plasmon-derived hot charge carrier transfer across, and the electronic structure at, metal-semiconductor (molecule) interfaces, where we feel the barriers contributing to low efficiencies reside. We suggest some areas of opportunity that deserve early attention in the still-evolving field of hot carrier transmission from plasmonic nanostructures to neighboring phases.

  18. Generalized Chirp Scaling Combined with Baseband Azimuth Scaling Algorithm for Large Bandwidth Sliding Spotlight SAR Imaging

    PubMed Central

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents an efficient and precise imaging algorithm for large-bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high-order phase coupling along the range and azimuth dimensions, and this coupling causes defocusing in both dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large-bandwidth sliding spotlight SAR, as well as the high-order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large-bandwidth sliding spotlight SAR data. It is proven that great improvements in focus depth and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057

  19. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    NASA Astrophysics Data System (ADS)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    Data and information now grow rapidly in both volume and variety. This growth eventually produces the large collections of data better known as Big Data. Business Intelligence (BI) uses large amounts of data and information for analysis so that important information can be obtained and used to support decision making. In practice, a process for integrating existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting the application that is most effective and efficient in terms of time, cost, and effort can be a challenge. Therefore, the objective of this study was to provide a comparative analysis of the ETL process using Microsoft SQL Server Integration Services (SSIS) and Pentaho Data Integration (PDI).
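    For orientation, the three ETL stages can be shown in miniature in a few lines of Python (the column names are hypothetical; SSIS and PDI wrap this same pattern in graphical tooling):

      import csv, sqlite3

      def etl(src_csv="sales.csv", dst_db="warehouse.db"):
          # extract rows from a CSV, transform (normalize + type-cast, drop
          # rows with a blank amount), and load them into a fact table
          con = sqlite3.connect(dst_db)
          con.execute("CREATE TABLE IF NOT EXISTS fact_sales(region TEXT, amount REAL)")
          with open(src_csv, newline="") as f:
              rows = ((r["region"].strip().upper(), float(r["amount"]))
                      for r in csv.DictReader(f) if r["amount"])
              con.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)
          con.commit()
          con.close()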

  20. Advances in multi-scale modeling of solidification and casting processes

    NASA Astrophysics Data System (ADS)

    Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang

    2011-04-01

    The development of the aviation, energy, and automobile industries requires advanced integrated product/process R&D systems that can optimize both product and process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economical, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in this paper. Dendrite morphology of magnesium and aluminum alloys during solidification, studied using phase-field and cellular automaton methods, mathematical models of segregation in large steel ingots, and microstructure models of unidirectionally solidified turbine blade castings are discussed. In addition, engineering case studies are presented, including microstructure simulation of aluminum castings for the automobile industry, segregation in large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data ("big data") that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous-mode "Mosaic Datacube" approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous-mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  2. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations that take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface is used to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
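    The task-farm pattern mpiWrapper implements can be sketched in a few lines of mpi4py; this is an illustrative skeleton (the real mpiWrapper is C++ and adds failure handling), with the command list as a stand-in for real subtasks.

      from mpi4py import MPI
      import subprocess

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      if rank == 0:                                     # master
          tasks = [f"echo chunk {i}" for i in range(100)]
          workers = comm.Get_size() - 1
          status = MPI.Status()
          while workers:
              comm.recv(source=MPI.ANY_SOURCE, status=status)   # work request
              if tasks:
                  comm.send(tasks.pop(), dest=status.Get_source())
              else:
                  comm.send(None, dest=status.Get_source())     # poison pill
                  workers -= 1
      else:                                             # worker
          while True:
              comm.send(rank, dest=0)                   # ask for work
              cmd = comm.recv(source=0)
              if cmd is None:
                  break
              subprocess.run(cmd, shell=True, check=False)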

  3. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper describes the rules used to design the algorithms for parallel computing and discusses the origins of the idea of using graphics processors for large-scale processing of laser scanning data. The next part presents the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated by parallel computing. The processing options include the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch to meet the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research. Processing time was determined for each process for a typical quantity of data, which confirmed the high efficiency of the proposed solutions and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  4. Efficiency and kinetics of the in vitro fertilization process in bovine oocytes with different meiotic competence.

    PubMed

    Horakova, J; Hanzalova, K; Reckova, Z; Hulinska, P; Machatkova, M

    2007-08-01

    The aim of the study was to investigate the efficiency and kinetics of fertilization in oocytes with different meiotic competence, as defined by the phase of the follicular wave and follicle size. Oocytes were recovered from cows with synchronized estrus cycles, slaughtered in either the growth (day 3) or the dominant (day 7) phase, separately from large, medium, and small follicles. The oocytes were matured and fertilized by a standard protocol. Twenty-four hours after fertilization, the oocytes were denuded of cumulus cells, fixed, and stained with bisbenzimide (Hoechst) in PBS. Fertilization was more efficient and the first cleavage was accelerated in growth-phase-derived oocytes, as shown by significantly higher (p < or = 0.01) proportions of both normally fertilized and cleaved oocytes (68.8 and 25.1%) in comparison with dominant-phase-derived oocytes (44.2 and 10.3%). In the growth-phase-derived oocytes, proportions of normally fertilized and cleaved oocytes were significantly higher (p < or = 0.01) in oocytes from large (100.0 and 36.4%) and medium (83.3 and 36.5%) follicles than in those from small (54.8 and 14.6%) follicles. The dominant-phase-derived oocytes showed higher proportions of normally fertilized and cleaved oocytes in the populations recovered from small (51.5 and 10.0%) and medium (43.1 and 12.0%) follicles than in those from large (25.0 and 0%) follicles; however, the differences were not significant. It can be concluded that: (i) the efficiency and kinetics of fertilization differ in relation to the oocyte's meiotic competence; and (ii) the improved development of embryos from oocytes with greater meiotic competence is associated with a more effective fertilization process.

  5. Efficiency of encounter-controlled reaction between diffusing reactants in a finite lattice: Non-nearest-neighbor effects

    NASA Astrophysics Data System (ADS)

    Bentz, Jonathan L.; Kozak, John J.; Nicolis, Gregoire

    2005-08-01

    The influence of non-nearest-neighbor displacements on the efficiency of diffusion-reaction processes involving one and two mobile diffusing reactants is studied. An exact analytic result is given for dimension d=1 from which, for large lattices, one can recover the asymptotic estimate reported 30 years ago by Lakatos-Lindenberg and Shuler. For dimensions d=2,3 we present numerically exact values for the mean time to reaction, as gauged by the mean walklength before reactive encounter, obtained via the theory of finite Markov processes and supported by Monte Carlo simulations. Qualitatively different results are found between processes occurring on d=1 versus d>1 lattices, and between results obtained assuming nearest-neighbor (only) versus non-nearest-neighbor displacements.
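    The quantity under study is easy to estimate by simulation; the sketch below measures the mean walklength to a single trap on a 1D periodic lattice, with and without next-nearest-neighbor displacements (a Monte Carlo stand-in for the paper's exact Markov-chain results).

      import random

      def mean_walklength(L=101, jumps=(-1, 1), trials=20000, seed=1):
          # average steps before a walker from a random non-trap site
          # reaches the trap at site 0 on a periodic lattice of L sites
          rng = random.Random(seed)
          total = 0
          for _ in range(trials):
              x, n = rng.randrange(1, L), 0
              while x != 0:
                  x = (x + rng.choice(jumps)) % L
                  n += 1
              total += n
          return total / trials

      print(mean_walklength())                          # nearest-neighbor only
      print(mean_walklength(jumps=(-2, -1, 1, 2)))      # add next-nearest hops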

  6. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    PubMed

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  7. A user-friendly tool to transform large scale administrative data into wide table format using a MapReduce program with a Pig Latin based script.

    PubMed

    Horiguchi, Hiromasa; Yasunaga, Hideo; Hashimoto, Hideki; Ohe, Kazuhiko

    2012-12-22

    Secondary use of large scale administrative data is increasingly popular in health services and clinical research, where a user-friendly tool for data management is in great demand. MapReduce technology such as Hadoop is a promising tool for this purpose, though its use has been limited by the lack of user-friendly functions for transforming large scale data into wide table format, where each subject is represented by one row, for use in health services and clinical research. Since the original specification of Pig provides very few functions for column field management, we have developed a novel system called GroupFilterFormat to handle the definition of field and data content based on a Pig Latin script. We have also developed, as an open-source project, several user-defined functions to transform the table format using GroupFilterFormat and to deal with processing that considers date conditions. Having prepared dummy discharge summary data for 2.3 million inpatients and medical activity log data for 950 million events, we used the Elastic Compute Cloud environment provided by Amazon Inc. to execute processing speed and scaling benchmarks. In the speed benchmark test, the response time was significantly reduced and a linear relationship was observed between the quantity of data and processing time in both a small and a very large dataset. The scaling benchmark test showed clear scalability. In our system, doubling the number of nodes resulted in a 47% decrease in processing time. Our newly developed system is widely accessible as an open resource. This system is very simple and easy to use for researchers who are accustomed to using declarative command syntax for commercial statistical software and Structured Query Language. Although our system needs further sophistication to allow more flexibility in scripts and to improve efficiency in data processing, it shows promise in facilitating the application of MapReduce technology to efficient data processing with large scale administrative data in health services and clinical research.
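    The long-to-wide transformation that GroupFilterFormat performs on Hadoop can be illustrated at laptop scale with pandas (the columns are hypothetical):

      import pandas as pd

      # long event log: one row per (patient, event)
      log = pd.DataFrame({
          "patient_id": [1, 1, 2, 2, 2],
          "event":      ["admit", "drug_A", "admit", "drug_A", "drug_B"],
          "value":      [1, 250, 1, 100, 50],
      })

      # wide table: one row per subject, one column per event type
      wide = (log.pivot_table(index="patient_id", columns="event",
                              values="value", aggfunc="sum")
                 .fillna(0))
      print(wide)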

  8. Public-Private Partnership: Joint recommendations to improve downloads of large Earth observation data

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Murphy, K. J.; Baynes, K.; Lynnes, C.

    2016-12-01

    With the volume of Earth observation data expanding rapidly, cloud computing is quickly changing the way Earth observation data are processed, analyzed, and visualized. The cloud infrastructure provides the flexibility to scale up to large volumes of data and to handle high-velocity data streams efficiently. Having freely available Earth observation data collocated on a cloud infrastructure creates opportunities for innovation and value-added data re-use in ways unforeseen by the original data provider. These innovations spur new industries and applications and spawn new scientific pathways that were previously limited by data volume and computational infrastructure issues. NASA, in collaboration with Amazon, Google, and Microsoft, has jointly developed a set of recommendations to enable efficient transfer of Earth observation data from existing data systems to a cloud computing infrastructure. The purpose of these recommendations is to provide guidelines against which all data providers can evaluate existing data systems and which can be used to remedy any issues uncovered, enabling efficient search, access, and use of large volumes of data. Additionally, these guidelines ensure that all cloud providers utilize a common methodology for bulk-downloading data from data providers, preventing the data providers from having to build custom capabilities to meet the needs of individual cloud providers. The intent is to share these recommendations with other Federal agencies and organizations that serve Earth observation data. Adoption of these recommendations will also benefit data users interested in moving large volumes of data from data systems to any other location, including cloud providers, cloud users such as scientists, and other users working in high-performance computing environments who need to move large volumes of data.

  9. Preliminary Evaluation of a Diagnostic Tool for Prosthetics

    DTIC Science & Technology

    2017-10-01

    volume change. Processing algorithms for data from the activity monitors were modified to run more efficiently so that large datasets could be… (Figure captions: … (left) and blade-style prostheses (right); Figure 4: correct ankle ActiGraph position demonstrated for a left-leg below-knee amputee.)

  10. Memory Efficient Ranking.

    ERIC Educational Resources Information Center

    Moffat, Alistair; And Others

    1994-01-01

    Describes an approximate document ranking process that uses a compact array of in-memory, low-precision approximations for document length. Combined with another rule for reducing the memory required by partial similarity accumulators, the approximation heuristic allows the ranking of large document collections using less than one byte of memory…
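    One way to hold a document length in a single byte is a logarithmic quantizer, sketched below; this is an assumed scheme in the spirit of the abstract, not necessarily the paper's exact rule.

      import math

      def quantize_len(doc_len, base=1.1):
          # geometric code: one byte, relative error bounded by the base
          return min(255, round(math.log(max(doc_len, 1), base)))

      def dequantize_len(code, base=1.1):
          return base ** code

      L = 12345
      c = quantize_len(L)
      print(c, round(dequantize_len(c)))   # one byte instead of an exact int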

  11. An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in real time. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to solve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier Transforms (FFTs). We have therefore carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523
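    The FFT-heavy core of such a reconstructor can be sketched in NumPy: the standard least-squares integration of measured slope maps reduces to two forward 2D FFTs and one inverse. This assumes periodic boundaries and slope-map inputs, and is a generic Fourier reconstructor rather than the CAFADIS pipeline itself.

      import numpy as np

      def integrate_slopes(sx, sy):
          # least-squares wavefront phi from x/y slope maps via 2D FFTs;
          # in Fourier space, S_x = i*kx*Phi and S_y = i*ky*Phi
          ny, nx = sx.shape
          KX = 2j * np.pi * np.fft.fftfreq(nx)[None, :]
          KY = 2j * np.pi * np.fft.fftfreq(ny)[:, None]
          num = np.conj(KX) * np.fft.fft2(sx) + np.conj(KY) * np.fft.fft2(sy)
          den = np.abs(KX) ** 2 + np.abs(KY) ** 2
          den[0, 0] = 1.0                    # piston mode is unrecoverable
          phi = np.fft.ifft2(num / den).real
          return phi - phi.mean()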

  13. Statistical mechanics of complex economies

    NASA Astrophysics Data System (ADS)

    Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo

    2017-04-01

    In the pursuit of ever-increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but they also imply that well-developed economies can collapse if too many intermediate goods are introduced.

  14. Fabrication of 20.19% Efficient Single-Crystalline Silicon Solar Cell with Inverted Pyramid Microstructure.

    PubMed

    Zhang, Chunyang; Chen, Lingzhi; Zhu, Yingjie; Guan, Zisheng

    2018-04-03

    This paper reports an inverted-pyramid microstructure-based single-crystalline silicon (sc-Si) solar cell with a conversion efficiency of up to 20.19% in the standard size of 156.75 × 156.75 mm2. The inverted pyramid microstructures were fabricated jointly by a metal-assisted chemical etching (MACE) process with an ultra-low concentration of silver ions and an optimized alkaline anisotropic texturing process. The inverted pyramid sizes were controlled by changing the parameters of both MACE and the alkaline anisotropic texturing. With regard to passivation efficiency, textured sc-Si with a normal reflectivity of 9.2% and an inverted pyramid size of 1 μm was used to fabricate solar cells. The best batch of solar cells showed 0.19% higher conversion efficiency and a 0.22 mA cm-2 improvement in short-circuit current density, photoelectric properties surpassing those of solar cells of the same structure reported before. This technology shows great potential as an alternative for large-scale production of highly efficient sc-Si solar cells in the future.

  15. webpic: A flexible web application for collecting distance and count measurements from images

    PubMed Central

    2018-01-01

    Despite increasing ability to store and analyze large amounts of data for organismal and ecological studies, the process of collecting distance and count measurements from images has largely remained time-consuming and error-prone, particularly for tasks for which automation is difficult or impossible. Improving the efficiency of these tasks, which allows more high-quality data to be collected in a shorter amount of time, is therefore a high priority. The open-source web application webpic implements common web languages and widely available libraries and productivity apps to streamline the process of collecting distance and count measurements from images. In this paper, I introduce the framework of webpic and demonstrate one readily available feature of this application, linear measurements, using fossil leaf specimens. This application fills the gap between workflows accomplishable by individuals through existing software and those accomplishable by large, unmoderated crowds. It demonstrates that flexible web languages can be used to streamline time-intensive research tasks without the use of specialized equipment or proprietary software, and it highlights the potential for web resources to facilitate data collection in research tasks and outreach activities with improved efficiency. PMID:29608592

  16. Reuse potential of low-calcium bottom ash as aggregate through pelletization.

    PubMed

    Geetha, S; Ramamurthy, K

    2010-01-01

    Coal combustion residues, which include fly ash, bottom ash, and boiler slag, are among the major pollutants, as these residues require large land areas for disposal. Among these residues, utilization of bottom ash in the construction industry is very low. This paper describes the use of bottom ash through pelletization. Raw bottom ash could not be pelletized as such due to its coarseness. Though pulverized bottom ash could be pelletized, the pelletization efficiency was low, and the aggregates were too weak to withstand handling stresses. To improve the pelletization efficiency, different clay and cementitious binders were used with the bottom ash. The influence of different factors and their interaction effects on the duration of the pelletization process and the pelletization efficiency were studied through a fractional factorial design. Addition of binders facilitated conversion of low-calcium bottom ash into aggregates. To achieve maximum pelletization efficiency, the binder content and moisture requirements vary with the type of binder. Addition of Ca(OH)2 (i) improved the pelletization efficiency, (ii) reduced the duration of the pelletization process from an average of 14 min to 7 min, and (iii) reduced the binder dosage for a given pelletization efficiency. For aggregates with clay binders and cementitious binder, Ca(OH)2 and binder dosage have a significant effect in reducing the duration of the pelletization process. 2010 Elsevier Ltd. All rights reserved.

  17. Inverted organic electronic and optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Small, Cephas E.

    The research and development of organic electronics for commercial applications has received much attention due to the unique properties of organic semiconductors and the potential for low-cost, high-throughput manufacturing. For improved large-scale processing compatibility and enhanced device stability, an inverted geometry has been employed for devices such as organic light-emitting diodes and organic photovoltaic cells. These improvements are attributed to the added flexibility to incorporate more air-stable materials into the inverted device geometry. However, early work on organic electronic devices with an inverted geometry typically showed reduced performance compared to devices with a conventional structure. In the case of organic light-emitting diodes, inverted devices typically show high operating voltages due to insufficient carrier injection. Here, a method for enhancing hole injection in inverted organic electronic devices is presented. By incorporating an electron-accepting interlayer into the inverted device, a substantial enhancement in hole injection efficiency was observed compared to conventional devices. Through a detailed carrier injection study, it is determined that the injection efficiency enhancements in the inverted devices are due to enhanced charge transfer at the electron acceptor/organic semiconductor interface. A similar situation holds for organic photovoltaic cells, for which devices with an inverted geometry showed limited carrier extraction in early studies. In this work, enhanced carrier extraction is demonstrated for inverted polymer solar cells using a surface-modified ZnO-polymer composite electron-transporting layer. The insulating polymer in the composite layer inhibited aggregation of the ZnO nanoparticles, while the surface modification of the composite interlayer improved the electronic coupling with the photoactive layer. As a result, inverted polymer solar cells with power conversion efficiencies of over 8% were obtained. To further study carrier extraction in inverted polymer solar cells, the active-layer-thickness dependence of the efficiency was investigated. For devices with active layer thickness < 200 nm, power conversion efficiencies over 8% were obtained, an important result for demonstrating improved large-scale processing compatibility. Above 200 nm, a significant reduction in cell efficiency was observed. A detailed study of the loss processes that contribute to this reduction in efficiency for thick-film devices is presented.

  18. Advances in polycrystalline thin-film photovoltaics for space applications

    NASA Technical Reports Server (NTRS)

    Lanning, Bruce R.; Armstrong, Joseph H.; Misra, Mohan S.

    1994-01-01

    Polycrystalline thin-film photovoltaics represent one of the few (if not the only) renewable power sources with the potential to satisfy the demanding technical requirements for future space applications. The demand in space is for deployable, flexible arrays with high power-to-weight ratios and long-term stability (15-20 years). In addition, there is also the demand that these arrays be produced by scalable, low-cost, high-yield processes. An approach to significantly reduce costs and increase reliability is to interconnect individual cells in series via monolithic integration. Both CIS and CdTe semiconductor films are optimum absorber materials for thin-film n-p heterojunction solar cells, having band gaps between 0.9 and 1.5 eV and demonstrated small-area efficiencies, with cadmium sulfide window layers, above 16.5 percent. Both CIS and CdTe polycrystalline thin-film cells have been produced on a laboratory scale by a variety of physical and chemical deposition methods, including evaporation, sputtering, and electrodeposition. Translating the laboratory processes which yield these high-efficiency, small-area cells into the design of a manufacturing process capable of producing 1-sq ft modules, however, requires a quantitative understanding of each individual step in the process and the effect of each step on overall module performance. With a proper quantification and understanding of material transport and reactivity for each individual step, a manufacturing process can be designed that is not 'reactor-specific' and can be controlled intelligently with the design parameters of the process. The objective of this paper is to present an overview of the current efforts at MMC to develop large-scale manufacturing processes for both CIS and CdTe thin-film polycrystalline modules. CIS cells/modules are fabricated in a 'substrate configuration' by physical vapor deposition techniques, and CdTe cells/modules are fabricated in a 'superstrate configuration' by wet chemical methods. Both laser and mechanical scribing operations are used to monolithically integrate (series interconnect) the individual cells into modules. Results will be presented at the cell and module development levels with a brief description of the test methods used to qualify these devices for space applications. The approach and development efforts are directed towards large-scale manufacturability of established thin-film polycrystalline processing methods for large-area modules, with less emphasis on maximizing small-area efficiencies.

  19. How to use the world's scarce selenium resources efficiently to increase the selenium concentration in food

    PubMed Central

    Haug, Anna; Graham, Robin D.; Christophersen, Olav A.; Lyons, Graham H.

    2007-01-01

    The world's rare selenium resources need to be managed carefully. Selenium is extracted as a by-product of copper mining and there are no deposits that can be mined for selenium alone. Selenium has unique properties as a semi-conductor, making it of special value to industry, but it is also an essential nutrient for humans and animals and may promote plant growth and quality. Selenium deficiency is regarded as a major health problem for 0.5 to 1 billion people worldwide, while an even larger number may consume less selenium than is required for optimal protection against cancer, cardiovascular diseases and severe infectious diseases, including HIV disease. Efficient recycling of selenium is difficult. Selenium is added to some commercial fertilizers, but only a small proportion is taken up by plants and much of the remainder is lost to future utilization. Large biofortification programmes with selenium added to commercial fertilizers may therefore be a fortification method that is too wasteful to be applied to large areas of our planet. Direct addition of selenium compounds to food (process fortification) can be undertaken by the food industry. If selenomethionine is added directly to food, however, oxidation due to heat processing needs to be avoided. New ways to biofortify food products are needed, and it is generally observed that there is less wastage if selenium is added late in the production chain rather than early. On these bases we have proposed adding selenium-enriched, sprouted cereal grain during food processing as an efficient way to introduce this nutrient into deficient diets. Selenium is a non-renewable resource. There is now an enormous wastage of selenium associated with large-scale mining and industrial processing. We recommend that this be changed and that much of the selenium that is extracted be stockpiled for use as a nutrient by future generations. PMID:18833333

  20. Toward large-scale solar energy systems with peak concentrations of 20,000 suns

    NASA Astrophysics Data System (ADS)

    Kribus, Abraham

    1997-10-01

    The heliostat field plays a crucial role in defining the achievable limits of central receiver system efficiency and cost. Increasing system efficiency, and thus reducing the reflective area and system cost, can be achieved by increasing the concentration and the receiver temperature. The concentration achievable in central receiver plants, however, is constrained by current heliostat technology and design practices. The factors affecting field performance are surface and tracking errors, astigmatism, shadowing, blocking and dilution. These are geometric factors that can be systematically treated and reduced. We present improvements in collection optics and technology that may boost concentration (up to 20,000 suns peak), achievable temperature (2,000 K), and efficiency in solar central receiver plants. The increased performance may significantly reduce the cost of solar energy in existing applications and enable solar access to new ultra-high-temperature applications, such as future gas turbines approaching 60% combined-cycle efficiency, high-temperature thermo-chemical processes, and gas-dynamic processes.

  1. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

    PubMed Central

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-01-01

    With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) for OS-BFSAR imaging processing that accounts for motion errors is presented. This method can not only precisely handle large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency improvement. PMID:27845757
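    For orientation, the baseline that the ETDA accelerates is direct time-domain backprojection: every pixel coherently accumulates each echo sampled at that pixel's bistatic delay. The sketch below is a minimal illustration of that baseline under stated assumptions (range-compressed baseband echoes; all names and shapes are ours, not the paper's):

      import numpy as np

      def bistatic_backprojection(echoes, t_axis, tx_pos, rx_positions,
                                  pixels, fc, c=3.0e8):
          """Direct time-domain backprojection for one-stationary bistatic SAR.

          echoes:       (n_pulses, n_samples) range-compressed baseband data
          t_axis:       (n_samples,) fast-time axis in seconds
          tx_pos:       (3,) stationary transmitter position
          rx_positions: (n_pulses, 3) receiver position at each pulse
          pixels:       (n_pixels, 3) imaging-grid coordinates
          fc:           carrier frequency in Hz
          """
          image = np.zeros(len(pixels), dtype=complex)
          for m, rx in enumerate(rx_positions):
              # Bistatic delay: transmitter-to-pixel plus pixel-to-receiver path.
              delay = (np.linalg.norm(pixels - tx_pos, axis=1)
                       + np.linalg.norm(pixels - rx, axis=1)) / c
              # Sample this pulse at every pixel's delay (linear interpolation)...
              samples = (np.interp(delay, t_axis, echoes[m].real)
                         + 1j * np.interp(delay, t_axis, echoes[m].imag))
              # ...and compensate the carrier phase before coherent summation.
              image += samples * np.exp(2j * np.pi * fc * delay)
          return image

    The cost is O(n_pulses × n_pixels); the ETDA's subaperture recursion on polar grids is aimed precisely at cutting this cost while keeping the motion-error handling of the time-domain approach.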

  2. A learnable parallel processing architecture towards unity of memory and computing

    NASA Astrophysics Data System (ADS)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with a single type of device. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  3. A learnable parallel processing architecture towards unity of memory and computing.

    PubMed

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond the von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need for efficient information processing in data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with a single type of device. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement of von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  4. Effect of a new regeneration process by adsorption-coagulation and flocculation on the physicochemical properties and the detergent efficiency of regenerated cleaning solutions.

    PubMed

    Blel, Walid; Dif, Mehdi; Sire, Olivier

    2015-05-15

    Reprocessing soiled cleaning-in-place (CIP) solutions has large economic and environmental costs, and it would be cheaper and greener to recycle them. In food industries, recycling of CIP solutions requires a suitable green process engineered to take into account the extreme physicochemical conditions of cleaning while not altering process efficiency. To this end, an innovative treatment process combining adsorption-coagulation with flocculation was tested on multiple recycling of acid and basic cleaning solutions. An in-depth analysis of the time-course evolution of the physicochemical properties (concentration, surface tension, viscosity, COD, total nitrogen) of these solutions was carried out over the course of successive regenerations. Cleaning and disinfection efficiencies were assessed based on both microbiological analyses and organic matter detachment and solubilization from fouled stainless steel surfaces. Microbiological analyses using a resistant bacterial strain (Bacillus subtilis spores) highlighted that solutions regenerated up to 20 times maintained the same bactericidal efficiency as de novo NaOH solutions. The cleanability of stainless steel surfaces showed that regenerated solutions allow better surface wettability, which explains the improved detachment and solubilization found on different types of organic and inorganic fouling. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. On the design and operation of primary settling tanks in state of the art wastewater treatment and water resources recovery.

    PubMed

    Patziger, Miklos; Günthert, Frank Wolfgang; Jardin, Norbert; Kainz, Harald; Londong, Jörg

    2016-11-01

    In state-of-the-art wastewater treatment, primary settling tanks (PSTs) are considered an integral part of the biological wastewater and sludge treatment process, as well as of biogas and electric energy production. Consequently they strongly influence the efficiency of the entire wastewater treatment plant. However, in the last decades the inner physical processes of PSTs, which largely determine their efficiency, have been poorly addressed. In common practice PSTs are still designed and operated solely on the basis of the surface overflow rate and the hydraulic retention time (HRT), as a black box. The paper shows the results of a comprehensive investigation programme covering 16 PSTs. Their removal efficiency and inner physical processes (such as the settling of primary sludge), internal flow structures within PSTs and their impact on performance were investigated. The results show that: (1) the removal rates of PSTs are often underestimated in current design guidelines, (2) the removal rate of different PSTs shows a strongly fluctuating pattern even within the same range of HRT, and (3) the inlet design of PSTs becomes highly relevant to removal efficiency at rather high surface overflow rates, above 5 m/h, which is the upper design limit of PSTs for dry-weather load.

  6. GPU-accelerated element-free reverse-time migration with Gauss points partition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong

    2018-06-01

    An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of the EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, without careful computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve this problem of storage and computational efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with the CULA solver. To improve the computational efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
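    The storage saving cited here comes from the compressed sparse row layout itself: only the nonzeros and two index arrays are kept. A minimal SciPy illustration on a toy matrix (a stand-in for the EFM mass/stiffness matrices, not the authors' code):

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.linalg import spsolve

      # Toy stiffness-like matrix, mostly zeros.
      K = np.array([[ 4.0, 0.0, 0.0, -1.0],
                    [ 0.0, 3.0, 0.0,  0.0],
                    [ 0.0, 0.0, 5.0,  0.0],
                    [-1.0, 0.0, 0.0,  2.0]])
      f = np.array([1.0, 2.0, 3.0, 4.0])

      K_csr = csr_matrix(K)     # O(nnz) storage instead of O(n^2)
      print(K_csr.data)         # nonzero values, row by row
      print(K_csr.indices)      # column index of each nonzero
      print(K_csr.indptr)       # where each row starts within `data`

      u = spsolve(K_csr, f)     # solve K u = f without densifying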

  7. Modes of Large-Scale Brain Network Organization during Threat Processing and Posttraumatic Stress Disorder Symptom Reduction during TF-CBT among Adolescent Girls.

    PubMed

    Cisler, Josh M; Sigel, Benjamin A; Kramer, Teresa L; Smitherman, Sonet; Vanderzee, Karin; Pemberton, Joy; Kilts, Clinton D

    2016-01-01

    Posttraumatic stress disorder (PTSD) is often chronic and disabling across the lifespan. The gold standard treatment for adolescent PTSD is Trauma-Focused Cognitive-Behavioral Therapy (TF-CBT), though treatment response is variable and mediating neural mechanisms are not well understood. Here, we test whether PTSD symptom reduction during TF-CBT is associated with individual differences in large-scale brain network organization during emotion processing. Twenty adolescent girls, aged 11-16, with PTSD related to assaultive violence completed a 12-session protocol of TF-CBT. Participants completed an emotion processing task, in which neutral and fearful facial expressions were presented either overtly or covertly during 3T fMRI, before and after treatment. Analyses focused on characterizing network properties of modularity, assortativity, and global efficiency within an 824 region-of-interest brain parcellation separately during each of the task blocks using weighted functional connectivity matrices. We similarly analyzed an existing dataset of healthy adolescent girls undergoing an identical emotion processing task to characterize normative network organization. Pre-treatment individual differences in modularity, assortativity, and global efficiency during covert fear vs neutral blocks predicted PTSD symptom reduction. Patients who responded better to treatment had greater network modularity and assortativity but lesser efficiency, a pattern that closely resembled the control participants. At a group level, greater symptom reduction was associated with greater pre-to-post-treatment increases in network assortativity and modularity, but this was more pronounced among participants with less symptom improvement. The results support the hypothesis that modularized and resilient brain organization during emotion processing operate as mechanisms enabling symptom reduction during TF-CBT.
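    The three network properties named here have standard graph-theoretic definitions and can be computed directly from a weighted connectivity matrix. A NetworkX sketch on synthetic data (a small stand-in for the study's 824-region parcellation; thresholding to a binary graph for global efficiency is our simplifying assumption, since NetworkX's implementation is unweighted):

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      n = 60                                    # regions (toy size)
      W = np.abs(rng.normal(size=(n, n)))
      W = (W + W.T) / 2                         # symmetric connectivity
      np.fill_diagonal(W, 0.0)

      G = nx.from_numpy_array(W)                # edges carry 'weight'

      # Modularity of a partition found by greedy optimization.
      comms = nx.community.greedy_modularity_communities(G, weight="weight")
      Q = nx.community.modularity(G, comms, weight="weight")

      # Degree assortativity over edge weights.
      r = nx.degree_assortativity_coefficient(G, weight="weight")

      # Global efficiency on a median-thresholded binary graph.
      G_bin = nx.from_numpy_array((W > np.median(W)).astype(int))
      E = nx.global_efficiency(G_bin)

      print(f"modularity={Q:.3f} assortativity={r:.3f} efficiency={E:.3f}")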

  8. Inversion of very large matrices encountered in large scale problems of photogrammetry and photographic astrometry

    NASA Technical Reports Server (NTRS)

    Brown, D. C.

    1971-01-01

    The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is required neither for determinacy nor for the production of accurate results.
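    The efficiency rests on the bordered-banded structure: the banded block is cheap to solve, leaving only a small dense "border" (Schur complement) system for the globally coupled unknowns. A sketch of that block elimination with SciPy's banded solver (sizes and values are illustrative assumptions):

      import numpy as np
      from scipy.linalg import solve_banded

      # Normal equations [[A, B], [B.T, C]] [x; y] = [f; g],
      # with A banded (tridiagonal here) and a small dense border.
      n, m = 8, 2
      rng = np.random.default_rng(1)

      ab = np.zeros((3, n))                 # banded storage, l = u = 1
      ab[1] = 4.0 + rng.random(n)           # main diagonal
      ab[0, 1:] = ab[2, :-1] = -1.0         # super- and subdiagonals

      B = rng.random((n, m))
      C = 10.0 * np.eye(m)
      f, g = rng.random(n), rng.random(m)

      # Block elimination: banded solves against B and f, then an
      # m-by-m Schur-complement system for the border unknowns y.
      Z = solve_banded((1, 1), ab, B)
      w = solve_banded((1, 1), ab, f)
      S = C - B.T @ Z                       # Schur complement of A
      y = np.linalg.solve(S, g - B.T @ w)
      x = w - Z @ y                         # back-substitute banded part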

  9. A robust real-time abnormal region detection framework from capsule endoscopy images

    NASA Astrophysics Data System (ADS)

    Cheng, Yanfen; Liu, Xu; Li, Huiping

    2009-02-01

    In this paper we present a novel method to detect abnormal regions in capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' reviewing process expensive: it involves identifying images containing abnormal regions (tumor, bleeding, etc.) within this large image sequence. In this paper we construct a robust, real-time framework for detecting abnormal regions in large numbers of capsule endoscopy images. The detected potential abnormal regions can be labeled automatically for physicians to review further, thereby shortening the overall reviewing process. The framework has the following advantages: (1) Trainable. Users can define and label any type of abnormal region they want to find; abnormal regions such as tumors or bleeding can be pre-defined and labeled using the graphical user interface tool we provide. (2) Efficient. Given the large number of images, detection speed is very important; our system can detect very efficiently at different scales thanks to the integral image features we use. (3) Robust. After feature selection we use a cascade of classifiers to further enforce detection accuracy.
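    The "efficient at different scales" claim rests on integral image (summed-area table) features: once the table is built, any rectangular sum costs four lookups regardless of window size. A minimal NumPy sketch (ours, not the authors' implementation):

      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero border: ii[r, c] = img[:r, :c].sum()."""
          return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

      def box_sum(ii, r0, c0, r1, c1):
          """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

      img = np.arange(16.0).reshape(4, 4)
      ii = integral_image(img)
      assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()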

  10. Silicon-on-ceramic Process: Silicon Sheet Growth and Device Development for the Large-area Silicon Sheet and Cell Development Tasks of the Low-cost Solar Array Project

    NASA Technical Reports Server (NTRS)

    Chapman, P. W.; Zook, J. D.; Heaps, J. D.; Grung, B. L.; Koepke, B.; Schuldt, S. B.

    1979-01-01

    Significant progress is reported in fabricating a 4 sq cm cell having a 10.1 percent conversion efficiency and a 10 sq cm cell having a 9.2 percent conversion efficiency. The continuous (SCIM) coater succeeded in producing a 16 sq cm coating exhibiting unidirectional solidification and large grain size. A layer was grown at 0.2 cm/sec in the experimental coater which was partially dendritic but also contained a large smooth area approximately 100 μm thick. The dark characteristic measurements of a typical SCC solar cell yield shunt resistance values of 10 kilohms and series resistance values of 0.4 ohm. The production dip-coater is operating at over 50 percent yield in terms of good cell quality material. The most recent run yielded 13 good substrates out of 15.

  11. Simulation based energy-resource efficient manufacturing integrated with in-process virtual management

    NASA Astrophysics Data System (ADS)

    Katchasuwanmanee, Kanet; Cheng, Kai; Bateman, Richard

    2016-09-01

    As energy efficiency is one of the key essentials of sustainability, the development of an energy-resource-efficient manufacturing system is among the great challenges facing industry today. Meanwhile, the availability of advanced technological innovation has created more complex manufacturing systems that involve a large variety of processes and machines serving different functions. To extend the limited knowledge on energy-efficient scheduling, the research presented in this paper attempts to model the production schedule at an operation process by considering the balance of energy consumption reduction in production, production work flow (productivity) and quality. An innovative systematic approach to manufacturing energy-resource efficiency is proposed with virtual simulation as a predictive modelling enabler, which provides real-time manufacturing monitoring, virtual displays and decision-making, and consequently an analytical, multidimensional correlation analysis of the interdependent relationships among energy consumption, work flow and quality errors. The regression analysis results demonstrate positive relationships between work flow and quality errors and between work flow and energy consumption. When production scheduling is controlled through optimization of work flow, quality errors and overall energy consumption, energy-resource efficiency can be achieved in production. Together, this multidimensional modelling and analysis approach provides optimal conditions for production scheduling in the manufacturing system by taking account of production quality, energy consumption and resource efficiency, which can lead to key competitive advantages and sustainable system operations in industry.

  12. Tunable resonance-domain diffraction gratings based on electrostrictive polymers.

    PubMed

    Axelrod, Ramon; Shacham-Diamand, Yosi; Golub, Michael A

    2017-03-01

    A critical combination of high diffraction efficiency and large diffraction angles can be delivered by resonance-domain diffractive optics with high aspect ratios and wavelength-scale grating periods. To advance from static to electrically tunable resonance-domain diffraction gratings, we resorted to replication onto 2-5 μm thick P(VDF-TrFE-CFE) electrostrictive ter-polymer membranes. Electromechanical and optical computer simulations indicated higher than 90% diffraction efficiency, a large continuous deflection range exceeding 20°, and capabilities for adiabatic spatial modulation of the grating period and slant. A prototype of the tunable resonance-domain diffraction grating was fabricated in a soft-stamp thermal nanoimprinting process, characterized and optically tested, providing an experimental proof of feasibility for tunable sub-micron-period gratings on electrostrictive polymers.

  13. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high-performance computation of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
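    As a concrete point of reference for what such strategies do, the simplest member of this family is greedy largest-first assignment: sort grids by size and always give the next grid to the least-loaded processor. The paper's three strategies refine this idea; the sketch below shows only that baseline (names hypothetical):

      def balance(grid_sizes, n_procs):
          """Assign overset grids to processors, largest grid first,
          always onto the currently least-loaded processor."""
          loads = [0] * n_procs
          assignment = {}
          for gid, size in sorted(enumerate(grid_sizes), key=lambda kv: -kv[1]):
              p = loads.index(min(loads))   # least-loaded processor
              assignment[gid] = p
              loads[p] += size
          return assignment, loads

      # Ten grids of varying point counts mapped onto 4 processors.
      assignment, loads = balance([120, 80, 60, 200, 45, 90, 150, 30, 70, 110], 4)
      print(assignment, loads)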

  14. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods for the spectral-spatial classification of similar-looking types of vegetation on the basis of hyperspectral remote sensing data of the Earth, which take into account the local neighborhoods of the analyzed image pixels, is studied experimentally. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large hyperspectral image and for a test fragment of it, with different methods of training set construction, are reported. The classification accuracy in all cases is estimated by comparing ground-truth data with the classification maps produced by the compared methods. The reasons for the differences in these estimates are discussed.

  15. High efficiency solution processed sintered CdTe nanocrystal solar cells: the role of interfaces.

    PubMed

    Panthani, Matthew G; Kurley, J Matthew; Crisp, Ryan W; Dietz, Travis C; Ezzyat, Taha; Luther, Joseph M; Talapin, Dmitri V

    2014-02-12

    Solution processing of photovoltaic semiconducting layers offers the potential for drastic cost reduction through improved materials utilization and high device throughput. One compelling solution-based processing strategy produces semiconductor layers by sintering nanocrystals into large-grain semiconductors at relatively low temperatures. Using n-ZnO/p-CdTe as a model system, we fabricate sintered CdTe nanocrystal solar cells processed at 350 °C with power conversion efficiencies (PCE) as high as 12.3%. JSC values of over 25 mA cm(-2) are achieved, comparable to or higher than those achieved using traditional, close-space-sublimated CdTe. We find that the VOC can be substantially increased by applying forward bias for short periods of time. Capacitance measurements as well as intensity- and temperature-dependent analysis indicate that the increased VOC is likely due to relaxation of an energetic barrier at the ITO/CdTe interface.

  16. Efficient hybrid evolutionary algorithm for optimization of a strip coiling process

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Park, Won-Woong; Kim, Dong-Kyu; Im, Yong-Taek; Bureerat, Sujin; Kwon, Hyuck-Cheol; Chun, Myung-Sik

    2015-04-01

    This article proposes an efficient metaheuristic, based on the hybridization of teaching-learning-based optimization and differential evolution, for improving the flatness of a strip during a strip coiling process. Differential evolution operators were integrated into teaching-learning-based optimization, with a Latin hypercube sampling technique used to generate the initial population. The objective function was introduced to reduce the axial inhomogeneity of the stress distribution and the maximum compressive stress, calculated by Love's elastic solution within the thin strip, which may cause an irregular surface profile of the strip during the strip coiling process. The hybrid optimizer and several well-established evolutionary algorithms (EAs) were used to solve the optimization problem. The comparative studies show that the proposed hybrid algorithm outperformed the other EAs in terms of convergence rate and consistency. It was found that the proposed hybrid approach is powerful for process optimization, especially for large-scale design problems.
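    To make the hybridization concrete, the sketch below layers DE/rand/1 mutation and crossover onto a TLBO teacher phase, with a Latin-hypercube initial population. The objective is a placeholder, and every parameter value is our assumption, not the paper's setting:

      import numpy as np

      rng = np.random.default_rng(42)

      def latin_hypercube(n_pop, dim, lo, hi):
          """One sample per stratum in each dimension."""
          strata = np.tile(np.arange(n_pop), (dim, 1))
          u = (rng.permuted(strata, axis=1).T + rng.random((n_pop, dim))) / n_pop
          return lo + u * (hi - lo)

      def sphere(x):                       # placeholder for the stress-
          return float(np.sum(x ** 2))     # inhomogeneity objective

      def hybrid_tlbo_de(f, dim=5, n_pop=20, iters=200, lo=-5.0, hi=5.0, F=0.5):
          X = latin_hypercube(n_pop, dim, lo, hi)
          fit = np.array([f(x) for x in X])
          for _ in range(iters):
              best, mean = X[fit.argmin()], X.mean(axis=0)
              for i in range(n_pop):
                  # TLBO teacher phase: move learners toward the best.
                  Tf = rng.integers(1, 3)                  # teaching factor
                  cand = X[i] + rng.random(dim) * (best - Tf * mean)
                  # DE/rand/1 mutation + binomial crossover on top.
                  a, b, c = X[rng.choice(n_pop, 3, replace=False)]
                  cand = np.where(rng.random(dim) < 0.5, a + F * (b - c), cand)
                  cand = np.clip(cand, lo, hi)
                  fc = f(cand)
                  if fc < fit[i]:                          # greedy selection
                      X[i], fit[i] = cand, fc
          return X[fit.argmin()], fit.min()

      x_best, f_best = hybrid_tlbo_de(sphere)
      print(f_best)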

  17. Enhanced Energy Density in Permanent Magnets using Controlled High Magnetic Field during Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, Orlando; Carter, Bill; Constantinides, Steve

    This ORNL Manufacturing Demonstration Facility (MDF) technical collaboration focused on the use of high-magnetic-field processing (>2 Tesla), using energy-efficient large-bore superconducting magnet technology and high-frequency electromagnetics, to improve magnet performance and reduce the energy budget associated with Alnico thermal processing. Alnico alloys, containing Al, Ni, Co and Fe, represent a class of functional nanostructured alloys and show the greatest potential for supplementing or replacing commercial Nd-based rare-earth alloy magnets.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Paul W; Chen, Jingguang G.; Crooks, Richard M.

    Nitrogen is fundamental to all of life and many industrial processes. The interchange of nitrogen oxidation states in the industrial production of ammonia, nitric acid, and other commodity chemicals is largely powered by fossil fuels. A key goal of contemporary research in the field of nitrogen chemistry is to minimize the use of fossil fuels by developing more efficient heterogeneous, homogeneous, photo-, and electrocatalytic processes or by adapting the enzymatic processes underlying the natural nitrogen cycle. These approaches, as well as the challenges involved, are discussed in this Review.

  19. Meta-control of combustion performance with a data mining approach

    NASA Astrophysics Data System (ADS)

    Song, Zhe

    Large-scale combustion processes are complex and pose challenges for optimizing their performance. Traditional approaches based on thermal dynamics have limitations in finding optimal operational regions due to the time-shifting nature of the process. Recent advances in information technology enable people to collect large volumes of process data easily and continuously. The collected process data contain rich information about the process and, to some extent, represent a digital copy of the process over time. Although large volumes of data exist in industrial combustion processes, they are not fully utilized to the level where the process can be optimized. Data mining is an emerging science which finds patterns or models in large data sets. It has found many successful applications in business marketing, medical and manufacturing domains. The focus of this dissertation is on applying data mining to industrial combustion processes, and ultimately optimizing combustion performance; however, the philosophy, methods and frameworks discussed in this research can also be applied to other industrial processes. Optimizing an industrial combustion process has two major challenges. One is that the underlying process model changes over time, so obtaining an accurate process model is nontrivial. The other is that a process model with high fidelity is usually highly nonlinear, so solving the optimization problem needs efficient heuristics. This dissertation sets out to solve these two major challenges. The major contribution of this 4-year research is a data-driven solution to optimize the combustion process, in which the process model or knowledge is identified from the process data, and optimization is then executed by evolutionary algorithms to search for optimal operating regions.

  20. Improving bed turnover time with a bed management system.

    PubMed

    Tortorella, Frank; Ukanowicz, Donna; Douglas-Ntagha, Pamela; Ray, Robert; Triller, Maureen

    2013-01-01

    Efficient patient throughput requires a high degree of coordination and communication. Opportunities abound to improve the patient experience by eliminating waste from the process and improving communication among the multiple disciplines involved in facilitating patient flow. In this article, we demonstrate how an interdisciplinary team at a large tertiary cancer center implemented an electronic bed management system to improve the bed turnover component of the patient throughput process.

  1. Propagation Environment Assessment Using UAV Electromagnetic Sensors

    DTIC Science & Technology

    2018-03-01

    could be added, we limit this study to two dimensions.) The computer program then processes the data and determines the existence of any atmospheric... computer to have large processing capacity, and a typical workstation desktop or laptop can perform the function. E. FLIGHT PATTERNS AND DATA...different types of flight patterns were studied, and our findings show that the vertical flight pattern using a rotary platform is more efficient

  2. Protein Folding Using a Vortex Fluidic Device.

    PubMed

    Britton, Joshua; Smith, Joshua N; Raston, Colin L; Weiss, Gregory A

    2017-01-01

    Essentially all biochemistry and most molecular biology experiments require recombinant proteins. However, large, hydrophobic proteins typically aggregate into insoluble and misfolded species, and are directed into inclusion bodies. Current techniques to fold proteins recovered from inclusion bodies rely on denaturation followed by dialysis or rapid dilution. Such approaches can be time consuming, wasteful, and inefficient. Here, we describe rapid protein folding using a vortex fluidic device (VFD). This process uses mechanical energy introduced into thin films to rapidly and efficiently fold proteins. With the VFD in continuous flow mode, large volumes of protein solution can be processed per day with 100-fold reductions in both folding times and buffer volumes.

  3. A Portable Computer System for Auditing Quality of Ambulatory Care

    PubMed Central

    McCoy, J. Michael; Dunn, Earl V.; Borgiel, Alexander E.

    1987-01-01

    Prior efforts to effectively and efficiently audit quality of ambulatory care based on comprehensive process criteria have been limited largely by the complexity and cost of data abstraction and management. Over the years, several demonstration projects have generated large sets of process criteria and mapping systems for evaluating quality of care, but these paper-based approaches have been impractical to implement on a routine basis. Recognizing that portable microcomputers could solve many of the technical problems in abstracting data from medical records, we built upon previously described criteria and developed a microcomputer-based abstracting system that facilitates reliable and cost-effective data abstraction.

  4. pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data

    NASA Astrophysics Data System (ADS)

    Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.

    The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data being generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is Unix-based, open-source software (BSD licensed) written in Python.
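    The core of PCA-based trajectory compression is simple: centre the flattened coordinates, keep the leading principal components, and store only the low-dimensional scores. A NumPy sketch on synthetic data (our illustration of the idea, not pyPcazip's actual implementation):

      import numpy as np

      rng = np.random.default_rng(7)
      n_frames, n_atoms = 500, 100
      # Hypothetical trajectory flattened to (n_frames, 3 * n_atoms),
      # assumed already fitted to remove global rotation/translation.
      X = rng.normal(size=(n_frames, 3 * n_atoms))

      mean = X.mean(axis=0)
      U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

      var = s ** 2 / (s ** 2).sum()
      k = int(np.searchsorted(np.cumsum(var), 0.90) + 1)  # keep 90% variance

      scores = U[:, :k] * s[:k]            # compressed frames, (n_frames, k)
      X_approx = scores @ Vt[:k] + mean    # lossy reconstruction

      ratio = X.size / (scores.size + Vt[:k].size + mean.size)
      print(f"kept {k} PCs, compression ratio ~{ratio:.1f}x")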

  5. Genten: Software for Generalized Tensor Decompositions v. 1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phipps, Eric T.; Kolda, Tamara G.; Dunlavy, Daniel

    Tensors, or multidimensional arrays, are a powerful mathematical means of describing multiway data. This software provides computational means for decomposing or approximating a given tensor in terms of smaller tensors of lower dimension, focusing on the decomposition of large, sparse tensors. These techniques have applications in many scientific areas, including signal processing, linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience and more. The software is designed to take advantage of the parallelism present in emerging computer architectures such as multi-core CPUs, many-core accelerators such as the Intel Xeon Phi, and computation-oriented GPUs to enable efficient processing of large tensors.

  6. Combining resources, combining forces: regionalizing hospital library services in a large statewide health system.

    PubMed

    Martin, Heather J; Delawska-Elliott, Basia

    2015-01-01

    After a reduction in full-time equivalents, 2 libraries in large teaching hospitals and 2 libraries in small community hospitals in a western US statewide health system saw opportunity for expansion through a regional reorganization. Despite a loss of 2/3 of the professional staff and a budgetary decrease of 27% over the previous 3 years, the libraries were able to grow business, usage, awareness, and collections through organizational innovation and improved efficiency. This paper describes the experience--including process, challenges, and lessons learned--of an organizational shift to regionalized services, collections, and staffing. Insights from this process may help similar organizations going through restructuring.

  7. In-situ Roll-to-Roll Printing of Highly Efficient Organic Solar Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Zhenan; Toney, Michael; Clancy, Paulette

    2016-05-30

    This project focuses on developing a roll-to-roll printing setup for organic solar cells with the capability to follow the film formation in situ with small- and wide-angle X-ray scattering, and to improve the performance of printed organic solar cells. We demonstrated the use of the printing setup to capture important aspects of existing industrial printing methods, which ensures that the solar cell performance achieved in our printing experiments would be largely retained in an industrial fabrication process. We employed both known and newly synthesized polymers as the donor and acceptor materials, and we studied the morphological changes in real time during the printing process by X-ray scattering. Our experimental efforts were also accompanied by theoretical modeling of both the fluid-dynamic aspects of the printing process and the nucleation and crystallization kinetics during film formation. The combined insight into the printing process gained from the research provides a detailed understanding of the factors governing a printed solar cell's performance. Finally, using the knowledge we gained, we demonstrated large-area (>10 cm²) printed organic solar cells with more than 5 percent power conversion efficiency, the best performance achieved for roll-to-roll printed organic solar cells.

  8. A new dawn for industrial photosynthesis.

    PubMed

    Robertson, Dan E; Jacobson, Stuart A; Morgan, Frederick; Berry, David; Church, George M; Afeyan, Noubar B

    2011-03-01

    Several emerging technologies are aiming to meet renewable fuel standards, mitigate greenhouse gas emissions, and provide viable alternatives to fossil fuels. Direct conversion of solar energy into fungible liquid fuel is a particularly attractive option, though conversion of that energy on an industrial scale depends on the efficiency of its capture and conversion. Large-scale programs have been undertaken in the recent past that used solar energy to grow innately oil-producing algae for biomass processing to biodiesel fuel. These efforts were ultimately deemed to be uneconomical because the costs of culturing, harvesting, and processing of algal biomass were not balanced by the process efficiencies for solar photon capture and conversion. This analysis addresses solar capture and conversion efficiencies and introduces a unique systems approach, enabled by advances in strain engineering, photobioreactor design, and a process that contradicts prejudicial opinions about the viability of industrial photosynthesis. We calculate efficiencies for this direct, continuous solar process based on common boundary conditions, empirical measurements and validated assumptions wherein genetically engineered cyanobacteria convert industrially sourced, high-concentration CO(2) into secreted, fungible hydrocarbon products in a continuous process. These innovations are projected to operate at areal productivities far exceeding those based on accumulation and refining of plant or algal biomass or on prior assumptions of photosynthetic productivity. This concept, currently enabled for production of ethanol and alkane diesel fuel molecules, and operating at pilot scale, establishes a new paradigm for high productivity manufacturing of nonfossil-derived fuels and chemicals.

  9. Efficient mixing of the solar nebula from uniform Mo isotopic composition of meteorites.

    PubMed

    Becker, Harry; Walker, Richard J

    2003-09-11

    The abundances of elements and their isotopes in our Galaxy show wide variations, reflecting different nucleosynthetic processes in stars and the effects of Galactic evolution. These variations contrast with the uniformity of stable isotope abundances for many elements in the Solar System, which implies that processes efficiently homogenized dust and gas from different stellar sources within the young solar nebula. However, isotopic heterogeneity has been recognized on the subcentimetre scale in primitive meteorites, indicating that these preserve a compositional memory of their stellar sources. Small differences in the abundance of stable molybdenum isotopes in bulk rocks of some primitive and differentiated meteorites, relative to terrestrial Mo, suggest large-scale Mo isotopic heterogeneity between some inner Solar System bodies, which implies physical conditions that did not permit efficient mixing of gas and dust. Here we report Mo isotopic data for bulk samples of primitive and differentiated meteorites that show no resolvable deviations from terrestrial Mo. This suggests efficient mixing of gas and dust in the solar nebula at least to 3 au from the Sun, possibly induced by magnetohydrodynamic instabilities. These mixing processes must have occurred before isotopic fractionation of gas-phase elements and volatility-controlled chemical fractionations were established.

  10. High-Efficiency InGaN/GaN Quantum Well-Based Vertical Light-Emitting Diodes Fabricated on β-Ga2O3 Substrate.

    PubMed

    Muhammed, Mufasila M; Alwadai, Norah; Lopatin, Sergei; Kuramata, Akito; Roqan, Iman S

    2017-10-04

    We demonstrate a state-of-the-art high-efficiency GaN-based vertical light-emitting diode (VLED) grown on a transparent and conductive (-201)-oriented β-Ga2O3 substrate, obtained using a straightforward growth process that does not require a high-cost lift-off technique or a complex fabrication process. High-resolution scanning transmission electron microscopy (STEM) images confirm that we produced high-quality upper layers, including a multiquantum well (MQW) grown on the masked β-Ga2O3 substrate. STEM imaging also shows a well-defined MQW without InN diffusion into the barrier. Electroluminescence (EL) measurements at room temperature indicate that we achieved a very high internal quantum efficiency (IQE) of 78%; at lower temperatures, IQE reaches ∼86%. The photoluminescence (PL) and time-resolved PL analysis indicate that, at a high carrier injection density, the emission is dominated by radiative recombination with a negligible Auger effect; no quantum-confined Stark effect is observed. At low temperatures, no efficiency droop is observed at a high carrier injection density, indicating a superior VLED structure obtained without lift-off processing, which is cost-effective for large-scale devices.

  11. An efficient visualization method for analyzing biometric data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay

    2013-05-01

    We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of extracted biometric features. Biometric databases must be checked for accuracy across a large volume of data attributes. Our solution accelerates the review of features by a factor of up to 100. Qualitative results and cost reduction are demonstrated through efficient parallel visual review for quality control. Our process automatically sorts and filters features for examination and packs them into a condensed view. An analyst can then rapidly page through screens of features, flagging and annotating outliers as necessary.

  12. Resistivity Problems in Electrostatic Precipitation

    ERIC Educational Resources Information Center

    White, Harry J.

    1974-01-01

    The process of electrostatic precipitation has ever-increasing application in more efficient collection of fine particles from industrial air emissions. This article details a large number of new developments in the field. The emphasis is on high resistivity particles which are a common cause of poor precipitator performance. (LS)

  13. Systems and Cascades in Cognitive Development and Academic Achievement

    ERIC Educational Resources Information Center

    Bornstein, Marc H.; Hahn, Chun-Shin; Wolke, Dieter

    2013-01-01

    A large-scale ("N" = 552) controlled multivariate prospective 14-year longitudinal study of a developmental cascade embedded in a developmental system showed that information-processing efficiency in infancy (4 months), general mental development in toddlerhood (18 months), behavior difficulties in early childhood (36 months),…

  14. Uvf - Unified Volume Format: A General System for Efficient Handling of Large Volumetric Datasets.

    PubMed

    Krüger, Jens; Potter, Kristin; Macleod, Rob S; Johnson, Christopher

    2008-01-01

    With the continual increase in computing power, volumetric datasets with sizes ranging from only a few megabytes to petascale are generated thousands of times per day. Such data may come from an ordinary source such as simple everyday medical imaging procedures, while larger datasets may be generated from cluster-based scientific simulations or measurements of large scale experiments. In computer science an incredible amount of work worldwide is put into the efficient visualization of these datasets. As researchers in the field of scientific visualization, we often have to face the task of handling very large data from various sources. This data usually comes in many different data formats. In medical imaging, the DICOM standard is well established, however, most research labs use their own data formats to store and process data. To simplify the task of reading the many different formats used with all of the different visualization programs, we present a system for the efficient handling of many types of large scientific datasets (see Figure 1 for just a few examples). While primarily targeted at structured volumetric data, UVF can store just about any type of structured and unstructured data. The system is composed of a file format specification with a reference implementation of a reader. It is not only a common, easy to implement format but also allows for efficient rendering of most datasets without the need to convert the data in memory.

  15. Precipitation Efficiency in the Tropical Deep Convective Regime

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K.-M.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Precipitation efficiency in the tropical deep convective regime is analyzed based on a 2-D cloud-resolving simulation. The cloud-resolving model is forced by the large-scale vertical velocity, zonal wind, and large-scale horizontal advections derived from TOGA COARE for a 20-day period. Precipitation efficiency may be defined as the ratio of the surface rain rate to the sum of surface evaporation and moisture convergence (LSPE), or as the ratio of the surface rain rate to the sum of the condensation and deposition rates of supersaturated vapor (CMPE). The moisture budget shows that the atmosphere is moistened (dried) when the LSPE is less (more) than 100%. The LSPE can be larger than 100% for strong convection. This indicates that drying processes should be included in cumulus parameterization to avoid moisture bias. Statistical analysis shows that the sum of the condensation and deposition rates is about 80% of the sum of the surface evaporation rate and moisture convergence, which leads to a proportional relation between the two efficiencies when both are less than 100%. The CMPE increases with increasing mass-weighted mean temperature and increasing surface rain rate. This suggests that precipitation is more efficient in a warm environment and for strong convection. An approximate balance among the condensation, deposition, rain, and raindrop evaporation rates is used to derive an analytical solution for the CMPE.
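    Written out explicitly in LaTeX notation (the symbols are ours, not the paper's):

      \mathrm{LSPE} = \frac{P_s}{E_s + C_q}, \qquad
      \mathrm{CMPE} = \frac{P_s}{R_{\mathrm{cond}} + R_{\mathrm{dep}}}

    where P_s is the surface rain rate, E_s the surface evaporation rate, C_q the large-scale moisture convergence, and R_cond and R_dep the condensation and deposition rates of supersaturated vapor. The proportionality noted above follows because the statistics give R_cond + R_dep ≈ 0.8 (E_s + C_q).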

  16. From field notes to data portal - An operational QA/QC framework for tower networks

    NASA Astrophysics Data System (ADS)

    Sturtevant, C.; Hackley, S.; Meehan, T.; Roberti, J. A.; Holling, G.; Bonarrigo, S.

    2016-12-01

    Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. This is especially so for environmental sensor networks collecting numerous high-frequency measurement streams at distributed sites. Here, the quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from the natural environment. To complicate matters, there are often multiple personnel managing different sites or different steps in the data flow. For large, centrally managed sensor networks such as NEON, the separation of field and processing duties is in the extreme. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process relying on visual inspection of the data. In addition, notes of observed measurement interference or visible problems are often recorded on paper without an explicit pathway to data flagging during processing. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. There is a need for a scalable, operational QA/QC framework that combines the efficiency and standardization of automated tests with the power and flexibility of visual checks, and includes an efficient communication pathway from field personnel to data processors to end users. Here we propose such a framework and an accompanying set of tools in development, including a mobile application template for recording tower maintenance and an R/shiny application for efficiently monitoring and synthesizing data quality issues. This framework seeks to incorporate lessons learned from the Ameriflux community and provide tools to aid continued network advancements.

  17. Optical Fiber Design And Fabrication: Discussion On Recent Developments

    NASA Astrophysics Data System (ADS)

    Roy, Philippe; Devautour, Mathieu; Lavoute, Laure; Gaponov, Dmitry; Brasse, Gurvan; Hautreux, Stéphanie; Février, Sébastien; Restoin, Christine; Auguste, Jean-Louis; Gérôme, Frédéric; Humbert, Georges; Blondy, Jean-Marc

    2008-10-01

    The emitted power and beam quality of single-mode fiber lasers have been drastically increased, at the expense of loss due to bend sensitivity and of simplicity of manufacturing and packaging. Furthermore, the extension of the spectral coverage has primarily been explored by exploiting non-linear effects, neglecting numerous possible transitions of rare earths. Through different research areas, we demonstrate the possibilities offered by new fiber designs and alternative methods of manufacturing. Photonic band gap fibers reconcile a diffraction-limited beam and a large mode area with low bending loss. An 80% slope efficiency is demonstrated, together with robust propagation allowing the fiber to be bent tightly, down to winding radii as small as 6 cm. A fiber with a highly ytterbium-doped multimode core surrounded by high-refractive-index rods exhibits transverse single-mode behavior under continuous-wave laser operation. A robust LP01 mode and a clear filtering effect are observed. A non-CVD process based on silica sand vitrification allows the synthesis of large, highly doped cores with high index homogeneity, opening the way to the design of efficient large-mode-area fiber lasers. A 74% slope efficiency is measured, demonstrating the good quality of the core material. Finally, the use of rare-earth (Er3+) doped zirconia nanocrystals in a silica matrix offers a large panel of hitherto ignored energy transitions for visible or off-usual bands of emission.

  18. Lasers in energy device manufacturing

    NASA Astrophysics Data System (ADS)

    Ostendorf, A.; Schoonderbeek, A.

    2008-02-01

    Global warming is a current topic all over the world. CO2 emissions must be lowered to stop the climate change that has already started. Developing regenerative energy sources, such as photovoltaics and fuel cells, contributes to the solution of this problem, but these innovative technologies and strategies need to be competitive with conventional energy sources. During the last years, the photovoltaic solar cell industry has experienced enormous growth. However, for solar cells to be competitive in the longer term, both an increase in efficiency and a reduction in costs are necessary. An effective method to reduce the costs of silicon solar cells is reducing the wafer thickness, because silicon makes up a large part of production costs. Here, contact-free laser processing has a large advantage, because it avoids the waste from wafers broken by contact-based manufacturing processes. Additionally, many novel high-efficiency solar cell concepts are only economically feasible with laser technology, e.g. for scribing silicon thin-film solar cells. This paper describes laser hole drilling, structuring and texturing of silicon-wafer-based solar cells, and describes thin-film solar cell scribing. Furthermore, different types of lasers are discussed with respect to processing quality and time.

  19. Investigating the Energy-Water Usage Efficiency of the Reuse of Treated Municipal Wastewater for Artificial Groundwater Recharge.

    PubMed

    Fournier, Eric D; Keller, Arturo A; Geyer, Roland; Frew, James

    2016-02-16

    This project investigates the energy-water usage efficiency of large-scale civil infrastructure projects involving the artificial recharge of subsurface groundwater aquifers via the reuse of treated municipal wastewater. A modeling framework is introduced which explores the various ways in which spatially heterogeneous variables such as topography, land use, and subsurface infiltration capacity combine to determine the physical layout of proposed reuse system components and their associated process energy-water demands. This framework is applied to the planning and evaluation of the energy-water usage efficiency of hypothetical reuse systems in five case study regions within the State of California. Findings from these case study analyses suggest that, in certain geographic contexts, the water requirements attributable to the process energy consumption of a reuse system can exceed the volume of water that it is able to recover by as much as an order of magnitude.
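
    The headline comparison reduces to a simple balance: the water embedded in the electricity a reuse system consumes versus the water it recharges. The sketch below shows that bookkeeping; every coefficient is an illustrative placeholder, not a value from the study.

      # Net water balance of a reuse system: water recharged vs. water
      # embedded in the electricity the system consumes. All coefficients
      # below are illustrative placeholders, not the study's values.
      energy_kwh_per_m3 = 3.5     # pumping + treatment energy per m3 reused
      water_m3_per_kwh = 0.002    # water footprint of grid electricity

      recharged_m3 = 1.0
      embedded_m3 = energy_kwh_per_m3 * water_m3_per_kwh
      print(f"embedded water: {embedded_m3:.4f} m3 per m3 recharged")
      print(f"recovered/embedded ratio: {recharged_m3 / embedded_m3:.0f}x")
      # The paper's adverse cases correspond to this ratio dropping below
      # 0.1, i.e. the embedded water exceeding recovery roughly tenfold.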

  20. On the front and back side quantum efficiency differences in semi-transparent organic solar cells and photodiodes

    NASA Astrophysics Data System (ADS)

    Bouthinon, B.; Clerc, R.; Verilhac, J. M.; Racine, B.; De Girolamo, J.; Jacob, S.; Lienhard, P.; Joimel, J.; Dhez, O.; Revaux, A.

    2018-03-01

    The External Quantum Efficiency (EQE) of semi-transparent Bulk Hetero-Junction (BHJ) organic photodiodes processed in air shows significant differences when measured from the front or back side contacts. This difference is significantly reduced by decreasing the active layer thickness or by applying a negative bias. This work brings new elements to help understand this effect, providing a large set of experiments featuring different applied voltages, active layers, process conditions, and electron and hole layers. By means of detailed electrical simulations, all these measurements have been found consistent with a mechanism of irreversible photo-oxidation, modeled as deep trap states (and not as p-type doping). Measuring the EQE from the front and back sides is thus a simple and efficient way of monitoring the presence and amplitude of oxygen contamination in BHJ organic solar cells and photodiodes.

  1. Trust models for efficient communication in Mobile Cloud Computing and their applications to e-Commerce

    NASA Astrophysics Data System (ADS)

    Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos

    2016-11-01

    Managing the large volumes of data processed in distributed systems formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. The management of such systems can be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest, and for supporting ad-hoc business campaigns.
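
    As a rough illustration of a reputation metric that uses social links, the sketch below blends a node's first-hand experience with recommendations weighted by how much the node trusts each recommender. The function, weights, and scores are hypothetical; the paper's actual model is more elaborate.

      # Illustrative reputation blend (not the paper's actual metric).
      def update_trust(direct, recs, alpha=0.7):
          # direct: fraction of successful past interactions with the peer
          # recs:   list of (trust_in_recommender, recommended_score) pairs
          # alpha:  weight given to first-hand experience
          if recs:
              w = sum(t for t, _ in recs)
              social = sum(t * s for t, s in recs) / w if w else 0.0
          else:
              social, alpha = 0.0, 1.0
          return alpha * direct + (1 - alpha) * social

      print(update_trust(0.9, [(0.8, 0.6), (0.3, 0.1)]))  # ~0.77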

  2. Carbon membranes for efficient water-ethanol separation.

    PubMed

    Gravelle, Simon; Yoshida, Hiroaki; Joly, Laurent; Ybert, Christophe; Bocquet, Lydéric

    2016-09-28

    We demonstrate, on the basis of molecular dynamics simulations, the possibility of an efficient water-ethanol separation using nanoporous carbon membranes, namely, carbon nanotube membranes, nanoporous graphene sheets, and multilayer graphene membranes. While these carbon membranes are in general permeable to both pure liquids, they exhibit a counter-intuitive "self-semi-permeability" to water in the presence of water-ethanol mixtures. This originates in a preferred ethanol adsorption in nanoconfinement that prevents water molecules from entering the carbon nanopores. An osmotic pressure is accordingly expressed across the carbon membranes for the water-ethanol mixture, which agrees with the classic van't Hoff type expression. This suggests a robust and versatile membrane-based separation, built on a pressure-driven reverse-osmosis process across these carbon-based membranes. In particular, the recent development of large-scale "graphene-oxide" like membranes then opens an avenue for a versatile and efficient ethanol dehydration using this separation process, with possible application for bio-ethanol fabrication.

  3. Carbon membranes for efficient water-ethanol separation

    NASA Astrophysics Data System (ADS)

    Gravelle, Simon; Yoshida, Hiroaki; Joly, Laurent; Ybert, Christophe; Bocquet, Lydéric

    2016-09-01

    We demonstrate, on the basis of molecular dynamics simulations, the possibility of an efficient water-ethanol separation using nanoporous carbon membranes, namely, carbon nanotube membranes, nanoporous graphene sheets, and multilayer graphene membranes. While these carbon membranes are in general permeable to both pure liquids, they exhibit a counter-intuitive "self-semi-permeability" to water in the presence of water-ethanol mixtures. This originates in a preferred ethanol adsorption in nanoconfinement that prevents water molecules from entering the carbon nanopores. An osmotic pressure is accordingly expressed across the carbon membranes for the water-ethanol mixture, which agrees with the classic van't Hoff type expression. This suggests a robust and versatile membrane-based separation, built on a pressure-driven reverse-osmosis process across these carbon-based membranes. In particular, the recent development of large-scale "graphene-oxide" like membranes then opens an avenue for a versatile and efficient ethanol dehydration using this separation process, with possible application for bio-ethanol fabrication.

  4. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. The final design obtained in this study outperforms those obtained with previously proposed optimization algorithms.
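
    A hedged stand-in for the decomposition step: the sketch below assigns each node to its nearest source by pipe length, splitting a toy network into subnetworks that could be designed independently. Real EPANET source tracing apportions nodes by the fraction of flow received from each source; this shortest-path proxy and the toy network are only illustrative.

      import networkx as nx

      G = nx.Graph()
      G.add_weighted_edges_from([
          ("R1", "A", 100), ("A", "B", 150), ("B", "C", 200),
          ("R2", "D", 120), ("D", "C", 180), ("A", "D", 300),
      ])  # R1, R2 are reservoirs (sources); weights are pipe lengths (m)

      sources = ["R1", "R2"]
      dist = {s: nx.single_source_dijkstra_path_length(G, s) for s in sources}
      partition = {n: min(sources, key=lambda s: dist[s].get(n, float("inf")))
                   for n in G.nodes if n not in sources}
      print(partition)  # {'A': 'R1', 'B': 'R1', 'C': 'R2', 'D': 'R2'}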

  5. Energy Factors in Commercial Mortgages: Gaps and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, Paul; Coleman, Philip; Wallace, Nancy

    2016-09-01

    The commercial real estate mortgage market is enormous, with almost half a trillion dollars in deals originated in 2015. Relative to other energy efficiency financing mechanisms, very little attention has been paid to the potential of commercial mortgages as a channel for promoting energy efficiency investments. The valuation and underwriting elements of the business are largely driven by the “net operating income” (NOI) metric – essentially, rents minus expenses. While NOI ostensibly includes all expenses, energy factors are in several ways given short shrift in the underwriting process. This is particularly interesting when juxtaposed with a not insignificant body of research revealing that there are in fact tangible benefits (such as higher valuations and lower vacancy and default rates) for energy-efficient and “green” commercial buildings. This scoping report characterizes the current status and potential interventions to promote greater inclusion of energy factors in the commercial mortgage process.
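
    A back-of-envelope example of how energy factors could flow through NOI into valuation under a standard cap-rate appraisal; all figures are hypothetical, not from the report.

      rents, expenses = 1_000_000, 400_000     # annual, USD
      noi = rents - expenses                   # net operating income
      cap_rate = 0.06                          # value = NOI / cap rate
      value = noi / cap_rate

      energy_savings = 30_000                  # from an efficiency retrofit
      value_after = (noi + energy_savings) / cap_rate
      print(value, value_after)                # 10,000,000 -> 10,500,000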

  6. A series connection architecture for large-area organic photovoltaic modules with a 7.5% module efficiency.

    PubMed

    Hong, Soonil; Kang, Hongkyu; Kim, Geunjin; Lee, Seongyu; Kim, Seok; Lee, Jong-Hoon; Lee, Jinho; Yi, Minjin; Kim, Junghwan; Back, Hyungcheol; Kim, Jae-Ryoung; Lee, Kwanghee

    2016-01-05

    The fabrication of organic photovoltaic modules via printing techniques has been the greatest challenge for their commercial manufacture. The current module architecture, which is based on a monolithic geometry consisting of serially interconnected stripe-patterned subcells of finite width, requires highly sophisticated patterning processes that significantly increase the complexity of printing production lines and cause serious reductions in module efficiency due to so-called aperture loss in the series connection regions. Herein we demonstrate an innovative module structure that can simultaneously reduce both the patterning processes and the aperture loss. By using a charge recombination feature that occurs at contacts between electron- and hole-transport layers, we devise a series connection method that facilitates module fabrication without patterning the charge transport layers. With the successive deposition of component layers using slot-die and doctor-blade printing techniques, we achieve a high module efficiency of 7.5% with an area of 4.15 cm2.

  7. Large Spun Formed Friction-Stir Welded Tank Domes for Liquid Propellant Tanks Made from AA2195: A Technology Demonstration for the Next Generation of Heavy Lift Launchers

    NASA Technical Reports Server (NTRS)

    Stachulla, M.; Pernpeinter, R.; Brewster, J.; Curreri, P.; Hoffman, E.

    2010-01-01

    Improving structural efficiency while reducing manufacturing costs is a key objective in making future heavy-lift launchers more capable and cost-efficient. The main enabling technologies are the application of advanced high-performance materials and cost-effective manufacturing processes. This paper presents the status and main results of a joint industrial research and development effort to demonstrate TRL 6 of a novel manufacturing process for large liquid propellant tanks for launcher applications. Using a high-strength aluminium-lithium alloy combined with the spin forming manufacturing technique, this development aims at thinner walls and weight savings of up to 25%, as well as a significant reduction in manufacturing effort. In this program, the concave spin forming process is used to manufacture tank domes from a single flat plate. Applied to aluminium alloy, this process allows reaching the highest possible material strength temper, T8, eliminating the numerous welding steps that are typically necessary to assemble tank domes from 3D-curved panels. To minimize raw material costs for large-diameter launcher tank domes, the dome blank has been composed of standard plates welded together by friction stir welding prior to spin forming. After welding, the dome blank is contoured in order to meet the required wall thickness distribution. To achieve the T8 material state in the weld seams as well, the applied spin forming process allows the required cold stretching of the 3D-curved dome, with subsequent ageing in a furnace. This combined manufacturing process has been demonstrated up to TRL 6 for tank domes with a 5.4 m diameter. In this paper, the manufacturing process as well as test results are presented, and plans are shown for how this process could be applied to future heavy-lift launch vehicle developments, including larger dome diameters.

  8. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding of how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  9. Rapid thermal processing for production of chalcopyrite thin films for solar cells: Design, analysis, and experimental implementation

    NASA Astrophysics Data System (ADS)

    Lovelett, Robert J.

    The direct conversion of solar energy to electricity, or photovoltaic energy conversion, has a number of environmental, social, and economic advantages over conventional electricity generation from fossil fuels. Currently, the most commonly used material for photovoltaics is crystalline silicon, which is now produced at large scale, and silicon-based devices have achieved power conversion efficiencies over 25%. However, alternative materials, such as inorganic thin films, offer a number of advantages including the potential for lower manufacturing costs, higher theoretical efficiencies, and better performance in the field. One of these materials is the chalcopyrite Cu(InGa)(SeS)2, which has demonstrated module efficiencies over 17% and cell efficiencies over 22%. Cu(InGa)(SeS)2 is now in the early stages of commercialization using a precursor reaction process referred to as a "selenization/sulfization" reaction. The precursor reaction process is promising because it has demonstrated high efficiency along with the large-area (approximately 1 m2) uniformity that is required for modules. However, some challenges remain that limit the growth of the chalcopyrite solar cell industry, including slow reactions that limit process throughput, a limited understanding of the complex reaction kinetics and transport phenomena that affect the through-film composition, and the use of highly toxic H2Se in the reaction process. In this work, I approach each of these challenges. First, to improve process throughput, I designed and implemented a rapid thermal processing (RTP) reactor, whereby the samples are heated by a 1000 W quartz-halogen lamp that is capable of fast temperature ramps and high-temperature dwells. With the reactor in place, however, achieving effective temperature control in the thin-film material system is complicated by two intrinsic process characteristics: (i) the temperature of the Cu(InGa)(SeS)2 film cannot be measured directly, which leaves the system without complete state feedback; and (ii) the process is significantly nonlinear due to the dominance of radiative heat transfer at high temperatures. Therefore, I developed a novel control system using a first-principles-based observer and a specialized temperature controller. Next, to understand the complex kinetics governing the selenization/sulfization processes, a stochastic model of solid state reaction kinetics was developed and applied to the system. The model is capable of predicting several important phenomena observed experimentally, including steep through-film gradients in gallium mole fraction. Furthermore, the model is mathematically general and can be useful for understanding a number of solid state reaction systems. Finally, the RTP system was used to produce and characterize chalcopyrite films using two general methods: (i) single-stage and multi-stage reactions in H2Se and H2S, and (ii) reaction of a selenium-"capped" precursor in H2S, where selenium was deposited on the precursor by thermal evaporation and the use of toxic H2Se was avoided. It was found that the processing conditions could be used to control material properties including relative sulfur incorporation, crystallinity, and through-film gallium and sulfur profiles. Films produced using the selenium-capped precursor reaction process were used to fabricate solar cell devices with a Mo/Cu(InGa)(SeS)2/CdS/ZnO/ITO substrate device structure, and the devices were tested by measuring the current-voltage characteristic under standard conditions. Devices with approximately 10% efficiency were obtained over a range of compositions, and the best device obtained in this work had an efficiency of 12.7%.
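
    The control challenge described here, an unmeasured film temperature combined with strongly nonlinear radiative loss, can be illustrated with a minimal lumped-parameter model. The sketch below uses placeholder constants rather than the reactor's actual parameters; it shows the T^4 radiation term that any first-principles observer for such a system must capture.

      # Minimal lumped-parameter sketch of radiation-dominated lamp heating;
      # all constants are illustrative placeholders.
      SIGMA = 5.67e-8                # Stefan-Boltzmann constant (W m^-2 K^-4)
      EPS, AREA, HEATCAP = 0.7, 2.5e-3, 1.2   # emissivity, m^2, J/K
      T_AMB = 300.0

      def step(T, P_lamp, dt=0.01):
          # Forward-Euler update: lamp power in, T^4 radiative loss out.
          q_rad = EPS * SIGMA * AREA * (T**4 - T_AMB**4)
          return T + dt * (P_lamp - q_rad) / HEATCAP

      T = 300.0
      for _ in range(60_000):        # 10 simulated minutes at 10 ms steps
          T = step(T, P_lamp=150.0)
      print(round(T, 1))             # settles near the ~1100 K equilibrium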

  10. Large zeolites - Why and how to grow in space

    NASA Technical Reports Server (NTRS)

    Sacco, Albert, Jr.

    1991-01-01

    The growth of zeolite crystals, which are considered to be among the most valuable catalytic and adsorbent materials of the chemical processing industry, is discussed. It is proposed to use triethanolamine as a nucleation control agent to control the time release of Al in a zeolite A solution and to increase the average and maximum crystal size by 25-50 times. Large zeolites could be utilized to make membranes for reactors/separators, which would substantially increase their efficiency.

  11. Adapting a large database of point of care summarized guidelines: a process description.

    PubMed

    Delvaux, Nicolas; Van de Velde, Stijn; Aertgeerts, Bert; Goossens, Martine; Fauquert, Benjamin; Kunnamo, Ilka; Van Royen, Paul

    2017-02-01

    Questions posed at the point of care (POC) can be answered using POC summarized guidelines. To implement a national POC information resource, we subscribed to a large database of POC summarized guidelines to complement locally available guidelines. Our challenge was to develop a sustainable strategy for adapting almost 1000 summarized guidelines. The aim of this paper is to describe our process for adapting a database of POC summarized guidelines. An adaptation process based on the ADAPTE framework was tailored to be usable by a heterogeneous group of participants. Guidelines were assessed on content and on applicability to the Belgian context. To improve efficiency, we chose to first aim our efforts at those guidelines most important to primary care doctors. Over a period of 3 years, we screened about 80% of 1000 international summarized guidelines. For those guidelines identified as most important for primary care doctors, we noted that in about half of the cases remarks were made concerning content. On the other hand, at least two-thirds of all screened guidelines required no changes when evaluating their local usability. Adapting a large body of POC summarized guidelines using a formal adaptation process is possible, even when faced with limited resources. This can be done by creating an efficient and collaborative effort and ensuring user-friendly procedures. Our experience shows that even though in most cases guidelines can be adopted without adaptation, careful review of guidelines developed in a different context remains necessary. Streamlining international efforts in adapting international POC information resources and adopting similar adaptation processes may lessen duplication of effort and prove more cost-effective.

  12. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    PubMed Central

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have immediate impact on the thin-film photovoltaic industry. PMID:24603964

  13. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have immediate impact on the thin-film photovoltaic industry.

  14. Improved Reproducibility for Perovskite Solar Cells with 1 cm2 Active Area by a Modified Two-Step Process.

    PubMed

    Shen, Heping; Wu, Yiliang; Peng, Jun; Duong, The; Fu, Xiao; Barugkin, Chog; White, Thomas P; Weber, Klaus; Catchpole, Kylie R

    2017-02-22

    With rapid progress in recent years, organohalide perovskite solar cells (PSC) are promising candidates for a new generation of highly efficient thin-film photovoltaic technologies, for which up-scaling is an essential step toward commercialization. In this work, we propose a modified two-step method to deposit the CH3NH3PbI3 (MAPbI3) perovskite film that improves the uniformity, photovoltaic performance, and repeatability of large-area perovskite solar cells. This method is based on the commonly used two-step method, with one additional process involving treating the perovskite film with concentrated methylammonium iodide (MAI) solution. This additional treatment proves helpful for tailoring the residual PbI2 level to an optimal range that is favorable for both optical absorption and inhibition of recombination. Scanning electron microscopy and photoluminescence image analysis further reveal that, compared to the standard two-step and one-step methods, this method is very robust for achieving uniform and pinhole-free large-area films. This is validated by the photovoltaic performance of the prototype devices with an active area of 1 cm2, where we achieved a champion efficiency of ∼14.5% and an average efficiency of ∼13.5%, with excellent reproducibility.

  15. Pd/RGO modified carbon felt cathode for electro-Fenton removing of EDTA-Ni.

    PubMed

    Zhang, Zhen; Zhang, Junya; Ye, Xiaokun; Hu, Yongyou; Chen, Yuancai

    Ethylenediaminetetraacetic acid (EDTA) forms stable complexes with toxic metals such as nickel due to its strong chelation. The electro-Fenton (EF) process using a cathode made from palladium (Pd), reduced graphene oxide (RGO) and carbon felt, fed with air, exhibited high activity and stability for the removal of EDTA-Ni from a 10 mg/L solution. The Pd/RGO catalyst was prepared by one-pot synthesis; scanning electron microscopy and X-ray diffraction analysis indicated that the nanoparticles and RGO were well distributed on the carbon felt, forming a three-dimensional architecture with both large macropores and a mesoporous structure. The cyclic voltammetry results showed that the presence of RGO in Pd/RGO/carbon felt significantly increased the current response of the two-electron reduction of O2 (0.45 V). The key factors influencing the removal efficiency of EDTA-Ni, such as pH, current and Fe2+ concentration, were investigated. Under the optimum conditions, the removal efficiency of EDTA-Ni reached 83.8% after 100 min of EF treatment. Mechanism analysis indicated that the introduction of RGO in Pd/RGO/carbon felt significantly enhanced the electrocatalytic activity by inducing •OH in the EF process; direct H2O2 oxidation still accounted for a large share of the EDTA-Ni removal.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jeff

    This report discusses the UHSP monitoring program, a radioactive material accounting process, and its purpose. It describes a systematic approach to implementing Lean principles and to determining the key requirements and the root causes of variation and disruption that interfere with program efficiency and effectiveness. Preexisting issues within the UHSP are modeled to illustrate the impact that they have on large and extensive systems.

  17. Processes to improve energy efficiency during pumping and aeration of recirculating water in circular tank systems

    USDA-ARS?s Scientific Manuscript database

    Conventional gas transfer technologies for aquaculture systems occupy a large amount of space, require considerable capital investment, and can contribute to high electricity demand. In addition, diffused aeration in a circular tank can interfere with the hydrodynamics of water rotation and the spee...

  18. Data-Informed Language Learning

    ERIC Educational Resources Information Center

    Godwin-Jones, Robert

    2017-01-01

    Although data collection has been used in language learning settings for some time, it is only in recent decades that large corpora have become available, along with efficient tools for their use. Advances in natural language processing (NLP) have enabled rich tagging and annotation of corpus data, essential for their effective use in language…

  19. The microbial composition and metabolic potential of the ovine rumen

    USDA-ARS?s Scientific Manuscript database

    The rumen is efficient at biotransforming nitroaromatic explosive compounds, such as TNT, RDX, and HMX, which have been used widely in US military munitions. These compounds are present in > 4,000 military items, from large bombs to very small igniters. However, their manufacturing processes have g...

  20. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, a large number of documents in cloud storage are transferred by packaging them only after all packets have been received. In this stored procedure from the local transmitter to the server, packing and unpacking consume a lot of time, and transmission efficiency is low as well. A new parallel processing algorithm is proposed to optimize the transmission mode. Following the MapReduce programming model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. Simulation experiments on a Hadoop cloud computing platform show that this algorithm can not only accelerate the file transfer rate, but also shorten the waiting time of the Reducer mechanism. It breaks through the constraints of traditional sequential transmission and reduces the storage coupling, improving transmission efficiency.
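
    A minimal sketch of the Mapper/Reducer split with MPI, using mpi4py (run with, e.g., mpiexec -n 4 python script.py). Word counting here is a stand-in workload of my choosing; the paper applies the same pattern to packing and unpacking file chunks during transfer.

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      chunks = None
      if rank == 0:  # the root splits the input among the mapper ranks
          text = ["a b a", "b c", "a c c", "b b a"]
          chunks = [text[i::size] for i in range(size)]
      chunk = comm.scatter(chunks, root=0)

      counts = {}    # each rank maps its own chunk in parallel
      for line in chunk:
          for word in line.split():
              counts[word] = counts.get(word, 0) + 1

      partials = comm.gather(counts, root=0)
      if rank == 0:  # the root reduces the partial results
          total = {}
          for p in partials:
              for k, v in p.items():
                  total[k] = total.get(k, 0) + v
          print(total)  # {'a': 4, 'b': 4, 'c': 3}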

  1. Using Analytics to Support Petabyte-Scale Science on the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    NASA Earth Exchange (NEX) is a data, supercomputing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to address large-scale challenges in Earth sciences. Analytics within NEX occurs at several levels - data, workflows, science and knowledge. At the data level, we are focusing on collecting and analyzing any information that is relevant to efficient acquisition, processing and management of data at the smallest granularity, such as files or collections. This includes processing and analyzing all local and many external metadata that are relevant to data quality, size, provenance, usage and other attributes. This then helps us better understand usage patterns and improve efficiency of data handling within NEX. When large-scale workflows are executed on NEX, we capture information that is relevant to processing and that can be analyzed in order to improve efficiencies in job scheduling, resource optimization, or data partitioning that would improve processing throughput. At this point we also collect data provenance as well as basic statistics of intermediate and final products created during the workflow execution. These statistics and metrics form basic process and data QA that, when combined with analytics algorithms, helps us identify issues early in the production process. We have already seen impact in some petabyte-scale projects, such as global Landsat processing, where we were able to reduce processing times from days to hours and enhance process monitoring and QA. While the focus so far has been mostly on support of NEX operations, we are also building a web-based infrastructure that enables users to perform direct analytics on science data - such as climate predictions or satellite data. Finally, as one of the main goals of NEX is knowledge acquisition and sharing, we began gathering and organizing information that associates users and projects with data, publications, locations and other attributes that can then be analyzed as a part of the NEX knowledge graph and used to greatly improve advanced search capabilities. Overall, we see data analytics at all levels as an important part of NEX as we are continuously seeking improvements in data management, workflow processing, use of resources, usability and science acceleration.

  2. Highly uniform and vertically aligned SnO2 nanochannel arrays for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Yup; Kang, Jin Soo; Shin, Junyoung; Kim, Jin; Han, Seung-Joo; Park, Jongwoo; Min, Yo-Sep; Ko, Min Jae; Sung, Yung-Eun

    2015-04-01

    Nanostructured electrodes with vertical alignment have been considered ideal structures for electron transport and interfacial contact with redox electrolytes in photovoltaic devices. Here, we report large-scale vertically aligned SnO2 nanochannel arrays with uniform structures, without lateral cracks fabricated by a modified anodic oxidation process. In the modified process, ultrasonication is utilized to avoid formation of partial compact layers and lateral cracks in the SnO2 nanochannel arrays. Building on this breakthrough, we first demonstrate the photovoltaic application of these vertically aligned SnO2 nanochannel arrays. These vertically aligned arrays were directly and successfully applied in quasi-solid state dye-sensitized solar cells (DSSCs) as photoanodes, yielding reasonable conversion efficiency under back-side illumination. In addition, a significantly short process time (330 s) for achieving the optimal thickness (7.0 μm) and direct utilization of the anodized electrodes enable a simple, rapid and low-cost fabrication process. Furthermore, a TiO2 shell layer was coated on the SnO2 nanochannel arrays by the atomic layer deposition (ALD) process for enhancement of dye-loading and prolonging the electron lifetime in the DSSC. Owing to the presence of the ALD TiO2 layer, the short-circuit photocurrent density (Jsc) and conversion efficiency were increased by 20% and 19%, respectively, compared to those of the DSSC without the ALD TiO2 layer. This study provides valuable insight into the development of efficient SnO2-based photoanodes for photovoltaic application by a simple and rapid fabrication process.

  3. Big Data Analysis of Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short, and optimization potential for higher output and increased energy efficiency is not sufficiently used. One way to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observing complex manufacturing processes, so that failures, anomalies and optimization needs are detected automatically. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the infrastructures developed for data acquisition in these application cases are described, as well as the developed algorithms and initial evaluation results.
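
    A toy version of the anomaly-detection idea: learn "normal" behaviour from observation (here just a rolling mean and standard deviation of one signal) and flag deviations. The described assistance system learns relationships across many process signals; this single-signal sketch and its thresholds are illustrative only.

      import numpy as np

      def rolling_anomalies(x, window=50, z=4.0):
          # Flag points more than z standard deviations from the mean of
          # the preceding window of observations.
          flags = np.zeros(len(x), dtype=bool)
          for i in range(window, len(x)):
              ref = x[i - window:i]
              flags[i] = abs(x[i] - ref.mean()) > z * (ref.std() + 1e-9)
          return flags

      rng = np.random.default_rng(1)
      signal = rng.normal(10.0, 0.5, 500)   # a stable process variable
      signal[300] = 20.0                    # injected fault
      print(np.flatnonzero(rolling_anomalies(signal)))  # [300]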

  4. Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.

    PubMed

    Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A

    2017-06-15

    Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes, requiring a large number of evolutionary parameters to be estimated and leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multipartite analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these multipartite parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves more than 14-fold. Our implementation is part of the BEAST code base, a widely used open-source software package for Bayesian phylogenetic inference.
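
    A minimal sketch of an adaptive multivariate random-walk kernel in the spirit described here, tuning the proposal covariance to a correlated toy 2-D Gaussian target. It is not the BEAST implementation, and a production sampler must also ensure diminishing adaptation for strict validity.

      import numpy as np

      rng = np.random.default_rng(0)
      target_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
      prec = np.linalg.inv(target_cov)
      logpi = lambda x: -0.5 * x @ prec @ x   # log density up to a constant

      d, n = 2, 20_000
      x, chain = np.zeros(d), np.empty((n, d))
      prop_cov = 0.1 * np.eye(d)              # fixed kernel during burn-in
      for i in range(n):
          if i >= 1_000 and i % 500 == 0:     # adapt: scaled empirical cov
              prop_cov = (2.38**2 / d) * np.cov(chain[:i].T) + 1e-6 * np.eye(d)
          y = rng.multivariate_normal(x, prop_cov)
          if np.log(rng.random()) < logpi(y) - logpi(x):
              x = y
          chain[i] = x
      print(np.cov(chain[n // 2:].T))         # approaches target_cov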

  5. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is because processing power evolves faster than memory access speed, a gap bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for rendering techniques other than MIPs, and their use for more general image processing tasks could be investigated in the future.
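
    A CPU-side illustration of the memory-access idea: a MIP along one axis computed slab by slab in storage order, so each pass streams contiguous memory instead of striding across the whole volume. The sizes and the slab parameter are illustrative; this is not the paper's implementation.

      import numpy as np

      def mip_slabbed(volume, slab=16):
          # MIP along axis 0, processing `slab` contiguous slices at a time.
          out = np.full(volume.shape[1:], -np.inf, dtype=volume.dtype)
          for z in range(0, volume.shape[0], slab):
              np.maximum(out, volume[z:z + slab].max(axis=0), out=out)
          return out

      vol = np.random.rand(128, 256, 256).astype(np.float32)
      assert np.array_equal(mip_slabbed(vol), vol.max(axis=0))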

  6. Enhancing the absorption and energy transfer process via quantum entanglement

    NASA Astrophysics Data System (ADS)

    Zong, Xiao-Lan; Song, Wei; Zhou, Jian; Yang, Ming; Yu, Long-Bao; Cao, Zhuo-Liang

    2018-07-01

    The quantum network model is widely used to describe the dynamics of excitation energy transfer in photosynthetic complexes. Unlike previous schemes, we explore a specific network model that includes both the light-harvesting and the energy transfer process. Here, we define a rescaled measure to quantify the energy transfer efficiency from the external driving to the sink, with external driving fields used to simulate the energy absorption process. To study the role of the initial state in the light-harvesting and energy transfer process, we take the initial state of the donors to be two-qubit and three-qubit entangled states, respectively. In the two-qubit case, we find that initial entanglement between the donors can help to improve the absorption and energy transfer process for both the near-resonant and large-detuning cases. In the three-qubit case, the transfer efficiency reaches a larger value faster with tripartite entanglement than with bipartite entanglement.

  7. A Technical Survey on Optimization of Processing Geo Distributed Data

    NASA Astrophysics Data System (ADS)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With growing cloud services and technology, a growing number of geographically distributed data centers store large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation and communication, the key ones being time efficiency, cost minimization and utility maximization. This paper describes various optimization methods, such as end-to-end multiphase and G-MR, using techniques such as MapReduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and on reducing computation and communication costs; and SAGE achieves performance improvements in processing geo-distributed data sets.

  8. Multifunctional Dendritic Emitter: Aggregation-Induced Emission Enhanced, Thermally Activated Delayed Fluorescent Material for Solution-Processed Multilayered Organic Light-Emitting Diodes

    PubMed Central

    Matsuoka, Kenichi; Albrecht, Ken; Yamamoto, Kimihisa; Fujita, Katsuhiko

    2017-01-01

    Thermally activated delayed fluorescence (TADF) materials have emerged as promising light sources in third-generation organic light-emitting diodes (OLEDs). Much effort has been invested in the development of small-molecule TADF materials and vacuum-processed efficient TADF-OLEDs. In contrast, only a limited number of solution-processable high-molecular-weight TADF materials, needed for low-cost, large-area, and scalable manufacturing of solution-processed TADF-OLEDs, have been reported so far. In this context, we report benzophenone-core carbazole dendrimers (GnB, n = generation) showing TADF and aggregation-induced emission enhancement (AIEE) properties, along with an alcohol resistance enabling further solution-based lamination of organic materials. The dendritic structure was found to play an important role in both the TADF and AIEE activities in the neat films. By using these multifunctional dendritic emitters as non-doped emissive layers, OLED devices with fully solution-processed organic multilayers were successfully fabricated and achieved a maximum external quantum efficiency of 5.7%. PMID:28139768

  9. Multifunctional Dendritic Emitter: Aggregation-Induced Emission Enhanced, Thermally Activated Delayed Fluorescent Material for Solution-Processed Multilayered Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Matsuoka, Kenichi; Albrecht, Ken; Yamamoto, Kimihisa; Fujita, Katsuhiko

    2017-01-01

    Thermally activated delayed fluorescence (TADF) materials have emerged as promising light sources in third-generation organic light-emitting diodes (OLEDs). Much effort has been invested in the development of small-molecule TADF materials and vacuum-processed efficient TADF-OLEDs. In contrast, only a limited number of solution-processable high-molecular-weight TADF materials, needed for low-cost, large-area, and scalable manufacturing of solution-processed TADF-OLEDs, have been reported so far. In this context, we report benzophenone-core carbazole dendrimers (GnB, n = generation) showing TADF and aggregation-induced emission enhancement (AIEE) properties, along with an alcohol resistance enabling further solution-based lamination of organic materials. The dendritic structure was found to play an important role in both the TADF and AIEE activities in the neat films. By using these multifunctional dendritic emitters as non-doped emissive layers, OLED devices with fully solution-processed organic multilayers were successfully fabricated and achieved a maximum external quantum efficiency of 5.7%.

  10. Novel Binders and Methods for Agglomeration of Ore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. K. Kawatra; T. C. Eisele; K. A. Lewandowski

    2006-03-31

    Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily at a reasonable cost. A primary example is copper heap leaching, where no binders are currently available that can withstand the acidic environment of the process. As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching. The active involvement of our industrial partners will help to ensure rapid commercialization of any agglomeration technologies developed by this project.

  11. Novel Binders and Methods for Agglomeration of Ore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. K. Kawatra; T. C. Eisele; J. A. Gurtler

    2005-09-30

    Many metal extraction operations, such as leaching of copper, leaching of precious metals, and reduction of metal oxides to metal in high-temperature furnaces, require agglomeration of ore to ensure that reactive liquids or gases are evenly distributed throughout the ore being processed. Agglomeration of ore into coarse, porous masses achieves this even distribution of fluids by preventing fine particles from migrating and clogging the spaces and channels between the larger ore particles. Binders are critically necessary to produce agglomerates that will not break down during processing. However, for many important metal extraction processes there are no binders known that will work satisfactorily at a reasonable cost. A primary example is copper heap leaching, where no binders are currently available that can withstand the acidic environment of the process. As a result, operators of many facilities see a large loss of process efficiency due to their inability to take advantage of agglomeration. The large quantities of ore that must be handled in metal extraction processes also mean that the binder must be inexpensive and useful at low dosages to be economical. The acid-resistant binders and agglomeration procedures developed in this project will also be adapted for use in improving the energy efficiency and performance of a broad range of mineral agglomeration applications, particularly heap leaching. The active involvement of our industrial partners will help to ensure rapid commercialization of any agglomeration technologies developed by this project.

  12. Large Spatial Scale Ground Displacement Mapping through the P-SBAS Processing of Sentinel-1 Data on a Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.

    2017-12-01

    Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination worldwide. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global-coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. With SAR data now ubiquitous, the technological and scientific challenge is to maximize the exploitation of this huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data to generate large-spatial-scale deformation time series in an efficient, automatic and systematic way. This DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps, finally computing deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. This experiment confirms the considerable advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large-scale DInSAR analysis. The presented Cloud Computing P-SBAS processing chain can be a valuable tool for developing operational services, available to the EO scientific community, related to hazard monitoring and risk prevention and mitigation.

  13. [Influence of wall polymer and preparation process on the particle size and encapsulation of hemoglobin microcapsules].

    PubMed

    Qiu, Wei; Ma, Guang-Hui; Meng, Fan-Tao; Su, Zhi-Guo

    2004-03-01

    Methoxypoly(ethylene glycol)-block-poly(DL-lactide) (PELA) microcapsules containing bovine hemoglobin (BHb) were prepared by a W/O/W double emulsion-solvent diffusion process. The P50 and Hill coefficient were 3466 Pa and 2.4, respectively, close to the natural bioactivity of bovine hemoglobin. The results suggested that polymer composition has a significant influence on the encapsulation efficiency and particle size of the microcapsules. The encapsulation efficiency could reach 90%, with a particle size of 3-5 μm, when a PELA copolymer containing an MPEG 2000 block was used. The encapsulation efficiency and particle size increased with the concentration of PELA. Increasing the NaCl concentration in the outer aqueous phase increased the encapsulation efficiency and decreased the particle size. As the stabilizer concentration in the outer aqueous phase increased from 10 g/L to 20 g/L, the particle size was reduced while the encapsulation efficiency increased; a further increase of the stabilizer concentration decreased the encapsulation efficiency. Increasing the primary-emulsion stirring rate improved the encapsulation efficiency, although it had little influence on the particle size. The influence of the re-emulsion stirring rate was more complicated, and was not apparent for large re-emulsion volumes. When the wall polymer and primary-emulsion stirring rate were fixed, the encapsulation efficiency decreased as the particle size was reduced.

  14. Analysis of the energy efficiency of an integrated ethanol processor for PEM fuel cell systems

    NASA Astrophysics Data System (ADS)

    Francesconi, Javier A.; Mussati, Miguel C.; Mato, Roberto O.; Aguirre, Pio A.

    The aim of this work is to investigate the energy integration and to determine the maximum efficiency of an ethanol processor for hydrogen production and fuel cell operation. Ethanol, which can be produced from renewable feedstocks or agricultural residues, is an attractive option as feed to a fuel processor. The fuel processor investigated is based on steam reforming, followed by high- and low-temperature shift reactors and preferential oxidation, coupled to a polymeric fuel cell. Applying simulation techniques and using thermodynamic models, the performance of the complete system has been evaluated for a variety of operating conditions and possible reforming reaction pathways. These models involve mass and energy balances, chemical equilibrium and feasible heat transfer conditions (ΔTmin). The main operating variables were determined for those conditions. The endothermic nature of the reformer has a significant effect on the overall system efficiency; the highest energy consumption is demanded by the reforming reactor, the evaporator and the re-heater. To obtain an efficient integration, the heat exchanged between the higher-thermal-level streams leaving the reformer (reforming and combustion gases) and the feed stream should be maximized. Another variable that affects the process efficiency is the water-to-fuel ratio fed to the reformer: large amounts of water imply large heat exchangers and the associated heat losses. A net electric efficiency of around 35% was calculated based on the ethanol HHV. The remaining 65% is accounted for by heat dissipation in the PEMFC cooling system (38%), energy in the flue gases (10%) and irreversibilities in the compression and expansion of gases (the balance, about 17%). In addition, it has been possible to determine the self-sufficient limit conditions, and to analyze the effect on the net efficiency of the input temperatures of the clean-up system reactors, combustion preheating, the expander unit and crude ethanol as fuel.
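
    The reported split is simple bookkeeping relative to the ethanol HHV input; the percentages below are the abstract's, and the residual is the share attributed to compression/expansion irreversibilities.

      hhv_in = 1.00                  # normalized ethanol HHV input
      net_electric = 0.35
      fc_cooling_heat = 0.38
      flue_gas_losses = 0.10
      residual = hhv_in - (net_electric + fc_cooling_heat + flue_gas_losses)
      print(f"irreversibilities and other losses: {residual:.2f}")  # 0.17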

  15. Physical and Mathematical Questions on Signal Processing in Multibase Phase Direction Finders

    NASA Astrophysics Data System (ADS)

    Denisov, V. P.; Dubinin, D. V.; Meshcheryakov, A. A.

    2018-02-01

    Questions of improving the accuracy of multiple-base phase direction finders by rejecting anomalously large errors in the process of resolving measurement ambiguities are considered. A physical rationale is given, and calculated relationships characterizing the efficiency of the proposed solutions are obtained. Results of a computer simulation of a three-base direction finder are analyzed, along with field measurements from a three-base direction finder over near-ground paths.

  16. SamSelect: a sample sequence selection algorithm for quorum planted motif search on large DNA datasets.

    PubMed

    Yu, Qiang; Wei, Dingbang; Huo, Hongwei

    2018-06-18

    Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches, and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms can efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time-consuming for large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. Experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets by selecting high-quality sample sequence sets, so that qPMS algorithms can find motifs quickly in the selected sample set D' rather than taking an infeasibly long time to search the original set D. Our motif discovery method is an approximate algorithm.
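
    To make the problem statement concrete, here is a minimal sketch of the quorum check that defines qPMS, together with a naive random stand-in for sample selection (SamSelect's actual selection criteria are more sophisticated; all names below are illustrative):

    ```python
    import random

    def occurs_with_mismatches(seq: str, motif: str, d: int) -> bool:
        """True if `motif` occurs somewhere in `seq` with at most d mismatches."""
        l = len(motif)
        return any(
            sum(a != b for a, b in zip(seq[i:i + l], motif)) <= d
            for i in range(len(seq) - l + 1)
        )

    def is_qpms_motif(seqs, motif, d, q) -> bool:
        """Quorum check: the motif must occur (<= d mismatches) in >= q*t sequences."""
        hits = sum(occurs_with_mismatches(s, motif, d) for s in seqs)
        return hits >= q * len(seqs)

    def sample_sequences(D, t_sample, seed=0):
        """Naive stand-in for SamSelect: run the (expensive) qPMS search
        on a small subset D' instead of the full dataset D."""
        return random.Random(seed).sample(D, t_sample)
    ```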

  17. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) mounted on a near-space platform is better suited to sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo acquisition time for a large scene. Near-space TOPS-mode SAR (NS-TOPSAR) thus provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. The data is then focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341

  18. The design and application of large area intensive lens array focal spots measurement system

    NASA Astrophysics Data System (ADS)

    Chen, Bingzhen; Yao, Shun; Yang, Guanghui; Dai, Mingchong; Wang, Zhiyong

    2014-12-01

    Concentrating photovoltaic (CPV) modules are getting thinner and using smaller cells nowadays. Correspondingly, large-area intensive lens arrays with smaller unit dimensions and shorter focal lengths are wanted. However, the size and power center of lens-array focal spots usually differ from the design values and are hard to measure, especially over a large area, because the machining errors and material deformation of the lens array are difficult to simulate in the optical design process. The alignment error between solar cells and focal spots in the module assembly process is therefore hard to control, and the efficiency of a CPV module with a thinner body and smaller cells is much lower than expected. In this paper, a design for an automatic measurement system for large-area lens-array focal spots is presented, along with results from its prototype application. In this system, a four-channel parallel light path and its corresponding image capture and processing modules are designed. These modules simulate focal spots under sunlight and capture and process the spot images using charge-coupled devices and a gray-level algorithm, exporting key focal-spot information such as spot size and location. A motion control module based on a grating-scale signal and an interval measurement method are also employed to obtain high-speed, high-precision results on lens arrays as large as 1 m × 0.8 m. The repeatability of the prototype's measurements is ±10 μm at a rate of 90 spots/min. Compared with the original module assembled using coordinates from the optical design, modules assembled using data exported from the prototype are 18% higher in output power, reaching a conversion efficiency of over 31%. This system and its design can be used for focal-spot measurement of plano-convex lens arrays and Fresnel lens arrays, as well as other large-area lens arrays with small focal spots.
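
    A minimal sketch of the kind of gray-level computation such a system might use to locate a spot's power center (the fixed threshold and names are illustrative; the paper does not specify its algorithm in detail):

    ```python
    import numpy as np

    def spot_metrics(frame: np.ndarray, threshold: float):
        """Estimate a focal spot's power center (intensity-weighted centroid)
        and size from one CCD frame; pixels below `threshold` are background."""
        mask = frame >= threshold
        if not mask.any():
            raise ValueError("no pixels above threshold")
        weights = np.where(mask, frame, 0.0).astype(float)
        ys, xs = np.indices(frame.shape)
        total = weights.sum()
        center = ((xs * weights).sum() / total,   # power-center column (x)
                  (ys * weights).sum() / total)   # power-center row (y)
        return center, int(mask.sum())            # (x, y), spot area in pixels
    ```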

  19. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James E. O'Brien

    2010-08-01

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high-temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands, and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a “hydrogen economy.” The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
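
    As a rough illustration of how the quoted 50% can arise (an assumed decomposition, not the report's model): most of the energy reaches the electrolyzer as electricity, so the overall thermal-to-hydrogen efficiency is approximately the product of the reactor power-cycle efficiency and the electrolyzer's electricity-to-hydrogen efficiency.

    ```python
    # Back-of-envelope decomposition of overall thermal-to-hydrogen efficiency.
    # Both factors are assumptions for illustration, not values from the report.
    eta_power_cycle = 0.50    # advanced high-temperature reactor power cycle
    eta_electrolyzer = 1.00   # HHV basis, near-thermoneutral HTE operation
    print(f"thermal-to-H2 ≈ {eta_power_cycle * eta_electrolyzer:.0%}")  # ~50%
    ```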

  20. Fine grained event processing on HPCs with the ATLAS Yoda system

    NASA Astrophysics Data System (ADS)

    Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre

    2015-12-01

    High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine-grained, event-level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring that the application actually support or employ checkpointing. We will present the new Yoda system: its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
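
    A minimal sketch of the MPI master-client dispatch pattern described here, using mpi4py (structure only; Yoda's real dispatcher, Event Service integration and output streaming are far richer, and all names below are illustrative):

    ```python
    from mpi4py import MPI

    comm, rank = MPI.COMM_WORLD, MPI.COMM_WORLD.Get_rank()
    TAG_WORK, TAG_DONE = 1, 2

    def process_event(evt):          # stand-in for the real event payload
        return evt * evt

    if rank == 0:                    # master: dispatch fine-grained work units
        events, results, active = list(range(100)), [], comm.Get_size() - 1
        status = MPI.Status()
        while active:
            res = comm.recv(source=MPI.ANY_SOURCE, status=status)
            if res is not None:      # first message from each worker is "ready"
                results.append(res)
            if events:
                comm.send(events.pop(), dest=status.Get_source(), tag=TAG_WORK)
            else:
                comm.send(None, dest=status.Get_source(), tag=TAG_DONE)
                active -= 1
    else:                            # worker: request work until told to stop
        comm.send(None, dest=0)      # initial "ready" message
        while True:
            status = MPI.Status()
            evt = comm.recv(source=0, status=status)
            if status.Get_tag() == TAG_DONE:
                break
            comm.send(process_event(evt), dest=0)
    ```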

  1. Towards efficient next generation light sources: combined solution processed and evaporated layers for OLEDs

    NASA Astrophysics Data System (ADS)

    Hartmann, D.; Sarfert, W.; Meier, S.; Bolink, H.; García Santamaría, S.; Wecker, J.

    2010-05-01

    Typically, highly efficient OLED device structures are based on a multitude of stacked thin organic layers prepared by thermal evaporation. For lighting applications these efficient device stacks have to be scaled up to large areas, which is clearly challenging in terms of high-throughput processing at low cost. One promising approach to combining cost-efficiency, high throughput and high light output is the combination of solution and evaporation processing. The objective is to substitute as many thermally evaporated layers as possible by solution processing without sacrificing device performance. Hence, starting from the anode side, evaporated layers of an efficient white-light-emitting OLED stack are replaced stepwise by solution-processable polymer and small-molecule layers. In doing so, different solution-processable hole injection layers (polymer HILs) are integrated into small-molecule devices and evaluated with regard to their electro-optical performance as well as their planarizing properties, i.e., the ability to cover ITO spikes, defects and dust particles. Two approaches are followed: in the "single HIL" approach only one polymer HIL is coated, while in the "combined HIL" concept the coated polymer HIL is combined with a thin evaporated HIL. These HIL architectures are studied in unipolar as well as bipolar devices. As a result, the combined-HIL approach provides better control over the hole current, improved device stability, and improved current and power efficiency compared with a single HIL and with purely small-molecule-based OLED stacks. Furthermore, emitting layers based on guest/host small molecules are fabricated from solution and integrated into a white hybrid stack (WHS). Up to three evaporated layers were successfully replaced by solution processing, yielding white-light emission spectra comparable to an evaporated small-molecule reference stack and lifetimes of several hundred hours.

  2. Potentialities of silicon nanowire forests for thermoelectric generation

    NASA Astrophysics Data System (ADS)

    Dimaggio, Elisabetta; Pennelli, Giovanni

    2018-04-01

    Silicon is a material with very good thermoelectric properties in terms of Seebeck coefficient and electrical conductivity. Low thermal conductivity, and hence high thermal-to-electrical conversion efficiency, can be achieved in nanostructures that are smaller than the phonon mean free path but large enough to preserve the electrical conductivity. We demonstrate that it is possible to fabricate a leg of a thermoelectric generator based on large collections of long nanowires placed perpendicular to the two faces of a silicon wafer. The process exploits the metal-assisted etching technique, which is simple, low-cost, and easily applied to large surfaces. Copper can be deposited on both faces by electrodeposition, so that contacts can be provided on top of the nanowires. The thermal conductivity of silicon nanowire forests with more than 10⁷ nanowires mm⁻² has been measured; the result is comparable with that achieved by several groups on devices based on few nanowires. On the basis of the measured parameters, numerical calculations of the efficiency of silicon-based thermoelectric generators are reported, and the potential of these devices for thermal-to-electrical energy conversion is shown. Criteria to improve the conversion efficiency are suggested and described.
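
    For context, the maximum efficiency of a thermoelectric leg is usually estimated from the material figure of merit; the standard textbook relation (not a result specific to this paper) is

    $$\eta_{max} = \frac{T_h - T_c}{T_h}\cdot\frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h},\qquad Z\bar{T} = \frac{S^{2}\sigma}{\kappa}\,\bar{T},$$

    where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity, and $\bar{T}$ the mean of the hot- and cold-side temperatures $T_h$ and $T_c$; lowering $\kappa$, as nanostructuring does, raises $Z\bar{T}$ and hence $\eta_{max}$.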

  3. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.

    PubMed

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2013-11-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support for high-performance spatial queries on large data volumes has become increasingly important in numerous fields, and it requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large-scale spatial applications. In this demonstration, we present Hadoop-GIS, a scalable and high-performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed as spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel with our system demonstration, we explain the system architecture and detail how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we showcase how the system can be used to support two representative real-world use cases: large-scale pathology analytical imaging, and geo-spatial data warehousing.

  4. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms be efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
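
    The paper's partitioning algorithms are more involved, but the quantity at the heart of information-aware partitioning is easy to state; a minimal sketch (all names illustrative) of scoring a candidate cut by the entropy, rather than the raw width, of the traffic crossing it:

    ```python
    import math
    from collections import Counter

    def empirical_entropy_bits(samples) -> float:
        """Shannon entropy (bits/symbol) of observed values on a candidate
        inter-chip signal; low entropy means the traffic compresses well."""
        counts, n = Counter(samples), len(samples)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def cut_cost(cut_edges, traffic) -> float:
        """Weight each cut edge by the entropy of its observed traffic rather
        than by wire count, favoring cuts whose data can be compressed."""
        return sum(empirical_entropy_bits(traffic[e]) for e in cut_edges)
    ```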

  5. Deployment of ERP Systems at Automotive Industries, Security Inspection (Case Study: IRAN KHODRO Automotive Company)

    NASA Astrophysics Data System (ADS)

    Ali, Hatamirad; Hasan, Mehrjerdi

    The automotive industry and its car production process constitute one of the most complex and large-scale production processes. Today, information technology (IT) and ERP systems underpin a large portion of these production processes. Without an integrated system such as ERP, production and supply-chain processes become entangled. ERP systems, the latest generation of MRP systems, simplify the production and sales processes of these industries, and this has been a major factor in their development. Today many large-scale companies are developing and deploying ERP systems. ERP systems facilitate many organizational processes and help the organization increase efficiency. Security is a very important part of an organization's ERP strategy; because of their integration and reach, security matters more for ERP systems than for local legacy systems, and disregarding it can determine the success or failure of such systems. IRANKHODRO is the biggest automotive factory in the Middle East, with an annual production of over 600,000 cars. This paper presents the ERP security deployment experience at the "IRANKHODRO Company", which, by launching ERP systems, has recently taken a major step forward.

  6. 10 CFR 431.96 - Uniform test method for the measurement of energy efficiency of small, large, and very large...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL... measurement of energy efficiency of small, large, and very large commercial package air conditioning and... section contains test procedures for measuring, pursuant to EPCA, the energy efficiency of any small...

  7. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Centre (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increases and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that require dedicated HPC solutions. The chosen application uses a wide range of common signal processing methods, including various IIR filter designs, amplitude and phase correlation, computing the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific to seismology, like rotation of seismic traces, are used. Efficient implementation of all these methods on GPU-accelerated systems presents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce the intensity of data input and output. In our contribution we will explain the software architecture as well as the principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
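
    A compact sketch of two of the core operations mentioned (frequency-domain cross-correlation and the analytic-signal envelope), in NumPy/SciPy rather than the authors' C++/CUDA framework; whitening, filtering and stacking are omitted:

    ```python
    import numpy as np
    from scipy.signal import hilbert  # analytic signal, as in the processing chain

    def noise_crosscorrelation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Frequency-domain cross-correlation of two equal-length noise records,
        the core operation of ambient-noise interferometry."""
        n = 2 * len(a) - 1                       # zero-pad to avoid wrap-around
        A, B = np.fft.rfft(a, n), np.fft.rfft(b, n)
        return np.fft.irfft(A * np.conj(B), n)

    def envelope(trace: np.ndarray) -> np.ndarray:
        """Amplitude envelope via the analytic signal."""
        return np.abs(hilbert(trace))
    ```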

  8. Impact of grain boundaries on efficiency and stability of organic-inorganic trihalide perovskites

    DOE PAGES

    Chu, Zhaodong; Yang, Mengjin; Schulz, Philip; ...

    2017-12-20

    Organic-inorganic perovskite solar cells have attracted tremendous attention because of their remarkably high power conversion efficiencies. To further improve device performance, it is imperative to obtain fundamental understanding of the photo-response and long-term stability down to the microscopic level. Here, we report quantitative nanoscale photoconductivity imaging on two methylammonium lead triiodide thin films with different efficiencies by light-stimulated microwave impedance microscopy. The microwave signals are largely uniform across grains and grain boundaries, suggesting that microstructures do not lead to strong spatial variations of the intrinsic photo-response. In contrast, the measured photoconductivity and lifetime are strongly affected by bulk properties such as the sample crystallinity. As visualized by the spatial evolution of local photoconductivity, the degradation process begins with the disintegration of grains rather than nucleation and propagation from visible boundaries between grains. In conclusion, our findings provide insights to improve the electro-optical properties of perovskite thin films towards large-scale commercialization.

  9. Aggregate formation affects ultrasonic disruption of microalgal cells.

    PubMed

    Wang, Wei; Lee, Duu-Jong; Lai, Juin-Yih

    2015-12-01

    Ultrasonication is a cell disruption process of low energy efficiency. This study dosed K(+), Ca(2+) and Al(3+) to Chlorella vulgaris cultured in Bold's Basal Medium at 25°C and measured the degree of cell disruption under ultrasonication. Adding these metal ions made the cell surfaces less negatively charged, and with the latter two ions large, compact cell aggregates were formed. The degree of cell disruption followed the order: control = K(+) > Ca(2+) > Al(3+) samples. The surface charges of cells and microbubbles had minimal effect on the number of microbubbles in the proximity of the microalgal cells. Conversely, cell aggregates of large size and compact interior resist cell disruption under ultrasonication. Staining tests revealed high diffusional resistance of stains into the aggregate interior. Microbubbles may not be effectively generated and collapsed inside the compact aggregates, hence leading to low cell disruption efficiencies. Effective coagulation/flocculation in cell harvesting may therefore adversely affect subsequent cell disruption efficiency. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Continuous Flow Polymer Synthesis toward Reproducible Large-Scale Production for Efficient Bulk Heterojunction Organic Solar Cells.

    PubMed

    Pirotte, Geert; Kesters, Jurgen; Verstappen, Pieter; Govaerts, Sanne; Manca, Jean; Lutsen, Laurence; Vanderzande, Dirk; Maes, Wouter

    2015-10-12

    Organic photovoltaics (OPV) have attracted great interest as a solar cell technology with appealing mechanical, aesthetic, and economies-of-scale features. To drive OPV toward economic viability, low-cost, large-scale module production has to be realized in combination with increased availability of top-quality material and minimal batch-to-batch variation. To this end, continuous flow chemistry can serve as a powerful tool. In this contribution, a flow protocol is optimized for the high-performance benzodithiophene-thienopyrroledione copolymer PBDTTPD, and the material quality is probed through systematic solar-cell evaluation. A stepwise approach is adopted to turn the batch process into a reproducible and scalable continuous flow procedure. Solar cell devices fabricated using the obtained polymer batches deliver an average power conversion efficiency of 7.2%. Upon incorporation of an ionic polythiophene-based cathodic interlayer, the photovoltaic performance could be enhanced to a maximum efficiency of 9.1%. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Advances in Projection Moire Interferometry Development for Large Wind Tunnel Applications

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A.; Soto, Hector L.; South, Bruce W.; Bartram, Scott M.

    1999-01-01

    An instrument development program aimed at using Projection Moire Interferometry (PMI) for acquiring model deformation measurements in large wind tunnels was begun at NASA Langley Research Center in 1996. Various improvements to the initial prototype PMI systems have been made throughout this development effort. This paper documents several of the most significant improvements to the optical hardware and image processing software, and addresses system implementation issues for large wind tunnel applications. The improvements have increased both measurement accuracy and instrument efficiency, promoting the routine use of PMI for model deformation measurements in production wind tunnel tests.

  12. Research on TCP/IP network communication based on Node.js

    NASA Astrophysics Data System (ADS)

    Huang, Jing; Cai, Lixiong

    2018-04-01

    In the face of big data, long-lived connections and high concurrency, TCP/IP network communication based on a blocking, multi-threaded service model runs into performance bottlenecks. This paper presents an approach to TCP/IP network communication based on Node.js. After analyzing the characteristics of the Node.js architecture and its asynchronous, non-blocking I/O model, the source of its efficiency is discussed; the TCP/IP network communication model is then compared and analyzed to explain why the TCP/IP protocol stack is so widely used in network communication. Finally, to handle the large data volumes and high concurrency of a large-scale grape-growing environment-monitoring application, a TCP server based on Node.js is designed. The results show that the example runs stably and efficiently.
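
    The paper's server is written in Node.js; for consistency with the other examples in this collection, here is the same asynchronous, non-blocking pattern sketched with Python's asyncio (a single-threaded event loop multiplexing many monitoring connections; port and protocol are illustrative):

    ```python
    import asyncio

    async def handle_sensor(reader: asyncio.StreamReader,
                            writer: asyncio.StreamWriter) -> None:
        # Each connection is a coroutine multiplexed on one event loop --
        # the same single-threaded, non-blocking model Node.js uses.
        while (data := await reader.readline()):
            writer.write(b"ACK " + data)     # acknowledge each monitoring record
            await writer.drain()
        writer.close()

    async def main() -> None:
        server = await asyncio.start_server(handle_sensor, "0.0.0.0", 9000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())
    ```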

  13. Influence of Hybrid Perovskite Fabrication Methods on Film Formation, Electronic Structure, and Solar Cell Performance

    PubMed Central

    Schnier, Tobias; Emara, Jennifer; Olthof, Selina; Meerholz, Klaus

    2017-01-01

    Hybrid organic/inorganic halide perovskites have lately been a topic of great interest in the field of solar cell applications, with the potential to achieve device efficiencies exceeding those of other thin-film device technologies. Yet large variations in device efficiency and basic physical properties are reported, due to unintentional variations during film processing that have not been sufficiently investigated so far. We therefore conducted an extensive study of the morphology and electronic structure of a large number of CH3NH3PbI3 perovskite films, in which we show how the preparation method as well as the mixing ratio of the educts methylammonium iodide and lead(II) iodide impact properties such as film formation, crystal structure, density of states, energy levels, and ultimately solar cell performance. PMID:28287555

  14. A new implementation of full resolution SBAS-DInSAR processing chain for the effective monitoring of structures and infrastructures

    NASA Astrophysics Data System (ADS)

    Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele

    2017-04-01

    The advanced DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm has largely demonstrated its effectiveness in multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both regional (low-resolution analysis) and local (full-resolution analysis) spatial scales, it provides insight into the spatial and temporal patterns of localized displacements of single buildings and infrastructures over extended urban areas, with a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with the development of new SAR satellite missions, characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to huge DInSAR data stacks that must be efficiently handled, processed and archived, with a strong impact on both data storage and the computational requirements for generating full-resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full-resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures over very large urban areas and relying on parallelization strategies of different granularity. Image-level granularity is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally very heavy processing steps, graphics processing units (GPUs) are exploited for processing blocks that work on a pixel-by-pixel basis, which required substantial modifications to some key parts of the sequential full-resolution SBAS-DInSAR processing chain. GPU processing efficiently exploits parallel processing architectures (such as CUDA) to increase computing performance, optimizing the available GPU memory and reducing input/output operations on the GPU and the overall processing time of specific blocks with respect to the corresponding sequential implementation, which is particularly critical for huge DInSAR datasets. Moreover, to efficiently handle the massive amount of DInSAR measurements provided by the new-generation SAR constellations (CSK and Sentinel-1), we designed a strategy for the robust assimilation of the full-resolution SBAS-DInSAR results into the web-based GeoNode platform of the Spatial Data Infrastructure, thus allowing efficient management, analysis and integration of the interferometric results with different data sources.
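
    The chain's specifics are beyond an abstract, but the enabling observation, that per-pixel steps over independent blocks parallelize cleanly, can be sketched in a few lines (illustrative Python with a placeholder computation, not the actual SBAS code):

    ```python
    from multiprocessing import Pool
    import numpy as np

    def process_block(block: np.ndarray) -> np.ndarray:
        """Placeholder for a per-pixel SBAS step (e.g., time-series inversion
        of one pixel's interferometric phases); blocks are independent, which
        is what makes both multi-process and GPU offload effective."""
        return block - block.mean(axis=0)          # dummy pixel-wise arithmetic

    def run_chain_step(stack: np.ndarray, n_blocks: int = 8) -> np.ndarray:
        """stack: (n_interferograms, n_pixels) array, split along pixels."""
        blocks = np.array_split(stack, n_blocks, axis=1)
        with Pool() as pool:
            return np.concatenate(pool.map(process_block, blocks), axis=1)
    ```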

  15. Low-Temperature Forming of Beta Titanium Alloys

    NASA Technical Reports Server (NTRS)

    Kaneko, R. S.; Woods, C. A.

    1983-01-01

    Low cost methods for titanium structural fabrication using advanced cold-formable beta alloys were investigated for application in a Mach 2.7 supersonic cruise vehicle. This work focuses on improving processing and structural efficiencies as compared with standard hot formed and riveted construction of alpha-beta alloy sheet structure. Mechanical property data and manufacturing parameters were developed for cold forming, brazing, welding, and processing Ti-15V-3Cr-3Sn-3Al sheet, and Ti-3Al-8V-6Cr-4Zr on a more limited basis. Cost and structural benefits were assessed through the fabrication and evaluation of large structural panels. The feasibility of increasing structural efficiency of beta titanium structure by selective reinforcement with metal matrix composite was also explored.

  16. Power processing and control requirements of dispersed solar thermal electric generation systems

    NASA Technical Reports Server (NTRS)

    Das, R. L.

    1980-01-01

    Power Processing and Control requirements of Dispersed Receiver Solar Thermal Electric Generation Systems are presented. Kinematic Stirling Engines, Brayton Engines and Rankine Engines are considered as prime movers. Various types of generators are considered for ac and dc link generations. It is found that ac-ac Power Conversion is not suitable for implementation at this time. It is also found that ac-dc-ac Power Conversion with a large central inverter is more efficient than ac-dc-ac Power Conversion using small dispersed inverters. Ac-link solar thermal electric plants face potential stability and synchronization problems. Research and development efforts are needed in improving component performance characteristics and generation efficiency to make Solar Thermal Electric Generation economically attractive.

  17. Efficient organic solar cells using copper(I) iodide (CuI) hole transport layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ying; Department of Physics and Centre for Plastic Electronics, Blackett Laboratory, Imperial College London, London SW7 2AZ; Yaacobi-Gross, Nir

    We report the fabrication of high power conversion efficiency (PCE) polymer/fullerene bulk heterojunction (BHJ) photovoltaic cells using solution-processed copper(I) iodide (CuI) as the hole transport layer (HTL). Our devices exhibit a PCE value of ∼5.5%, equivalent to that obtained for control devices based on the commonly used conductive polymer poly(3,4-ethylenedioxythiophene):polystyrenesulfonate as the HTL. Inverted cells with PCE >3% were also demonstrated using solution-processed metal oxide electron transport layers, with a CuI HTL evaporated on top of the BHJ. The high optical transparency and suitable energetics of CuI make it attractive for application in a range of inexpensive large-area optoelectronic devices.

  18. Analysis of area-time efficiency for an integrated focal plane architecture

    NASA Astrophysics Data System (ADS)

    Robinson, William H.; Wills, D. Scott

    2003-05-01

    Monolithic integration of photodetectors, analog-to-digital converters, digital processing, and data storage can improve the performance and efficiency of next-generation portable image products. Our approach combines these components into a single processing element, which is tiled to form a SIMD focal plane processor array with the capability to execute early image applications such as median filtering (noise removal), convolution (smoothing), and inside edge detection (segmentation). Digitizing and processing a pixel at the detection site presents new design challenges, including the allocation of silicon resources. This research investigates the area-time (AT²) efficiency obtained by adjusting the number of Pixels-per-Processing Element (PPE). Area calculations are based upon hardware implementations of components scaled for 250 nm or 120 nm technology. The total execution time is calculated from the sequential execution of each application on a generic focal plane architectural simulator. For a Quad-CIF system resolution (176×144), results show that 1 PPE provides the optimal area-time efficiency (5.7 μs²·mm² for 250 nm, 1.7 μs²·mm² for 120 nm) but requires a large silicon chip (2072 mm² for 250 nm, 614 mm² for 120 nm). Increasing the PPE to 4 or 16 can reduce silicon area by 48% and 60% respectively (120 nm technology) while maintaining performance within real-time constraints.

  19. A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm

    NASA Astrophysics Data System (ADS)

    Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.

    2016-10-01

    Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations serially, a hardware implementation exploiting the inherent parallelism of single-purpose processors can prove much more efficient in both speed and energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation is attractive both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail, and results of our simulations are demonstrated.
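
    For readers unfamiliar with the bat metaheuristic, a deliberately minimal software version is sketched below (Yang-style frequency/velocity updates with simplified loudness handling; this is not the paper's CA/VHDL formulation):

    ```python
    import numpy as np

    def bat_optimize(obj, dim, n_bats=20, iters=200,
                     fmin=0.0, fmax=2.0, alpha=0.9, seed=0):
        """Minimal bat algorithm minimizing obj over [-5, 5]^dim (illustrative)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_bats, dim))      # positions
        v = np.zeros((n_bats, dim))                # velocities
        loud = np.ones(n_bats)                     # loudness A_i
        fitness = np.apply_along_axis(obj, 1, x)
        best = x[fitness.argmin()].copy()
        for _ in range(iters):
            freq = fmin + (fmax - fmin) * rng.random(n_bats)  # echolocation freq
            v += (x - best) * freq[:, None]
            cand = x + v
            # local random walk around the current best for half the bats
            walk = rng.random(n_bats) > 0.5
            cand[walk] = best + 0.01 * rng.standard_normal((walk.sum(), dim))
            cand_fit = np.apply_along_axis(obj, 1, cand)
            accept = (cand_fit < fitness) & (rng.random(n_bats) < loud)
            x[accept], fitness[accept] = cand[accept], cand_fit[accept]
            loud[accept] *= alpha                  # quieter after each success
            best = x[fitness.argmin()].copy()
        return best, fitness.min()

    # Example: minimize the sphere function
    print(bat_optimize(lambda z: float((z ** 2).sum()), dim=4))
    ```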

  20. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems basically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) where noise, usually with different characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise filtering or vectorial (3D) filtering. The second approach has shown higher efficiency when there is substantial correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images combined into the 3D data array influences filtering efficiency, and whether the observed tendencies can be exploited in processing images with a rather large number of channels.
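
    A minimal sketch of the vectorial idea, hard-thresholding in a 3D DCT domain, using SciPy (practical DCT filters operate on small sliding blocks; the k·σ threshold below is common DCT-filter practice, not this paper's parameterization):

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct3_denoise(cube: np.ndarray, sigma: float, k: float = 2.7) -> np.ndarray:
        """Hard-threshold denoising of a (channels, H, W) block in the 3D DCT
        domain -- the vectorial alternative to filtering each channel alone."""
        coeffs = dctn(cube, norm="ortho")
        dc = coeffs.flat[0]                       # preserve the mean (DC) term
        coeffs[np.abs(coeffs) < k * sigma] = 0.0  # suppress noise-level coefficients
        coeffs.flat[0] = dc
        return idctn(coeffs, norm="ortho")
    ```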

  1. Cycle time and cost reduction in large-size optics production

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Shorey, Aric; Courtney, Tom

    2005-09-01

    Optical fabrication process steps have remained largely unchanged for decades. Raw glass blanks are rough-machined, generated to near-net shape, ground with loose abrasive or fine bound diamond, and then polished. This set of processes is sequential, and each operation removes the damage and micro-cracking induced by the prior step. One of the long-lead aspects of this process has been glass polishing, driven primarily by the need to remove volumes of glass that are large relative to the polishing removal rate in order to ensure complete damage removal. The secondary time driver has been poor convergence to final figure and the corresponding polish-metrology cycles. The overall cycle time and resultant cost due to labor, equipment utilization and shop efficiency are increased, often significantly, when the optical prescription is aspheric. In addition to the long polishing cycle times, the duration of polishing is often very difficult to predict, given that current polishing processes are not deterministic. This paper describes a novel approach to large-optics finishing, relying on several innovative technologies that are presented and illustrated through a variety of examples. The cycle-time reductions enabled by this approach promise significant cost and lead-time reductions for large optics, and the corresponding increases in throughput will require less capital expenditure per square meter of optic produced. The process, comparative cycle-time estimates and preliminary results are discussed.

  2. PET-Tool: a software suite for comprehensive processing and managing of Paired-End diTag (PET) sequence data.

    PubMed

    Chiu, Kuo Ping; Wong, Chee-Hong; Chen, Qiongyu; Ariyaratne, Pramila; Ooi, Hong Sain; Wei, Chia-Lin; Sung, Wing-Kin Ken; Ruan, Yijun

    2006-08-25

    We recently developed the Paired End diTag (PET) strategy for efficient characterization of mammalian transcriptomes and genomes. The paired-end nature of short PET sequences derived from long DNA fragments raised a new set of bioinformatics challenges, including how to extract PETs from raw sequence reads and how to map PETs correctly yet efficiently to reference genome sequences. To accommodate and streamline data analysis of the large volumes of PET sequences generated from each PET experiment, an automated PET data processing pipeline is desirable. We designed an integrated computation program package, PET-Tool, to automatically process PET sequences and map them to the genome sequences. The Tool was implemented as a web-based application composed of four modules: the Extractor module for PET extraction; the Examiner module for analytic evaluation of PET sequence quality; the Mapper module for locating PET sequences in the genome sequences; and the Project Manager module for data organization. The performance of PET-Tool was evaluated through the analyses of 2.7 million PET sequences. It was demonstrated that PET-Tool is accurate and efficient in extracting PET sequences and removing artifacts from large-volume datasets. Using optimized mapping criteria, over 70% of quality PET sequences were mapped specifically to the genome sequences. With a 2.4 GHz LINUX machine, it takes approximately six hours to process one million PETs from extraction to mapping. Its speed, accuracy, and comprehensiveness have proved that PET-Tool is an important and useful component in PET experiments, and it can be extended to accommodate other related analyses of paired-end sequences. The Tool also provides user-friendly functions for data quality checking and a system for multi-layer data management.

  3. Efficient and Scalable Cross-Matching of (Very) Large Catalogs

    NASA Astrophysics Data System (ADS)

    Pineau, F.-X.; Boch, T.; Derriere, S.

    2011-07-01

    Whether it be for building multi-wavelength datasets from independent surveys, studying changes in object luminosities, or detecting moving objects (stellar proper motions, asteroids), cross-catalog matching is a technique widely used in astronomy. The need for efficient, reliable and scalable cross-catalog matching is becoming even more pressing with forthcoming projects which will produce huge catalogs in which astronomers will dig for rare objects, perform statistical analysis and classification, or carry out real-time transient detection. We have developed a formalism and the corresponding technical framework to address the challenge of fast cross-catalog matching. Our formalism supports more than simple nearest-neighbor search, and handles elliptical positional errors. Scalability is improved by partitioning the sky using the HEALPix scheme and processing each sky cell independently. The use of multi-threaded two-dimensional kd-trees adapted to equatorial coordinates enables efficient neighbor search. The whole process can run on a single computer, but could also use clusters of machines to cross-match future very large surveys such as GAIA or LSST in reasonable times. We already achieve performance such that 2MASS (~470M sources) and SDSS DR7 (~350M sources) can be matched on a single machine in less than 10 minutes. We aim at providing astronomers with a catalog cross-matching service, available online and leveraging the catalogs present in the VizieR database. This service will allow users both to access pre-computed cross-matches across some very large catalogs and to run customized cross-matching operations. It will also support VO protocols for synchronous or asynchronous queries.
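
    A minimal sketch of the core neighbor search, here a 3D kd-tree over unit vectors, which sidesteps the spherical-geometry pitfalls of a naive 2D RA/Dec tree (the paper's HEALPix partitioning, elliptical errors and multi-threading are omitted):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec: float):
        """For each source in catalog 1, list catalog-2 sources within the
        given angular radius (degrees in, indices out)."""
        def unit(ra, dec):
            ra, dec = np.radians(ra), np.radians(dec)
            return np.column_stack([np.cos(dec) * np.cos(ra),
                                    np.cos(dec) * np.sin(ra),
                                    np.sin(dec)])
        # chord length corresponding to the angular search radius
        r = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
        tree = cKDTree(unit(ra2, dec2))
        return tree.query_ball_point(unit(ra1, dec1), r)   # neighbor lists
    ```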

  4. Towards fully spray coated organic light emitting devices

    NASA Astrophysics Data System (ADS)

    Gilissen, Koen; Stryckers, Jeroen; Manca, Jean; Deferme, Wim

    2014-10-01

    Pi-conjugated polymer light-emitting devices have the potential to be the next generation of solid-state lighting. To achieve this goal, a low-cost, efficient and large-area production process is essential. Polymer-based light-emitting devices are generally deposited using techniques based on solution processing, e.g. spin coating or inkjet printing. These techniques are not well suited for cost-effective, high-throughput, large-area mass production of organic devices. Ultrasonic spray deposition, however, is fast, efficient and roll-to-roll compatible, and can easily be scaled up for the production of large-area polymer light-emitting devices (PLEDs). This deposition technique has already been employed successfully to produce organic photovoltaic (OPV) devices [1]; recently, the electron-blocking layer PEDOT:PSS [2] and the metal top contact [3] have been successfully spray-coated as parts of the organic photovoltaic device stack. In this study, the effects of ultrasonic spray deposition on polymer light-emitting devices are investigated. For the first time, to our knowledge, spray coating of the active layer in a PLED is demonstrated. Different solvents are tested to achieve the best possible sprayable dispersion. The active-layer morphology is characterized and optimized to produce uniform films of optimal thickness. Furthermore, these ultrasonically spray-coated films are incorporated into the polymer light-emitting device stack to investigate the device characteristics and efficiency. Our results show that, after careful optimization of the active layer, ultrasonic spray coating is a prime candidate as a deposition technique for mass production of PLEDs.

  5. Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Wimmer, Michael

    2016-02-01

    With the enormous advances in acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing step and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method that breaks these dependencies by introducing efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from the image cameras and then perform a graph-cut-based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify, for each view ray, which depth map contains the closest ray-surface intersection and (2) to compute this intersection point efficiently. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.

  6. Eco-efficiency improvements in industrial water-service systems: assessing options with stakeholders.

    PubMed

    Levidow, Les; Lindgaard-Jørgensen, Palle; Nilsson, Asa; Skenhall, Sara Alongi; Assimacopoulos, Dionysis

    2014-01-01

    The well-known eco-efficiency concept helps to assess the economic value and resource burdens of potential improvements by comparison with the baseline situation. But eco-efficiency assessments have generally focused on a specific site, while neglecting wider effects, for example, through interactions between water users and wastewater treatment (WWT) providers. To address the methodological gap, the EcoWater project has developed a method and online tools for meso-level analysis of the entire water-service value chain. This study investigated improvement options in two large manufacturing companies which have significant potential for eco-efficiency gains. They have been considering investment in extra processes which can lower resource burdens from inputs and wastewater, as well as internalising WWT processes. In developing its methodology, the EcoWater project obtained the necessary information from many agents, involved them in the meso-level assessment and facilitated their discussion on alternative options. Prior discussions with stakeholders stimulated their attendance at a workshop to discuss a comparative eco-efficiency assessment for whole-system improvement. Stakeholders expressed interest in jointly extending the EcoWater method to more options and in discussing investment strategies. In such ways, optimal solutions will depend on stakeholders overcoming fragmentation by sharing responsibility and knowledge.

  7. The Large Hadron Collider (LHC): The Energy Frontier

    NASA Astrophysics Data System (ADS)

    Brianti, Giorgio; Jenni, Peter

    The following sections are included: * Introduction * Superconducting Magnets: Powerful, Precise, Plentiful * LHC Cryogenics: Quantum Fluids at Work * Current Leads: High Temperature Superconductors to the Fore * A Pumping Vacuum Chamber: Ultimate Simplicity * Vertex Detectors at LHC: In Search of Beauty * Large Silicon Trackers: Fast, Precise, Efficient * Two Approaches to High Resolution Electromagnetic Calorimetry * Multigap Resistive Plate Chamber: Chronometry of Particles * The LHCb RICH: The Lord of the Cherenkov Rings * Signal Processing: Taming the LHC Data Avalanche * Giant Magnets for Giant Detectors

  8. Commentary: Environmental nanophotonics and energy

    NASA Astrophysics Data System (ADS)

    Smith, Geoff B.

    2011-01-01

    The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimal management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems can be managed and avoided, a process long established in nature.

  9. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system that has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  10. Enhanced light out-coupling efficiency of organic light-emitting diodes with an extremely low haze by plasma treated nanoscale corrugation

    NASA Astrophysics Data System (ADS)

    Hwang, Ju Hyun; Lee, Hyun Jun; Shim, Yong Sub; Park, Cheol Hwee; Jung, Sun-Gyu; Kim, Kyu Nyun; Park, Young Wook; Ju, Byeong-Kwon

    2015-01-01

    Extremely low-haze light extraction from organic light-emitting diodes (OLEDs) was achieved by utilizing nanoscale corrugation, which was simply fabricated with plasma treatment and sonication. The haze of the nanoscale corrugation for light extraction (NCLE) corresponds to 0.21% for visible wavelengths, which is comparable to that of bare glass. The OLEDs with NCLE showed enhancements of 34.19% in current efficiency and 35.75% in power efficiency. Furthermore, the OLEDs with NCLE exhibited angle-stable electroluminescence (EL) spectra for different viewing angles, with no change in the full width at half maximum (FWHM) and peak wavelength. The flexibility of the polymer used for the NCLE and the plasma treatment process indicates that the NCLE can be applied to large and flexible OLED displays.

  11. Spaceport Command and Control System Software Development

    NASA Technical Reports Server (NTRS)

    Glasser, Abraham

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and the Space Launch System, the next-generation manned rocket currently in development. This large system requires a large amount of intensive testing to properly measure its capabilities. Automating the test procedures would save the project money in labor costs and make the testing process more efficient. Therefore, the Exploration Systems Division (formerly the Electrical Engineering Division) at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as to innovate upon the current automation process.

  12. Large-scale production of lipoplexes with long shelf-life.

    PubMed

    Clement, Jule; Kiefer, Karin; Kimpfler, Andrea; Garidel, Patrick; Peschka-Süss, Regine

    2005-01-01

    The instability of lipoplex formulations is a major obstacle to overcome before their commercial application in gene therapy. In this study, a continuous mixing technique for the large-scale preparation of lipoplexes followed by lyophilisation for increased stability and shelf-life has been developed. Lipoplexes were analysed for transfection efficiency and cytotoxicity in human aorta smooth muscle cells (HASMC) and a rat smooth muscle cell line (A-10 SMC). Homogeneity of lipid/DNA-products was investigated by photon correlation spectroscopy (PCS) and cryotransmission electron microscopy (cryo-TEM). Studies have been undertaken with DAC-30, a composition of 3beta-[N-(N,N'-dimethylaminoethane)-carbamoyl]-cholesterol (DAC-Chol) and dioleylphosphatidylethanolamine (DOPE) and a green fluorescent protein (GFP) expressing marker plasmid. A continuous mixing technique was compared to the small-scale preparation of lipoplexes by pipetting. Individual steps of the continuous mixing process were evaluated in order to optimise the manufacturing technique: lipid/plasmid ratio, composition of transfection medium, pre-treatment of the lipid, size of the mixing device, mixing procedure and the influence of the lyophilisation process. It could be shown that the method developed for production of lipoplexes on a large scale under sterile conditions led to lipoplexes with good transfection efficiencies combined with low cytotoxicity, improved characteristics and long shelf-life.

  13. Investigation of Recombination Processes In A Magnetized Plasma

    NASA Technical Reports Server (NTRS)

    Chavers, Greg; Chang-Diaz, Franklin; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Interplanetary travel requires propulsion systems that can provide high specific impulse (Isp), while also having sufficient thrust to rapidly accelerate large payloads. One such propulsion system is the Variable Specific Impulse Magneto-plasma Rocket (VASIMR), which creates, heats, and exhausts plasma to provide variable thrust and Isp, optimally meeting the mission requirements. A large fraction of the energy to create the plasma is frozen in the exhaust in the form of ionization energy. This loss mechanism is common to all electromagnetic plasma thrusters and has an impact on their efficiency. When the device operates at high Isp, where the exhaust kinetic energy is high compared to the ionization energy, the frozen flow component is of little consequence; however, at low Isp, the effect of the frozen flow may be important. If some of this energy could be recovered through recombination processes, and re-injected as neutral kinetic energy, the efficiency of VASIMR, in its low Isp/high thrust mode may be improved. In this operating regime, the ionization energy is a large portion of the total plasma energy. An experiment is being conducted to investigate the possibility of recovering some of the energy used to create the plasma. This presentation will cover the progress and status of the experiment involving surface recombination of the plasma.
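
    To make the frozen-flow argument concrete (the notation and numbers below are ours, for illustration only), the frozen-flow efficiency of the exhaust can be written as

        \eta_{\mathrm{ff}} \;=\; \frac{\tfrac{1}{2} m_i v_{\mathrm{ex}}^{2}}{\tfrac{1}{2} m_i v_{\mathrm{ex}}^{2} + E_{\mathrm{ion}}},
        \qquad v_{\mathrm{ex}} \simeq g_0\, I_{\mathrm{sp}}

    For a hydrogen plasma (E_ion ≈ 13.6 eV), Isp ≈ 3,000 s gives a kinetic energy of only ≈ 4.5 eV per ion, so η_ff ≈ 0.25, whereas Isp ≈ 30,000 s gives ≈ 450 eV and η_ff ≈ 0.97; this is why recovering ionization energy through recombination matters chiefly in the low-Isp/high-thrust mode.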

  14. Semipermeability Evolution of Wakkanai Mudstones During Isotropic Compression

    NASA Astrophysics Data System (ADS)

    Takeda, M.; Manaka, M.

    2015-12-01

    Precise identification of the major processes that influence a groundwater flow system is of fundamental importance for the performance assessment of subsurface waste disposal. In the characterization of groundwater flow systems, gravity- and pressure-driven flows have conventionally been assumed to be the dominant processes. However, recent studies have suggested that argillites can act as semipermeable membranes and can cause chemically driven flow, i.e., chemical osmosis, under salinity gradients, which may generate erratic pore pressures in argillaceous formations. In order to identify whether chemical osmosis is involved in erratic pore pressure generation in argillaceous formations, it is essential to measure the semipermeability of formation media; however, in measurements of semipermeability, little consideration has been given to the stresses that the formation media would have experienced in past geologic processes. This study investigates the influence of stress history on the semipermeability of an argillite by an experimental approach. A series of chemical osmosis experiments was performed on Wakkanai mudstones to measure the evolution of semipermeability during loading and unloading confining pressure cycles. The osmotic efficiency, which represents the semipermeability, was estimated at each confining pressure. The results show that the osmotic efficiency increases almost linearly with increasing confining pressure; however, the increased osmotic efficiency does not recover during unloading unless the confining pressure is almost fully relieved. The observed unrecoverable change in osmotic efficiency may have an important implication for the evaluation of chemical osmosis in argillaceous formations that have been exposed to large stresses in past geologic processes. If the osmotic efficiency increased by past stress remains unchanged to date, the osmotic efficiency should be measured at the past highest stress rather than the current in-situ stress. Otherwise, the effect of chemical osmosis on pore pressure generation would be underestimated.

  15. The Hungtsaiping landslide: A kinematic model based on morphology

    NASA Astrophysics Data System (ADS)

    Huang, W.-K.; Chu, H.-K.; Lo, C.-M.; Lin, M.-L.

    2012-04-01

    A large and deep-seated landslide at Hungtsaiping was triggered by the magnitude 7.3 1999 Chi-Chi earthquake. Extensive site investigations of the landslide were conducted, including field reconnaissance, geophysical exploration, borehole logs, and laboratory experiments. Thick colluvium was found around the landslide area, indicating the occurrence of a large ancient landslide. This study presents the catastrophic landslide event which occurred during the Chi-Chi earthquake. The mechanism of the 1999 landslide, which cannot be revealed by the underground exploration data alone, is clarified. This research includes investigations of the landslide kinematic process and the deposition geometry. A 3D discrete element program, PFC3D, was used to model the kinematic process that led to the landslide. The proposed procedure enables a rational and efficient way to simulate the landslide dynamic process. Keywords: Hungtsaiping catastrophic landslide, kinematic process, deposition geometry, discrete element method

  16. Self-assembly of highly efficient, broadband plasmonic absorbers for solar steam generation.

    PubMed

    Zhou, Lin; Tan, Yingling; Ji, Dengxin; Zhu, Bin; Zhang, Pei; Xu, Jun; Gan, Qiaoqiang; Yu, Zongfu; Zhu, Jia

    2016-04-01

    The study of ideal absorbers, which can efficiently absorb light over a broad range of wavelengths, is of fundamental importance, as well as critical for many applications from solar steam generation and thermophotovoltaics to light/thermal detectors. As a result of recent advances in plasmonics, plasmonic absorbers have attracted a lot of attention. However, the performance and scalability of these absorbers, predominantly fabricated by the top-down approach, need to be further improved to enable widespread applications. We report a plasmonic absorber which can enable an average measured absorbance of ~99% across the wavelengths from 400 nm to 10 μm, the most efficient and broadband plasmonic absorber reported to date. The absorber is fabricated through self-assembly of metallic nanoparticles onto a nanoporous template by a one-step deposition process. Because of its efficient light absorption, strong field enhancement, and porous structures, which together enable not only efficient solar absorption but also significant local heating and continuous stream flow, plasmonic absorber-based solar steam generation has over 90% efficiency under solar irradiation of only 4-sun intensity (4 kW m(-2)). The pronounced light absorption effect coupled with the high-throughput self-assembly process could lead toward large-scale manufacturing of other nanophotonic structures and devices.

  17. Self-assembly of highly efficient, broadband plasmonic absorbers for solar steam generation

    PubMed Central

    Zhou, Lin; Tan, Yingling; Ji, Dengxin; Zhu, Bin; Zhang, Pei; Xu, Jun; Gan, Qiaoqiang; Yu, Zongfu; Zhu, Jia

    2016-01-01

    The study of ideal absorbers, which can efficiently absorb light over a broad range of wavelengths, is of fundamental importance, as well as critical for many applications from solar steam generation and thermophotovoltaics to light/thermal detectors. As a result of recent advances in plasmonics, plasmonic absorbers have attracted a lot of attention. However, the performance and scalability of these absorbers, predominantly fabricated by the top-down approach, need to be further improved to enable widespread applications. We report a plasmonic absorber which can enable an average measured absorbance of ~99% across the wavelengths from 400 nm to 10 μm, the most efficient and broadband plasmonic absorber reported to date. The absorber is fabricated through self-assembly of metallic nanoparticles onto a nanoporous template by a one-step deposition process. Because of its efficient light absorption, strong field enhancement, and porous structures, which together enable not only efficient solar absorption but also significant local heating and continuous stream flow, plasmonic absorber–based solar steam generation has over 90% efficiency under solar irradiation of only 4-sun intensity (4 kW m−2). The pronounced light absorption effect coupled with the high-throughput self-assembly process could lead toward large-scale manufacturing of other nanophotonic structures and devices. PMID:27152335

  18. Topological patterns in street networks of self-organized urban settlements

    NASA Astrophysics Data System (ADS)

    Buhl, J.; Gautrais, J.; Reeves, N.; Solé, R. V.; Valverde, S.; Kuntz, P.; Theraulaz, G.

    2006-02-01

    Many urban settlements result from a spatially distributed, decentralized building process. Here we analyze the topological patterns of organization of a large collection of such settlements using the approach of complex networks. The global efficiency (based on the inverse of shortest-path lengths), robustness to disconnections and cost (in terms of length) of these graphs are studied and their possible origins analyzed. A wide range of patterns is found, from tree-like settlements (highly vulnerable to random failures) to meshed urban patterns. The latter are shown to be more robust and efficient.
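
    The efficiency measure here is the standard one: the average of 1/d(i,j) over all node pairs, where d is the shortest-path length. A minimal sketch of the tree-versus-mesh comparison using Python's networkx (toy graphs standing in for the settlement data):

        import random
        import networkx as nx

        # Tree-like settlement vs. meshed (grid-like) urban pattern.
        tree = nx.balanced_tree(r=2, h=4)
        mesh = nx.grid_2d_graph(6, 6)

        for name, g in (("tree", tree), ("mesh", mesh)):
            eff = nx.global_efficiency(g)   # mean of 1/d(i,j) over node pairs
            cost = g.number_of_edges()      # crude proxy for total street length
            g2 = g.copy()
            g2.remove_edge(*random.choice(list(g2.edges())))
            # a tree is split by any single failure; a mesh survives it
            print(name, round(eff, 3), cost, nx.is_connected(g2))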

  19. Instrument to collect fogwater for chemical analysis

    NASA Astrophysics Data System (ADS)

    Jacob, Daniel J.; Waldman, Jed M.; Haghi, Mehrdad; Hoffmann, Michael R.; Flagan, Richard C.

    1985-06-01

    An instrument is presented which collects large samples of ambient fogwater by impaction of droplets on a screen. The collection efficiency of the instrument is determined as a function of droplet size, and it is shown that fog droplets in the 3-100 μm diameter range are efficiently collected. No significant evaporation or condensation occurs at any stage of the collection process. Field testing indicates that samples collected are representative of the ambient fogwater. The instrument may easily be automated, and is suitable for use in routine air quality monitoring programs.

  20. Beyond fossil fuel–driven nitrogen transformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jingguang G.; Crooks, Richard M.; Seefeldt, Lance C.

    Nitrogen is fundamental to all of life and many industrial processes. The interchange of nitrogen oxidation states in the industrial production of ammonia, nitric acid, and other commodity chemicals is largely powered by fossil fuels. Here, a key goal of contemporary research in the field of nitrogen chemistry is to minimize the use of fossil fuels by developing more efficient heterogeneous, homogeneous, photo-, and electrocatalytic processes or by adapting the enzymatic processes underlying the natural nitrogen cycle. These approaches, as well as the challenges involved, are discussed in this Review.

  1. Extending Beowulf Clusters

    USGS Publications Warehouse

    Steinwand, Daniel R.; Maddox, Brian; Beckmann, Tim; Hamer, George

    2003-01-01

    Beowulf clusters can provide a cost-effective way to compute numerical models and process large amounts of remote sensing image data. Usually a Beowulf cluster is designed to accomplish a specific set of processing goals, and processing is very efficient when the problem remains inside the constraints of the original design. There are cases, however, when one might wish to compute a problem that is beyond the capacity of the local Beowulf system. In these cases, spreading the problem to multiple clusters or to other machines on the network may provide a cost-effective solution.

  2. Optically Controlled Distributed Quantum Computing Using Atomic Ensembles As Qubits

    DTIC Science & Technology

    2016-02-23

    Second, the lithium niobate material has a large nonlinear coefficient (>20 pm V⁻¹) for efficient QFC and a wide transparency window (∼350-5200 nm) ... for the 1550 nm + 1570 nm → 780 nm process. Finally, to implement QFC for the 637 and 780 nm light, one would use a pump at 350 nm and a waveguide QPM ... for the 637 nm + 780 nm → 350 nm process. Again, the 350 nm laser can be produced by adopting successive SHG and SFG processes using a 1050 nm laser ...

  3. Beyond fossil fuel–driven nitrogen transformations

    DOE PAGES

    Chen, Jingguang G.; Crooks, Richard M.; Seefeldt, Lance C.; ...

    2018-05-25

    Nitrogen is fundamental to all of life and many industrial processes. The interchange of nitrogen oxidation states in the industrial production of ammonia, nitric acid, and other commodity chemicals is largely powered by fossil fuels. Here, a key goal of contemporary research in the field of nitrogen chemistry is to minimize the use of fossil fuels by developing more efficient heterogeneous, homogeneous, photo-, and electrocatalytic processes or by adapting the enzymatic processes underlying the natural nitrogen cycle. These approaches, as well as the challenges involved, are discussed in this Review.

  4. The Galactic Chemical Evolution of r-Process Elements by Neutron Star Mergers

    NASA Astrophysics Data System (ADS)

    Komiya, Yutaka; Shigeyama, Toshikazu

    Neutron star mergers (NSMs) are prime candidate sources of r-process elements in the universe, but it has been argued that NSMs cannot reproduce the r-process element abundances observed in extremely metal-poor (EMP) stars. We revisit this problem using a new chemical evolution model with merger trees of galaxies. We consider (1) the kiloparsec-scale propagation of NSM ejecta due to their very large velocities and (2) a star formation efficiency that depends on galaxy mass. In our model with these ingredients, NSMs can successfully reproduce the abundance distribution of EMP stars.

  5. Big data challenges for large radio arrays

    NASA Astrophysics Data System (ADS)

    Jones, D. L.; Wagstaff, K.; Thompson, D. R.; D'Addario, L.; Navarro, R.; Mattmann, C.; Majid, W.; Lazio, J.; Preston, J.; Rebbapragada, U.

    2012-03-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields. The Jet Propulsion Laboratory is developing technologies to address big data issues, with an emphasis on three areas: 1) lower-power digital processing architectures to make high-volume data generation operationally affordable, 2) data-adaptive machine learning algorithms for real-time analysis (or "data triage") of large data volumes, and 3) scalable data archive systems that allow efficient data mining and remote user code to run locally where the data are stored.
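
    As a toy illustration of the "data triage" idea (our sketch, not a JPL algorithm), a robust outlier filter can decide in real time which samples of a stream are worth keeping:

        import numpy as np

        def triage(samples, k=5.0):
            """Keep only samples deviating from the bulk by more than k
            robust standard deviations (median/MAD), so that storage sees
            candidate events instead of the raw stream."""
            med = np.median(samples)
            sigma = 1.4826 * np.median(np.abs(samples - med))  # MAD -> sigma
            return np.flatnonzero(np.abs(samples - med) > k * sigma)

        stream = np.random.default_rng(0).normal(size=1_000_000)
        stream[123_456] += 12.0     # inject one synthetic transient
        print(triage(stream))       # candidate indices: the injected sample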

  6. A factor involved in efficient breakdown of supersonic streamwise vortices

    NASA Astrophysics Data System (ADS)

    Hiejima, Toshihiko

    2015-03-01

    Spatially developing processes in supersonic streamwise vortices were numerically simulated at Mach number 5.0. The vortex evolution largely depended on the azimuthal vorticity thickness of the vortices, which governs the negative helicity profile. Large vorticity thickness greatly enhanced the centrifugal instability, with consequent development of perturbations with competing wavenumbers outside the vortex core. During the transition process, supersonic streamwise vortices could generate large-scale spiral structures and a number of hairpin like vortices. Remarkably, the transition caused a dramatic increase in the total fluctuation energy of hypersonic flows, because the negative helicity profile destabilizes the flows due to helicity instability. Unstable growth might also relate to the correlation length between the axial and azimuthal vorticities of the streamwise vortices. The knowledge gained in this study is important for realizing effective fuel-oxidizer mixing in supersonic combustion engines.

  7. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding.

  8. A direct thin-film path towards low-cost large-area III-V photovoltaics

    PubMed Central

    Kapadia, Rehan; Yu, Zhibin; Wang, Hsin-Hua H.; Zheng, Maxwell; Battaglia, Corsin; Hettick, Mark; Kiriya, Daisuke; Takei, Kuniharu; Lobaccaro, Peter; Beeman, Jeffrey W.; Ager, Joel W.; Maboudian, Roya; Chrzan, Daryl C.; Javey, Ali

    2013-01-01

    III-V photovoltaics (PVs) have demonstrated the highest power conversion efficiencies for both single- and multi-junction cells. However, expensive epitaxial growth substrates, low precursor utilization rates, long growth times, and large equipment investments restrict applications to concentrated and space photovoltaics (PVs). Here, we demonstrate the first vapor-liquid-solid (VLS) growth of high-quality III-V thin-films on metal foils as a promising platform for large-area terrestrial PVs overcoming the above obstacles. We demonstrate 1–3 μm thick InP thin-films on Mo foils with ultra-large grain size up to 100 μm, which is ~100 times larger than those obtained by conventional growth processes. The films exhibit electron mobilities as high as 500 cm2/V-s and minority carrier lifetimes as long as 2.5 ns. Furthermore, under 1-sun equivalent illumination, photoluminescence efficiency measurements indicate that an open circuit voltage of up to 930 mV can be achieved, only 40 mV lower than measured on a single crystal reference wafer. PMID:23881474

  9. The Microbial Efficiency-Matrix Stabilization (MEMS) framework integrates plant litter decomposition with soil organic matter stabilization: do labile plant inputs form stable soil organic matter?

    PubMed

    Cotrufo, M Francesca; Wallenstein, Matthew D; Boot, Claudia M; Denef, Karolien; Paul, Eldor

    2013-04-01

    The decomposition and transformation of above- and below-ground plant detritus (litter) is the main process by which soil organic matter (SOM) is formed. Yet, research on litter decay and SOM formation has been largely uncoupled, failing to provide an effective nexus between these two fundamental processes for carbon (C) and nitrogen (N) cycling and storage. We present the current understanding of the importance of microbial substrate use efficiency and C and N allocation in controlling the proportion of plant-derived C and N that is incorporated into SOM, and of soil matrix interactions in controlling SOM stabilization. We synthesize this understanding into the Microbial Efficiency-Matrix Stabilization (MEMS) framework. This framework leads to the hypothesis that labile plant constituents are the dominant source of microbial products, relative to input rates, because they are utilized more efficiently by microbes. These microbial products of decomposition would thus become the main precursors of stable SOM by promoting aggregation and through strong chemical bonding to the mineral soil matrix.

  10. Review of enhanced processes for anaerobic digestion treatment of sewage sludge

    NASA Astrophysics Data System (ADS)

    Liu, Xinyuan; Han, Zeyu; Yang, Jie; Ye, Tianyi; Yang, Fang; Wu, Nan; Bao, Zhenbo

    2018-02-01

    A great amount of sewage sludge is produced each year, leading to serious environmental pollution. Many new technologies have been developed recently, but they are hard to apply at large scale. As one of the traditional technologies, the anaerobic fermentation process is capable of recovering bioenergy through biogas production by the action of microbes. However, the anaerobic process faces new challenges due to the low fermentation efficiency caused by the characteristics of sewage sludge itself. In order to improve the energy yield, enhancement technologies including sewage sludge pretreatment, co-digestion, high-solid digestion and two-stage fermentation processes have been widely studied in the literature and are introduced in this article.

  11. New Design Tool Can Help Cut building Energy Use

    Science.gov Websites

    help almost any architect or engineer evaluate passive solar and efficiency design strategies in a tool that enables them to walk through the design process and understand the consequences of design ..., a feature that tells designers how large of a heating, ventilation and air conditioning (HVAC) system ...

  12. The Possibilities and Limitations of Applying "Open Data" Principles in Schools

    ERIC Educational Resources Information Center

    Selwyn, Neil; Henderson, Michael; Chao, Shu-Hua

    2017-01-01

    Large quantities of data are now being generated, collated and processed within schools through computerised systems and other digital technologies. In response to growing concerns over the efficiency and equity of how these data are used, the concept of "open data" has emerged as a potential means of using digital technology to…

  13. Enterocyte loss of polarity and gut wound healing rely upon the F-actin-severing function of villin

    USDA-ARS?s Scientific Manuscript database

    Efficient wound healing is required to maintain the integrity of the intestinal epithelial barrier because of its constant exposure to a large variety of environmental stresses. This process implies a partial cell depolarization and the acquisition of a motile phenotype that involves rearrangements ...

  14. Redox Flow Batteries, a Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knoxville, U. Tennessee; U. Texas Austin; U, McGill

    2011-07-15

    Redox flow batteries are enjoying a renaissance due to their ability to store large amounts of electrical energy relatively cheaply and efficiently. In this review, we examine the components of redox flow batteries with a focus on understanding the underlying physical processes. The various transport and kinetic phenomena are discussed along with the most common redox couples.

  15. Statistical Techniques for Efficient Indexing and Retrieval of Document Images

    ERIC Educational Resources Information Center

    Bhardwaj, Anurag

    2010-01-01

    We have developed statistical techniques to improve the performance of document image search systems where the intermediate step of OCR based transcription is not used. Previous research in this area has largely focused on challenges pertaining to generation of small lexicons for processing handwritten documents and enhancement of poor quality…

  16. Welding And Cutting A Nickel Alloy By Laser

    NASA Technical Reports Server (NTRS)

    Banas, C. M.

    1990-01-01

    Technique effective and energy-efficient. Report describes evaluation of laser welding and cutting of Inconel(R) 718. Notes that electron-beam welding processes were developed for In-718 but are difficult to use on large or complex structures. Cutting of In-718 by laser is fast and produces only a narrow kerf. Cut edge requires dressing to withstand fatigue.

  17. One dimensional Linescan x-ray detection of pits in fresh cherries

    USDA-ARS?s Scientific Manuscript database

    The presence of pits in processed cherries is a concern for both processors and consumers, in many cases causing injury and potential lawsuits. While machines used for pitting cherries are extremely efficient, if one or more plungers in a pitting head become misaligned, a large number of pits may p...

  18. Big Data Approaches for the Analysis of Large-Scale fMRI Data Using Apache Spark and GPU Processing: A Demonstration on Resting-State fMRI Data from the Human Connectome Project

    PubMed Central

    Boubela, Roland N.; Kalcher, Klaudius; Huf, Wolfgang; Našel, Christian; Moser, Ewald

    2016-01-01

    Technologies for scalable analysis of very large datasets have emerged in the domain of internet computing, but are still rarely used in neuroimaging, despite the existence of data and research questions in need of efficient computational tools, especially in fMRI. In this work, we present software tools for the application of Apache Spark and Graphics Processing Units (GPUs) to neuroimaging datasets, in particular providing distributed file input for 4D NIfTI fMRI datasets in Scala for use in an Apache Spark environment. Examples of using this Big Data platform in graph analysis of fMRI datasets are shown to illustrate how processing pipelines employing it can be developed. With more tools for the convenient integration of neuroimaging file formats and typical processing steps, big data technologies could find wider endorsement in the community, leading to a range of potentially useful applications, especially in view of the current collaborative creation of a wealth of large data repositories including thousands of individual fMRI datasets. PMID:26778951
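
    The paper's tooling is written in Scala; purely to illustrate the same pattern, here is a PySpark sketch that distributes a per-subject summary over a list of hypothetical NIfTI paths (pyspark and nibabel are assumed to be installed; the connectivity summary is invented, not the authors' pipeline):

        import nibabel as nib
        import numpy as np
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("fmri-demo").getOrCreate()
        paths = [f"/data/hcp/subject{i:03d}_rest.nii.gz" for i in range(100)]

        def mean_connectivity(path):
            img = nib.load(path)                  # 4D NIfTI: x, y, z, time
            ts = img.get_fdata().reshape(-1, img.shape[-1])
            ts = ts[ts.std(axis=1) > 0][:500]     # drop empty voxels, subsample
            corr = np.corrcoef(ts)                # voxel-by-voxel correlation
            return float(np.nanmean(np.abs(corr)))

        # one task per subject, spread across the cluster
        results = spark.sparkContext.parallelize(paths, 20) \
                       .map(mean_connectivity).collect()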

  19. Parallel Index and Query for Large Scale Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
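
    FastBit is built around bitmap indexing; the numpy toy below conveys the flavor (the data and bin edges are invented, and FastBit's compressed indexes are far more sophisticated): a range query is answered by OR-ing precomputed per-bin bit vectors rather than scanning the raw array.

        import numpy as np

        rng = np.random.default_rng(1)
        energy = rng.exponential(scale=1.0, size=1_000_000)  # stand-in variable

        # Build one bit vector per value bin (the "index").
        edges = np.array([0.0, 0.5, 1.0, 2.0, 4.0, np.inf])
        bitmaps = [(energy >= lo) & (energy < hi)
                   for lo, hi in zip(edges[:-1], edges[1:])]

        # Query "energy >= 2.0": bins 3 and 4 cover it exactly,
        # so the answer needs no rescan of the raw data.
        hits = np.flatnonzero(bitmaps[3] | bitmaps[4])
        print(hits.size, "matching records")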

  20. Large-strain, multiform movements from designable electrothermal actuators based on large highly anisotropic carbon nanotube sheets.

    PubMed

    Li, Qingwei; Liu, Changhong; Lin, Yuan-Hua; Liu, Liang; Jiang, Kaili; Fan, Shoushan

    2015-01-27

    Many electroactive polymer (EAP) actuators use diverse configurations of carbon nanotubes (CNTs) as pliable electrodes to realize discontinuous, agile movements, for CNTs are conductive and flexible. However, the reported CNT-based EAP actuators could only accomplish simple, monotonous actions. Few actuators were extended to complex devices, because efficiently preparing a large-area CNT electrode was difficult and complex electrode design had not been carried out. In this work, we successfully prepared large-area CNT paper (buckypaper, BP) through an efficient approach. The BP is highly anisotropic, strong, and suitable as a flexible electrode. By means of artful graphic design and processing on BP, we fabricated various functional BP electrodes and developed a series of BP-polymer electrothermal actuators (ETAs). The prepared ETAs can realize various controllable movements, such as large-strain bending (>180°), helical curling (∼630°), or even bionic actuations (imitating human-hand actions). These functional and interesting movements benefit from the flexible electrode design and the anisotropy of the BP material. Owing to the advantages of low driving voltage (20-200 V), electrolyte-free operation, and long service life (over 10000 cycles), we think the ETAs will have great potential applications in the actuator field.

  1. Microprocessor activity controls differential miRNA biogenesis In Vivo.

    PubMed

    Conrad, Thomas; Marsico, Annalisa; Gehre, Maja; Orom, Ulf Andersson

    2014-10-23

    In miRNA biogenesis, pri-miRNA transcripts are converted into pre-miRNA hairpins. The in vivo properties of this process remain enigmatic. Here, we determine in vivo transcriptome-wide pri-miRNA processing using next-generation sequencing of chromatin-associated pri-miRNAs. We identify a distinctive Microprocessor signature in the transcriptome profile from which efficiency of the endogenous processing event can be accurately quantified. This analysis reveals differential susceptibility to Microprocessor cleavage as a key regulatory step in miRNA biogenesis. Processing is highly variable among pri-miRNAs and a better predictor of miRNA abundance than primary transcription itself. Processing is also largely stable across three cell lines, suggesting a major contribution of sequence determinants. On the basis of differential processing efficiencies, we define functionality for short sequence features adjacent to the pre-miRNA hairpin. In conclusion, we identify Microprocessor as the main hub for diversified miRNA output and suggest a role for uncoupling miRNA biogenesis from host gene expression.

  2. Sensitivity of measurement-based purification processes to inner interactions

    NASA Astrophysics Data System (ADS)

    Militello, Benedetto; Napoli, Anna

    2018-02-01

    The sensitivity of a repeated measurement-based purification scheme to additional undesired couplings is analyzed, focusing on the very simple and archetypical system consisting of two two-level systems interacting with a repeatedly measured one. Several regimes are considered and in the strong coupling limit (i.e., when the coupling constant of the undesired interaction is very large) the occurrence of a quantum Zeno effect is proven to dramatically jeopardize the efficiency of the purification process.

  3. Efficiency of Magnetic to Kinetic Energy Conversion in a Monopole Magnetosphere

    NASA Astrophysics Data System (ADS)

    Tchekhovskoy, Alexander; McKinney, Jonathan C.; Narayan, Ramesh

    2009-07-01

    Unconfined relativistic outflows from rotating, magnetized compact objects are often well modeled by assuming that the field geometry is approximately a split-monopole at large radii. Earlier work has indicated that such an unconfined flow has an inefficient conversion of magnetic energy to kinetic energy. This has led to the conclusion that ideal magnetohydrodynamical (MHD) processes fail to explain observations of, e.g., the Crab pulsar wind at large radii where energy conversion appears efficient. In addition, as a model for astrophysical jets, the monopole field geometry has been abandoned in favor of externally confined jets since the latter appeared to be generically more efficient jet accelerators. We perform time-dependent axisymmetric relativistic MHD simulations in order to find steady-state solutions for a wind from a compact object endowed with a monopole field geometry. Our simulations follow the outflow for 10 orders of magnitude in distance from the compact object, which is large enough to study both the initial "acceleration zone" of the magnetized wind as well as the asymptotic "coasting zone." We obtain the surprising result that acceleration is actually efficient in the polar region, which develops a jet despite not being confined by an external medium. Our models contain jets that have sufficient energy to account for moderately energetic long and short gamma-ray burst (GRB) events (~10^51-10^52 erg), collimate into narrow opening angles (opening half-angle θ_j ≈ 0.03 rad), become matter-dominated at large radii (electromagnetic energy flux per unit matter energy flux σ < 1), and move at ultrarelativistic Lorentz factors (γ_j ~ 200 for our fiducial model). The simulated jets have γ_j θ_j ~ 5-15, so they are in principle capable of generating "achromatic jet breaks" in GRB afterglow light curves. By defining a "causality surface" beyond which the jet cannot communicate with a generalized "magnetic nozzle" near the axis of rotation, we obtain approximate analytical solutions for the Lorentz factor that fit the numerical solutions well. This allows us to extend our results to monopole wind models with arbitrary magnetization. Overall, our results demonstrate that the production of ultrarelativistic jets is a more robust process than previously thought.

  4. Highly Efficient p-i-n Perovskite Solar Cells Utilizing Novel Low-Temperature Solution-Processed Hole Transport Materials with Linear π-Conjugated Structure.

    PubMed

    Li, Yang; Xu, Zheng; Zhao, Suling; Qiao, Bo; Huang, Di; Zhao, Ling; Zhao, Jiao; Wang, Peng; Zhu, Youqin; Li, Xianggao; Liu, Xicheng; Xu, Xurong

    2016-09-01

    Alternative low-temperature solution-processed hole-transporting materials (HTMs) without dopant are critical for highly efficient perovskite solar cells (PSCs). Here, two novel small-molecule HTMs with linear π-conjugated structure, 4,4'-bis(4-(di-p-tolyl)aminostyryl)biphenyl (TPASBP) and 1,4-bis(4-(di-p-tolyl)aminostyryl)benzene (TPASB), are applied as the hole-transporting layer (HTL) by a low-temperature (sub-100 °C) solution-processed method in p-i-n PSCs. Compared with the standard poly(3,4-ethylenedioxythiophene):poly(styrenesulfonic acid) (PEDOT:PSS) HTL, both TPASBP and TPASB HTLs can promote the growth of perovskite (CH3NH3PbI3) films consisting of large grains and fewer grain boundaries. Furthermore, hole extraction at the HTL/CH3NH3PbI3 interface and hole transport in the HTL are also more efficient when using TPASBP or TPASB as the HTL. Hence, the photovoltaic performance of the PSCs is dramatically enhanced, leading to high efficiencies of 17.4% and 17.6% for the PSCs using TPASBP and TPASB as the HTL, respectively, which are ≈40% higher than that of the standard PSC using the PEDOT:PSS HTL.

  5. Efficient, inkjet-printed TADF-OLEDs with an ultra-soluble NHetPHOS complex

    NASA Astrophysics Data System (ADS)

    Verma, Anand; Zink, Daniel M.; Fléchon, Charlotte; Leganés Carballo, Jaime; Flügge, Harald; Navarro, José M.; Baumann, Thomas; Volz, Daniel

    2016-03-01

    Using printed organic light-emitting diodes (OLEDs) for lighting, smart-packaging and other mass-market applications has remained a dream since the first working OLED devices were demonstrated in the late 1980s. The realization of this long-term goal is hindered by the very low abundance of iridium and problems when using low-cost wet chemical production processes. Abundant, solution-processable Cu(I) complexes promise to lower the cost of OLEDs. A new copper iodide NHetPHOS emitter was prepared and characterized in solid state with photoluminescence spectroscopy and UV photoelectron spectroscopy under ambient conditions. The photoluminescence quantum efficiency was determined as 92 ± 5 % in a thin film with yellowish-green emission centered around 550 nm. This puts the material on par with the most efficient copper complexes known so far. The new compound showed superior solubility in non-polar solvents, which allowed for the fabrication of an inkjet-printed OLED device from a decalin-based ink formulation. The emission layer could be processed under ambient conditions and was annealed under air. In a very simple stack architecture, efficiency values up to 45 cd A⁻¹ corresponding to 13.9 ± 1.9 % EQE were achieved. These promising results open the door to printed, large-scale OLED devices with abundant copper emitters.

  6. Limitation of Shrinkage Porosity in Aluminum Rotor Die Casting

    NASA Astrophysics Data System (ADS)

    Kim, Young-Chan; Choi, Se-Weon; Kim, Cheol-Woo; Cho, Jae-Ik; Lee, Sung-Ho; Kang, Chang-Seog

    Aluminum rotors are prone to many casting defects, especially large amounts of air and shrinkage porosity, which cause eccentricity, loss and noise during motor operation. Many attempts have been made to develop methods of shrinkage porosity control, but some problems remain to be solved. In this research, a vacuum squeeze die casting process is proposed to limit these defects. Six-pin-point-gated dies capable of local squeeze at the end ring were used. The influence of filling patterns on HPDC was evaluated, and the important process control parameters were high injection speed, squeeze length, venting and process conditions. By using local squeeze and vacuum during filling and solidification, air and shrinkage porosity were significantly reduced and the feeding efficiency at the upper end ring was improved by 10%. As a result of controlling the defects, the dynamometer test showed motor efficiency improved by more than 4%.

  7. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently to Positron Emission Tomography (PET) experiments, because it requires centralized coincidence processing and incurs large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing while maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced.
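
    The decentralized-generation / centralized-coordination idea can be caricatured in a few lines of Python (a toy sketch, not GATE's factory classes; the rates, coincidence window and detector count are invented):

        import heapq
        import random
        from multiprocessing import Pool

        def generate_singles(seed, n=1000):
            """Worker: simulate single events independently of the others."""
            rnd = random.Random(seed)
            t, events = 0.0, []
            for _ in range(n):
                t += rnd.expovariate(1e5)   # ~100 kcps singles rate
                events.append((t, seed))    # (timestamp, detector id)
            return events

        if __name__ == "__main__":
            with Pool(4) as pool:
                streams = pool.map(generate_singles, range(4))
            merged = list(heapq.merge(*streams))  # central time coordinator
            window = 10e-9                        # 10 ns coincidence window
            coincidences = [(a, b) for a, b in zip(merged, merged[1:])
                            if b[0] - a[0] < window and a[1] != b[1]]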

  8. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
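
    In Python terms, the cooperative hand-off looks like the toy sketch below (generators stand in for Henson's coroutine mechanism, and the "physics" is fake):

        def simulation(steps):
            state = {"t": 0, "field": [0.0] * 8}
            for t in range(steps):
                state["t"] = t
                state["field"] = [x + 1.0 for x in state["field"]]  # fake physics
                yield state      # cooperative hand-off, no copy to disk

        def analysis(run):
            # each iteration resumes the simulation up to its next yield
            for state in run:
                print("step", state["t"], "total field =", sum(state["field"]))

        analysis(simulation(steps=3))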

  9. A Single-use Strategy to Enable Manufacturing of Affordable Biologics.

    PubMed

    Jacquemart, Renaud; Vandersluis, Melissa; Zhao, Mochao; Sukhija, Karan; Sidhu, Navneet; Stout, Jim

    2016-01-01

    The current processing paradigm of large manufacturing facilities dedicated to single product production is no longer an effective approach for best manufacturing practices. Increasing competition for new indications and the launch of biosimilars for the monoclonal antibody market have put pressure on manufacturers to produce at lower cost. Single-use technologies and continuous upstream processes have proven to be cost-efficient options to increase biomass production but as of today the adoption has been only minimal for the purification operations, partly due to concerns related to cost and scale-up. This review summarizes how a single-use holistic process and facility strategy can overcome scale limitations and enable cost-efficient manufacturing to support the growing demand for affordable biologics. Technologies enabling high productivity, right-sized, small footprint, continuous, and automated upstream and downstream operations are evaluated in order to propose a concept for the flexible facility of the future.

  10. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.

  11. Excimer laser decontamination

    NASA Astrophysics Data System (ADS)

    Sentis, Marc L.; Delaporte, Philippe C.; Marine, Wladimir; Uteza, Olivier P.

    2000-04-01

    The application of the excimer laser ablation process to the decontamination of radioactive surfaces is discussed. This technology is very attractive because it allows the contaminated particles to be removed efficiently without secondary waste production. To demonstrate the capability of this technology to efficiently decontaminate large areas, we studied and developed a prototype which includes a XeCl laser, an optical fiber delivery system and an ablated-particle collection cell. The main physical processes taking place during UV laser ablation will be explained. The influence of laser wavelength, pulse duration and the absorption coefficient of the material will be discussed. Special studies have been performed to understand the processes which limit the transmission of high-average-power excimer laser light through optical fiber, and to determine the laser conditions that optimize this transmission. In-situ spectroscopic analysis of the laser ablation plasma allows real-time control of the decontamination. The results obtained for paint or metallic oxide removal from stainless steel surfaces will be presented.

  12. New Insights on Hydro-Climate Feedback Processes over the Tropical Ocean from TRMM

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Wu, H. T.; Li, Xiaofan; Sui, C. H.

    2002-01-01

    In this paper, we study hydro-climate feedback processes over the tropical oceans by examining the relationships among the large-scale circulation, Tropical Rainfall Measuring Mission Microwave Imager sea surface temperature (TMI-SST), and a range of TRMM rain products including rain rate, cloud liquid water, precipitable water, cloud types and areal coverage, and precipitation efficiency. Results show that for a warm event (1998), the 28°C threshold of convective precipitation is quite well defined over the tropical oceans. However, for a cold event (1999), the SST threshold is less well defined, especially over the central and eastern Pacific cold tongue, where stratiform rain occurs at much lower than 28°C. Precipitation rates and cloud liquid water are found to be more closely related to the large-scale vertical motion than to the underlying SST, while total columnar water vapor is more strongly dependent on SST. For a large domain over the eastern Pacific, we find that the areal extent of the cloudy region tends to shrink as the SST increases. Examination of the relationship between cloud liquid water and rain rate suggests that the residence time of cloud liquid water tends to be shorter, associated with higher precipitation efficiency, in a warmer climate. It is hypothesized that the reduction in cloudy area may be influenced both by the shift in large-scale cloud patterns in response to changes in large-scale forcings and by a possible increase in the conversion of cloud liquid water to rain water in a warmer environment. Results of numerical experiments with the Goddard cloud-resolving model to test the hypothesis will be discussed.

  13. Graphene oxide-based efficient and scalable solar desalination under one sun with a confined 2D water path

    PubMed Central

    Li, Xiuqiang; Xu, Weichao; Tang, Mingyao; Zhou, Lin; Zhu, Bin; Zhu, Shining; Zhu, Jia

    2016-01-01

    Because it is able to produce desalinated water directly using solar energy with minimum carbon footprint, solar steam generation and desalination is considered one of the most important technologies to address the increasingly pressing global water scarcity. Despite tremendous progress in the past few years, efficient solar steam generation and desalination can only be achieved for rather limited water quantity with the assistance of concentrators and thermal insulation, not feasible for large-scale applications. The fundamental paradox is that the conventional design of direct absorber−bulk water contact ensures efficient energy transfer and water supply but also has intrinsic thermal loss through bulk water. Here, enabled by a confined 2D water path, we report an efficient (80% under one-sun illumination) and effective (four orders salinity decrement) solar desalination device. More strikingly, because of minimized heat loss, high efficiency of solar desalination is independent of the water quantity and can be maintained without thermal insulation of the container. A foldable graphene oxide film, fabricated by a scalable process, serves as efficient solar absorbers (>94%), vapor channels, and thermal insulators. With unique structure designs fabricated by scalable processes and high and stable efficiency achieved under normal solar illumination independent of water quantity without any supporting systems, our device represents a concrete step for solar desalination to emerge as a complementary portable and personalized clean water solution. PMID:27872280

  14. Graphene oxide-based efficient and scalable solar desalination under one sun with a confined 2D water path.

    PubMed

    Li, Xiuqiang; Xu, Weichao; Tang, Mingyao; Zhou, Lin; Zhu, Bin; Zhu, Shining; Zhu, Jia

    2016-12-06

    Because it is able to produce desalinated water directly using solar energy with minimum carbon footprint, solar steam generation and desalination is considered one of the most important technologies to address the increasingly pressing global water scarcity. Despite tremendous progress in the past few years, efficient solar steam generation and desalination can only be achieved for rather limited water quantity with the assistance of concentrators and thermal insulation, not feasible for large-scale applications. The fundamental paradox is that the conventional design of direct absorber-bulk water contact ensures efficient energy transfer and water supply but also has intrinsic thermal loss through bulk water. Here, enabled by a confined 2D water path, we report an efficient (80% under one-sun illumination) and effective (four orders salinity decrement) solar desalination device. More strikingly, because of minimized heat loss, high efficiency of solar desalination is independent of the water quantity and can be maintained without thermal insulation of the container. A foldable graphene oxide film, fabricated by a scalable process, serves as efficient solar absorbers (>94%), vapor channels, and thermal insulators. With unique structure designs fabricated by scalable processes and high and stable efficiency achieved under normal solar illumination independent of water quantity without any supporting systems, our device represents a concrete step for solar desalination to emerge as a complementary portable and personalized clean water solution.

  15. In-depth investigation of spin-on doped solar cells with thermally grown oxide passivation

    NASA Astrophysics Data System (ADS)

    Ahmad, Samir Mahmmod; Cheow, Siu Leong; Ludin, Norasikin A.; Sopian, K.; Zaidi, Saleem H.

    Solar cell industrial manufacturing, based largely on proven semiconductor processing technologies supported by significant advancements in automation, has reached a plateau in terms of cost and efficiency. However, solar cell manufacturing cost (dollars/watt) is still substantially higher than that of fossil fuels. The route to lowering cost may not lie with continuing automation and economies of scale. Alternate fabrication processes with lower cost and environmental sustainability, coupled with self-reliance, simplicity, and affordability, may lead to price compatibility with carbon-based fuels. In this paper, a custom-designed formulation of phosphoric acid has been investigated for n-type doping in p-type substrates as a function of concentration and drive-in temperature. For post-diffusion surface passivation and anti-reflection, thermally grown oxide films of 50-150 nm thickness were grown. These fabrication methods facilitate process simplicity, reduced costs, and environmental sustainability by eliminating poisonous chemicals and toxic gases (POCl3, SiH4, NH3). A simultaneous fire-through contact formation process, based on screen-printed front-surface Ag and back-surface contacts through the thermally grown oxide films, was optimized as a function of the peak temperature in a conveyor belt furnace. The highest-efficiency solar cells fabricated exhibited an efficiency of ∼13%. Analysis of results based on internal quantum efficiency and minority carrier lifetime measurements reveals three contributing factors: high front-surface recombination, low minority carrier lifetime, and higher reflection. Solar cell simulations based on PC1D showed that, with improved passivation, lower reflection, and higher lifetimes, efficiency can be enhanced to match commercially produced PECVD SiN-coated solar cells.

  16. High-Performing Polycarbazole Derivatives for Efficient Solution-Processing of Organic Solar Cells in Air.

    PubMed

    Burgués-Ceballos, Ignasi; Hermerschmidt, Felix; Akkuratov, Alexander V; Susarova, Diana K; Troshin, Pavel A; Choulis, Stelios A

    2015-12-21

    The application of conjugated materials in organic photovoltaics (OPVs) is usually demonstrated in lab-scale spin-coated devices that are processed under controlled inert conditions. Although this is a necessary step to prove high efficiency, testing of promising materials in air should be done in the early stages of research to validate their real potential for low-cost, solution-processed, and large-scale OPVs. Also relevant for approaching commercialization needs is the use of printing techniques that are compatible with upscaling. Here, solution processing of organic solar cells based on three new poly(2,7-carbazole) derivatives is efficiently transferred, without significant losses, to air conditions and to several deposition methods using a simple device architecture. High efficiencies in the range between 5.0 % and 6.3 % are obtained in (rigid) spin-coated, doctor-bladed, and (flexible) slot-die-coated devices, which surpass the reference devices based on poly[N-9'-heptadecanyl-2,7-carbazole-alt-5,5-(4',7'-di-2-thienyl-2',1',3'-benzothiadiazole)] (PCDTBT). In contrast, inkjet printing does not provide reliable results with the presented polymers, which is attributed to their high molecular weight. When the device area in the best-performing system is increased from 9 mm(2) to 0.7 cm(2), the efficiency drops from 6.2 % to 5.0 %. Photocurrent mapping reveals inhomogeneous current generation derived from changes in the thickness of the active layer.

  17. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being taken in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, visually examining the images and detecting and classifying the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds the mark of 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for the automatic detection of different sea bird species. Large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to efficiently handle those image sizes and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined use of a simple, performant feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for the determination of feature vectors for subsequent elimination of false candidates and for classification tasks.
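
    A minimal sketch of this local-detector-plus-global-segmentation pattern in Python with OpenCV (the file name, blob size limits and GrabCut settings are illustrative assumptions, not the authors' detector):

        import cv2
        import numpy as np

        img = cv2.imread("aerial_frame.tif")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Stage 1: cheap local blob detection on the full frame.
        params = cv2.SimpleBlobDetector_Params()
        params.filterByArea = True
        params.minArea, params.maxArea = 30, 500   # bird-sized blobs (pixels)
        keypoints = cv2.SimpleBlobDetector_create(params).detect(gray)

        # Stage 2: expensive global segmentation on small sub-images only.
        for kp in keypoints:
            x, y = int(kp.pt[0]), int(kp.pt[1])
            r = max(int(kp.size), 8)
            roi = img[max(y - 2 * r, 0):y + 2 * r, max(x - 2 * r, 0):x + 2 * r]
            mask = np.zeros(roi.shape[:2], np.uint8)
            bgd = np.zeros((1, 65), np.float64)
            fgd = np.zeros((1, 65), np.float64)
            rect = (r // 2, r // 2, 3 * r, 3 * r)  # rough box around the blob
            cv2.grabCut(roi, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
            bird = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
            # 'bird' feeds feature extraction and species classification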

  18. Evaluation of biochar powder on oxygen supply efficiency and global warming potential during mainstream large-scale aerobic composting.

    PubMed

    He, Xueqin; Chen, Longjian; Han, Lujia; Liu, Ning; Cui, Ruxiu; Yin, Hongjie; Huang, Guangqun

    2017-12-01

    This study investigated the effects of biochar powder on oxygen supply efficiency and global warming potential (GWP) in the large-scale aerobic composting pattern used in China, which combines cyclical forced turning with aeration at the bottom of composting tanks. A 55-day large-scale aerobic composting experiment was conducted with two groups, without and with 10% biochar powder addition (by weight). The results show that biochar powder improves oxygen retention, with O2 concentrations above 5% for around 80% of the composting period. The composting process with the above pattern significantly reduced CH4 and N2O emissions compared to static or turning-only styles. The average GWP of the biochar (BC) group was 19.82% lower than that of the control (CK) group, suggesting that rational addition of biochar powder has the potential to reduce the energy consumption of turning, improve the effectiveness of the oxygen supply, and reduce comprehensive greenhouse effects. Copyright © 2017. Published by Elsevier Ltd.
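
    For readers unfamiliar with how CH4 and N2O emissions fold into a single GWP figure, a minimal sketch follows; the 100-year factors are IPCC AR5 values and the emission totals are purely illustrative, since the paper's own factors and totals are not reproduced here.

    ```python
    # Sketch of combining measured CH4 and N2O emissions into a CO2-equivalent
    # global warming potential. Factors are IPCC AR5 100-year values; the
    # emission numbers below are hypothetical, not the study's data.
    GWP_CH4, GWP_N2O = 28.0, 265.0  # kg CO2-eq per kg of gas

    def gwp_co2e(co2_kg, ch4_kg, n2o_kg):
        return co2_kg + GWP_CH4 * ch4_kg + GWP_N2O * n2o_kg

    bc = gwp_co2e(co2_kg=50.0, ch4_kg=0.6, n2o_kg=0.04)  # biochar group (illustrative)
    ck = gwp_co2e(co2_kg=55.0, ch4_kg=0.9, n2o_kg=0.06)  # control group (illustrative)
    print(f"GWP reduction: {100 * (ck - bc) / ck:.1f}%")
    ```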

  19. Enhanced light extraction of plastic scintillator using large-area photonic crystal structures fabricated by hot embossing.

    PubMed

    Chen, Xueye; Liu, Bo; Wu, Qiang; Zhu, Zhichao; Zhu, Jingtao; Gu, Mu; Chen, Hong; Liu, Jinliang; Chen, Liang; Ouyang, Xiaoping

    2018-04-30

    Plastic scintillators are widely used in various radiation measurement systems. However, detection efficiency and signal-to-noise ratio are limited by total internal reflection, especially in weak-signal detection situations. In the present investigation, large-area photonic crystals consisting of an array of periodic truncated-cone holes were prepared by hot embossing and coupled to the surface of a plastic scintillator to improve light extraction efficiency and directionality control. The experimental results show that a maximum enhancement of 64% at a 25° emergence angle along the Γ-M orientation and a maximum enhancement of 58% at a 20° emergence angle along the Γ-K orientation were obtained. The proposed fabrication method for the photonic crystal scintillator avoids the complicated pattern-transfer processes used in most traditional methods, leading to a simple, economical route to large-area preparation. The photonic crystal scintillator demonstrated in this work is of great value for practical applications of nuclear radiation detection.

  20. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions that are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared to methods that rely on exact linear solves, indicating the tremendous potential of the 'inexact solve' concept. Finally, the scheme that generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
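
    A minimal sketch of the inexact shift-invert idea on a toy generalized eigenproblem is shown below; it replaces the exact factorization of K - σM with a truncated CG solve inside SciPy's shift-invert Lanczos driver. The paper's ISIL/IJOCC scheme and multilevel preconditioner are far more elaborate; the matrices, shift, and iteration cap here are hypothetical.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg, eigsh

    # Toy generalized eigenproblem K x = lambda M x. The exact factorization
    # of (K - sigma*M) normally used in shift-invert Lanczos is replaced by
    # a truncated CG iteration, i.e. an "inexact" inner solve.
    n = 500
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    M = sp.identity(n, format="csc")
    sigma = -0.5                     # shift below the spectrum targets the smallest eigenvalues
    A = (K - sigma * M).tocsc()      # SPD here, so CG is applicable

    def inexact_solve(b):
        x, _ = cg(A, b, maxiter=50)  # capped iterations = loose inner solve
        return x

    OPinv = LinearOperator((n, n), matvec=inexact_solve, dtype=np.float64)

    # eigsh runs shift-invert Lanczos, applying OPinv in place of a direct solve.
    vals, vecs = eigsh(K, k=5, M=M, sigma=sigma, OPinv=OPinv, which="LM")
    print(np.sort(vals))
    ```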

  1. Large-area high-power VCSEL pump arrays optimized for high-energy lasers

    NASA Astrophysics Data System (ADS)

    Wang, Chad; Geske, Jonathan; Garrett, Henry; Cardellino, Terri; Talantov, Fedor; Berdin, Glen; Millenheft, David; Renner, Daniel; Klemer, Daniel

    2012-06-01

    Practical, large-area, high-power diode pumps for one-micron (Nd, Yb) as well as eye-safer wavelengths (Er, Tm, Ho) are critical to the success of any high-energy diode-pumped solid state laser. Diode efficiency, brightness, availability and cost will determine how realizable a fielded high-energy diode-pumped solid state laser will be. 2-D Vertical-Cavity Surface-Emitting Laser (VCSEL) arrays are uniquely positioned to meet these requirements because of properties such as low-divergence circular output beams, reduced wavelength drift with temperature, scalability to large 2-D arrays through low-cost and high-volume semiconductor photolithographic processes, high reliability, no catastrophic optical damage failure, and tolerance of radiation and vacuum operation. Data will be presented on the status of FLIR-EOC's VCSEL pump arrays. Analysis of the key aspects of electrical, thermal and mechanical design that are critical to achieving high-power, efficient VCSEL pump array performance will be presented.

  2. Carnot cycle at finite power: attainability of maximal efficiency.

    PubMed

    Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G

    2013-08-02

    We want to understand whether, and to what extent, the maximal (Carnot) efficiency for heat engines can be reached at finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows the construction of engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at large power.
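
    For reference, the Carnot bound discussed above, stated as a short LaTeX note (standard textbook material, not taken from the paper):

    ```latex
    % Carnot efficiency of an engine working between a hot bath at temperature
    % T_h and a cold bath at T_c < T_h:
    \[
      \eta_C = 1 - \frac{T_c}{T_h}.
    \]
    % The abstract's claim: for realistic engine-bath interactions, pushing the
    % efficiency eta -> eta_C makes the cycle time tau diverge, so the power
    % P = W / tau vanishes; stopping near 0.9 eta_C still permits finite power.
    ```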

  3. Treatment of sugar processing industry effluent up to remittance limits: Suitability of hybrid electrode for electrochemical reactor.

    PubMed

    Sahu, Omprakash

    2017-01-01

    The sugar industry is one of the oldest established industries in the world. It requires, and discharges, a large amount of water for processing. Removal of chemical oxygen demand (COD) and color through the electrochemical process with a hybrid iron and aluminum electrode was examined for the treatment of cane-based sugar industry wastewater. Under the most favorable conditions (pH 6.5, inter-electrode gap 20 mm, current density 156 A m⁻², electrolyte concentration 0.5 M and reaction time 120 min), 90% COD and 93.5% color removal was achieved. The sludge generated after treatment has a low organic content and can be used as manure for agricultural crops. Overall, electrocoagulation was found to be reliable, efficient and economically fit to treat sugar industry wastewater. •Electrocoagulation method for sugar processing industry wastewater treatment. •Optimization of operating parameters for maximum efficiency. •Physicochemical analysis of sludge and scum. •Significance of hybrid metal electrode for pollutant removal.

  4. Influence of injection temperatures and fiberglass compositions on mechanical properties of polypropylene

    NASA Astrophysics Data System (ADS)

    Keey, Tony Tiew Chun; Azuddin, M.

    2017-06-01

    Injection molding is one of the most suitable and cost-efficient mass-manufacturing processes for polymeric parts owing to its high efficiency in large-scale production. When products and components are scaled down, the limits of the conventional injection molding process are reached. These constraints initiated the development of conventional injection molding into micro injection molding technology. In this study, fiberglass-reinforced polypropylene (PP) materials with various glass fiber percentages were used. The study starts with the fabrication of micro tensile specimens at three injection temperatures, 260°C, 270°C and 280°C, for different weight percentages of fiberglass-reinforced PP, and then evaluates the effects of injection temperature on the tensile properties of the specimens. Different weight percentages of fiberglass-reinforced PP were tested as well, and 20% fiberglass-reinforced PP showed the greatest percentage increase in tensile strength with increasing temperature.

  5. Controllable lasing performance in solution-processed organic-inorganic hybrid perovskites.

    PubMed

    Kao, Tsung Sheng; Chou, Yu-Hsun; Hong, Kuo-Bin; Huang, Jiong-Fu; Chou, Chun-Hsien; Kuo, Hao-Chung; Chen, Fang-Chung; Lu, Tien-Chang

    2016-11-03

    Solution-processed organic-inorganic perovskites are fascinating due to their remarkable photo-conversion efficiency and great potential in the cost-effective, versatile and large-scale manufacturing of optoelectronic devices. In this paper, we demonstrate that the perovskite nanocrystal sizes can be simply controlled by manipulating the precursor solution concentrations in a two-step sequential deposition process, thus achieving the feasible tunability of excitonic properties and lasing performance in hybrid metal-halide perovskites. The lasing threshold is at around 230 μJ cm⁻² in this solution-processed organic-inorganic lead-halide material, which is comparable to the colloidal quantum dot lasers. The efficient stimulated emission originates from the multiple random scattering provided by the micro-meter scale rugged morphology and polycrystalline grain boundaries. Thus the excitonic properties in perovskites exhibit high correlation with the formed morphology of the perovskite nanocrystals. Compared to the conventional lasers normally serving as a coherent light source, the perovskite random lasers are promising in making low-cost thin-film lasing devices for flexible and speckle-free imaging applications.

  6. Conceptual Design of Low-Temperature Hydrogen Production and High-Efficiency Nuclear Reactor Technology

    NASA Astrophysics Data System (ADS)

    Fukushima, Kimichika; Ogawa, Takashi

    Hydrogen, a potential alternative energy source, is produced commercially by methane (or LPG) steam reforming, a process that requires high temperatures, which are produced by burning fossil fuels. However, as this process generates large amounts of CO2, replacement of the combustion heat source with a nuclear heat source for 773-1173K processes has been proposed in order to eliminate these CO2 emissions. In this paper, a novel method of nuclear hydrogen production by reforming dimethyl ether (DME) with steam at about 573K is proposed. From a thermodynamic equilibrium analysis of DME steam reforming, the authors identified conditions that provide a high hydrogen production fraction at low pressure and temperatures of about 523-573K. By setting this low-temperature hydrogen production process upstream of a turbine and nuclear reactor at about 573K, the total energy utilization efficiency according to equilibrium mass and heat balance analysis is about 50%, and 75% for a fast breeder reactor (FBR), where the turbine is upstream of the reformer.

  7. Highly efficient chemical process to convert mucic acid into adipic acid and DFT studies of the mechanism of the rhenium-catalyzed deoxydehydration.

    PubMed

    Li, Xiukai; Wu, Di; Lu, Ting; Yi, Guangshun; Su, Haibin; Zhang, Yugen

    2014-04-14

    The production of bulk chemicals and fuels from renewable bio-based feedstocks is of significant importance for the sustainability of human society. Adipic acid, one of the most-demanded drop-in chemicals from a bioresource, is used primarily for the large-volume production of nylon-6,6 polyamide. It is highly desirable to develop sustainable and environmentally friendly processes for the production of adipic acid from renewable feedstocks; however, currently there is no suitable bio-adipic acid synthesis process. Demonstrated herein is a highly efficient synthetic protocol for the conversion of mucic acid into adipic acid through the oxorhenium-complex-catalyzed deoxydehydration (DODH) reaction and subsequent Pt/C-catalyzed transfer hydrogenation. Quantitative yields (99 %) were achieved for the conversion of mucic acid into muconic acid and adipic acid either in separate sequences or in a one-step process. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    NASA Astrophysics Data System (ADS)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards `over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecast System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the MizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.

  9. Development of Advanced Deposition Technology for Microcrystalline Si Based Solar Cells and Modules: Final Technical Report, 1 May 2002-31 July 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y. M.

    2004-12-01

    The key objective of this subcontract was to take the first steps to extend the radio-frequency plasma-enhanced chemical vapor deposition (RF-PECVD) manufacturing technology of Energy Photovoltaics, Inc. (EPV), to the promising field of a-Si/nc-Si solar cell fabrication by demonstrating ''proof-of-concept'' devices of good efficiencies that previously were believed to be unobtainable in single-chamber reactors owing to contamination problems. A complementary goal was to find a new high-rate deposition method that can conceivably be deployed in large PECVD-type reactors. We emphasize that our goal was not to produce 'champion' devices of near-record efficiencies, but rather, to achieve modestly high efficiencies using a far simpler (cheaper) system, via practical processing methods and materials. To directly attack issues in solar-cell fabrication at EPV, the nc-Si thin films were studied almost exclusively in the p-i-n device configuration (as absorbers or i-layers), not as stand-alone films. Highly efficient, p-i-n type, nc-Si-based solar cells are generally grown on expensive, laboratory superstrates, such as custom ZnO/glass of high texture (granular surface) and low absorption. Also standard was the use of a highly effective ZnO/Ag back-reflector, where the ZnO can be surface-textured for efficient diffuse reflection. The high-efficiency ''champion'' devices made by the PECVD methods were invariably prepared in sophisticated (i.e., expensive), multi-chamber, or at least load-locked deposition systems. The electrode utilization efficiency, defined as the surface-area ratio of the powered electrode to that of the substrates, was typically low at about one (1:1). To evaluate the true potential of nc-Si absorbers for cost-competitive, commercially viable manufacturing of large-area PV modules, we took a more down-to-earth approach, based on our proven production of a-Si PV modules by a massively parallel batch process in single-chamber RF-PECVD systems, to the study of nc-Si solar cells, with the aim of producing high-efficiency a-Si/nc-Si solar cells and sub-modules.

  10. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate a large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes needs less computation, but this negatively affects the accuracy of model results and restricts the physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of hydrologic processes that are dominant in different parts of the basin differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, the model data structure, and parallel numerical algorithms to obtain high performance.

  11. Use of prismatic films to control light distribution

    NASA Technical Reports Server (NTRS)

    Kneipp, K. G.

    1994-01-01

    Piping light for illumination purposes is a concept which has been around for a long time. In fact, it was the subject of an 1881 United States patent which proposed the use of mirrors inside a tube to reflect light from wall to wall down the tube. The use of conventional mirrors for this purpose, however, has not worked because mirrors do not reflect well enough. On the other hand, optical fibers composed of certain glasses or plastics are known to transport light much more efficiently. The light that enters is reflected back and forth within the walls of the fiber until it reaches the other end. This is possible by means of a principle known as 'total internal reflection'. No light escapes through the walls and very little is absorbed in the bulk of the fiber. However, while optical fibers are very efficient in transporting light, they are impractical for transporting large quantities of light. Lorne Whitehead, as a student at the University of British Columbia, recognized that prismatic materials could be used to create a 'prism light guide', a hollow structure that can efficiently transport large quantities of light. This invention is a pipe whose transparent walls are formed on the outside into precise prismatic facets. The facets are efficient total internal reflection mirrors which prevent light travelling down the guide from escaping. Very little light is absorbed by the pipe because light travels primarily in the air space within the hollow guide. And, because the guide is hollow, weight and cost factors are much more favorable than would be the case with very large solid fibers. Recent advances in precision micromachining, polymer processing, and certain other manufacturing technologies have made the development of OLF (Optical Lighting Film) possible. The process is referred to as 'microreplication' and has been found to have broad applicability in a number of diverse product areas.
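
    The principle at work in both the fiber and the prism light guide can be stated compactly; assuming an acrylic film (n ≈ 1.49) in air, the numbers below are standard optics, not values taken from the article:

    ```latex
    % Total internal reflection: a ray inside a medium of index n1 hitting the
    % boundary with a medium of index n2 < n1 is fully reflected whenever its
    % angle of incidence exceeds the critical angle
    \[
      \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right).
    \]
    % For acrylic in air (n1 ~ 1.49, n2 = 1) this gives theta_c ~ 42 degrees,
    % which the precise prismatic facets of the guide exploit to keep light
    % travelling down the hollow pipe.
    ```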

  12. A novel dismantling process of waste printed circuit boards using water-soluble ionic liquid.

    PubMed

    Zeng, Xianlai; Li, Jinhui; Xie, Henghua; Liu, Lili

    2013-10-01

    Recycling processes for waste printed circuit boards (WPCBs) have been well established in terms of scientific research and field pilots. However, current dismantling procedures for WPCBs have restricted the recycling process, due to their low efficiency and negative impacts on environmental and human health. This work aimed to seek an environmentally friendly dismantling process through heating with a water-soluble ionic liquid to separate electronic components and tin solder from two main types of WPCBs: cathode ray tubes and computer mainframes. The work systematically investigates the influencing factors, heating mechanism, and optimal parameters for opening solder connections on WPCBs during the dismantling process, and addresses its environmental performance and economic assessment. The results obtained demonstrate that the optimal temperature, retention time, and turbulence resulting from impeller rotation during the dismantling process were 250 °C, 12 min, and 45 rpm, respectively. Nearly 90% of the electronic components were separated from the WPCBs under the optimal experimental conditions. This novel process offers the possibility of large industrial-scale operations for separating electronic components and recovering tin solder, and of a more efficient and environmentally sound process for WPCB recycling. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Treatment of winery wastewater by physicochemical, biological and advanced processes: a review.

    PubMed

    Ioannou, L A; Li Puma, G; Fatta-Kassinos, D

    2015-04-09

    Winery wastewater is a major waste stream resulting from the numerous cleaning operations that occur during the production stages of wine. The resulting effluent contains various organic and inorganic contaminants and its environmental impact is notable, mainly due to its high organic/inorganic load, the large volumes produced and its seasonal variability. Several processes for the treatment of winery wastewater are currently available, but the development of alternative treatment methods is necessary in order to (i) maximize the efficiency and flexibility of the treatment process to meet the discharge requirements for winery effluents, and (ii) decrease both the environmental footprint and the investment/operational costs of the process. This review presents the state of the art of the processes currently applied and/or tested for the treatment of winery wastewater, divided into five categories: physicochemical, biological, membrane filtration and separation, advanced oxidation processes, and combined biological and advanced oxidation processes. The advantages and disadvantages, as well as the main parameters/factors affecting the efficiency of winery wastewater treatment, are discussed. Both bench- and pilot/industrial-scale processes have been considered for this review. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. The Generation of Continents through Subduction Zone Processing of Large Igneous Provinces: A Case Study from the Central American Subduction Zone

    NASA Astrophysics Data System (ADS)

    Harmon, N.; Rychert, C.

    2013-12-01

    Billions of years ago, primary mantle magmas evolved to form the continental crust, although no simple magmatic differentiation process explains the progression to the average andesitic crustal compositions observed today. A multiple-stage process involving subduction and/or oceanic plumes is often invoked to explain the strong depletion observed in Archean xenoliths as well as the pervasive tonalite-trondhjemite-granodiorite and komatiite protoliths in the greenstone belts of the cratonic crust. Studying modern-day analogues of oceanic plateaus that are currently interacting with subduction zones can provide insights into continental crust formation. Here we use surface waves to image crustal isotropic and radially anisotropic shear velocity structure above the Central American subduction system in Nicaragua and Costa Rica, which juxtaposes thickened ocean island plateau crust in Costa Rica with continental/normal oceanic crust in Nicaragua. We find low velocities beneath the active arc regions (3-6% slower than the surrounding region) and up to 6% radially anisotropic structures within the oceanic crust of the Caribbean Large Igneous Province beneath Costa Rica. The low velocities and radial anisotropy suggest the anomalies are due to pervasive deep crustal magma sills. The inferred sill structures correlate spatially with increased silicic outputs in northern Costa Rica, indicating that deep differentiation of primary magmas is more efficient beneath Costa Rica relative to Nicaragua. Subduction zone alteration of large igneous provinces promotes efficient, deep processing of primary basalts to continental crust. This scenario can explain the formation of continental lithosphere and crust by both providing strongly depleted mantle lithosphere and a means for rapidly generating a silicic crustal composition.

  15. (Un)Natural Disasters: The Electoral Cycle Outweighs the Hydrologic Cycle in Drought Declaration in Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2016-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedented amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the Copernicus Sentinels and the super-spectral EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMED, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian Processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation for temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. An excellent compromise between accuracy and scalability is obtained in all applications.
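
    As a concrete illustration of the random Fourier feature strategy mentioned above, the sketch below approximates an RBF kernel with an explicit feature map so that a GP-style regression scales linearly in the number of samples; the data shapes and hyperparameters are hypothetical, not those of the IASI or Seviri/MSG applications.

    ```python
    import numpy as np

    # Random Fourier feature sketch: an RBF kernel
    # k(x, y) = exp(-||x - y||^2 / (2 l^2)) is approximated by an explicit
    # D-dimensional feature map, so the regression costs O(n D^2) instead of
    # the O(n^3) of an exact kernel solve.
    rng = np.random.default_rng(0)
    n, d, D, ell = 10_000, 8, 256, 1.0   # samples, input dim, features, lengthscale

    X = rng.normal(size=(n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)   # synthetic target

    W = rng.normal(scale=1.0 / ell, size=(d, D))     # frequencies from the kernel's spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)         # Z @ Z.T approximates the kernel matrix

    # Ridge/GP-mean solve in feature space
    lam = 1e-2
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
    y_hat = Z @ w
    print(f"train RMSE: {np.sqrt(np.mean((y_hat - y) ** 2)):.3f}")
    ```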

  16. Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines

    PubMed Central

    Mikut, Ralf

    2017-01-01

    Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927
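
    A minimal sketch of the fuzzy-membership idea, with hypothetical bounds: instead of hard-thresholding, each candidate object receives a graded plausibility from a trapezoidal membership over an expected property, and memberships from several operators combine via a fuzzy AND.

    ```python
    import numpy as np

    # Prior knowledge as a fuzzy membership (bounds hypothetical): candidates
    # get a degree of plausibility from a trapezoidal membership over an
    # expected property, here object size in pixels.
    def trapezoid(x, a, b, c, d):
        # 0 below a, linear ramp up on [a, b], 1 on [b, c], ramp down on [c, d]
        return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

    sizes = np.array([5.0, 40.0, 120.0, 400.0])       # candidate sizes in pixels
    size_prior = trapezoid(sizes, 20.0, 50.0, 200.0, 300.0)

    # Combining evidence from several operators: fuzzy AND via the minimum,
    # so downstream steps see graded uncertainty rather than a binary mask.
    intensity_prior = np.array([0.9, 0.8, 1.0, 0.3])
    plausibility = np.minimum(size_prior, intensity_prior)
    print(plausibility)
    ```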

  17. Functionalization of nanomaterials by non-thermal large area atmospheric pressure plasmas: application to flexible dye-sensitized solar cells.

    PubMed

    Jung, Heesoo; Park, Jaeyoung; Yoo, Eun Sang; Han, Gill-Sang; Jung, Hyun Suk; Ko, Min Jae; Park, Sanghoo; Choe, Wonho

    2013-09-07

    A key challenge to the industrial application of nanotechnology is the development of fabrication processes for functional devices based on nanomaterials which can be scaled up for mass production. In this report, we disclose the results of non-thermal radio-frequency (rf) atmospheric pressure plasma (APP) based deposition of TiO2 nanoparticles on a flexible substrate for the fabrication of dye-sensitized solar cells (DSSCs). Operating at 190 °C without a vacuum enclosure, the APP method can avoid thermal damage and vacuum compatibility restrictions and utilize roll-to-roll processing over a large area. The various analyses of the TiO2 films demonstrate that superior film properties can be obtained by the non-thermal APP method when compared with the thermal sintering process operating at 450 °C. The crystallinity of the anatase TiO2 nanoparticles is significantly improved without thermal agglomeration, while the surface defects such as Ti(3+) ions are eliminated, thus providing efficient charge collecting properties for solar cells. Finally, we successfully fabricated a flexible DSSC with an energy conversion efficiency of 4.2% using a transparent plastic substrate. This work demonstrates the potential of non-thermal APP technology in the area of device-level, nano-enabled material manufacturing.

  18. Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Lang, Michael; Pakin, Scott

    2011-09-30

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wavefront processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.

  19. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J; Lang, Michael; Pakin, Scott

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model, and an implementation on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
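
    The dependency structure both abstracts describe can be made concrete with a small serial sketch: cell (i, j) waits on its upstream neighbors, and all cells on one anti-diagonal are mutually independent, which is exactly the parallelism the hierarchical scheme maps onto cores, sockets, and nodes. The kernel below is a stand-in, not the papers' actual sweep code.

    ```python
    import numpy as np

    # Serial sketch of wave-front processing: cell (i, j) can only be computed
    # after its upstream neighbors (i-1, j) and (i, j-1). Each anti-diagonal d
    # is one "wavefront" whose cells could all be processed in parallel.
    nx, ny = 6, 4
    grid = np.zeros((nx, ny))

    for d in range(nx + ny - 1):                      # sweep wavefront by wavefront
        for i in range(max(0, d - ny + 1), min(nx, d + 1)):
            j = d - i
            up = grid[i - 1, j] if i > 0 else 0.0
            left = grid[i, j - 1] if j > 0 else 0.0
            grid[i, j] = up + left + 1.0              # stand-in for the sweep kernel
    ```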

  20. Molecular epidemiology biomarkers-Sample collection and processing considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Nina T.; Pfleger, Laura; Berger, Eileen

    2005-08-07

    Biomarker studies require processing and storage of numerous biological samples with the goals of obtaining a large amount of information and minimizing future research costs. An efficient study design includes provisions for processing of the original samples, such as cryopreservation, DNA isolation, and preparation of specimens for exposure assessment. Use of standard, two-dimensional and nanobarcodes and customized electronic databases assures efficient management of large sample collections and tracking of the results of data analyses. Standard operating procedures and quality control plans help to protect sample quality and to assure the validity of the biomarker data. Specific state, federal and international regulations are in place regarding research with human samples, governing areas including custody, safety of handling, and transport of human samples. Appropriate informed consent must be obtained from the study subjects prior to sample collection and the confidentiality of results maintained. Finally, examples of three biorepositories of different scale (European Cancer Study, National Cancer Institute and School of Public Health Biorepository, University of California, Berkeley) are used to illustrate challenges faced by investigators and the ways to overcome them. New software and biorepository technologies are being developed by many companies that will help to bring biological banking to the new level required by the molecular epidemiology of the 21st century.

  1. Maximizing Energy Savings Reliability in BC Hydro Industrial Demand-side Management Programs: An Assessment of Performance Incentive Models

    NASA Astrophysics Data System (ADS)

    Gosman, Nathaniel

    For energy utilities faced with expanded jurisdictional energy efficiency requirements and pursuing demand-side management (DSM) incentive programs in the large industrial sector, performance incentive programs can be an effective means to maximize the reliability of planned energy savings. Performance incentive programs balance the objectives of high participation rates with persistent energy savings by: (1) providing financial incentives and resources to minimize constraints to investment in energy efficiency, and (2) requiring that incentive payments be dependent on measured energy savings over time. As BC Hydro increases its DSM initiatives to meet the Clean Energy Act objective to reduce at least 66 per cent of new electricity demand with DSM by 2020, the utility is faced with a higher level of DSM risk, or uncertainties that impact the cost-effective acquisition of planned energy savings. For industrial DSM incentive programs, DSM risk can be broken down into project development and project performance risks. Development risk represents the project ramp-up phase and is the risk that planned energy savings do not materialize due to low customer response to program incentives. Performance risk represents the operational phase and is the risk that planned energy savings do not persist over the effective measure life. DSM project development and performance risks are, in turn, a result of industrial economic, technological and organizational conditions, or DSM risk factors. In the BC large industrial sector, and characteristic of large industrial sectors in general, these DSM risk factors include: (1) capital constraints to investment in energy efficiency, (2) commodity price volatility, (3) limited internal staffing resources to deploy towards energy efficiency, (4) variable load, process-based energy saving potential, and (5) a lack of organizational awareness of an operation's energy efficiency over time (energy performance). This research assessed the capacity of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), the energy efficiency measures targeted, and the involvement of the private sector. A multi-criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program cost-effectiveness, targeted energy savings potential in BC, and survey results from BC industrial firms on the program models.
The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy intensive industrial sectors.

  2. Modelling the link amongst fine-pore diffuser fouling, oxygen transfer efficiency, and aeration energy intensity.

    PubMed

    Garrido-Baserba, Manel; Sobhani, Reza; Asvapathanagul, Pitiporn; McCarthy, Graham W; Olson, Betty H; Odize, Victory; Al-Omari, Ahmed; Murthy, Sudhir; Nifong, Andrea; Godwin, Johnnie; Bott, Charles B; Stenstrom, Michael K; Shaw, Andrew R; Rosso, Diego

    2017-03-15

    This research systematically studied the behavior of aeration diffuser efficiency over time, and its relation to the energy usage per diffuser. Twelve diffusers were selected for a one-year fouling study. Comprehensive aeration efficiency projections were carried out in two WRRFs with different influent rates, and the influence of operating conditions on aeration diffusers' performance was demonstrated. This study showed that the initial energy use, during the first year of operation, of those aeration diffusers located in high-rate systems (solids retention time, SRT, less than 2 days) increased more than 20% in comparison to conventional systems (SRT greater than 2 days). Diffusers operating for three years in conventional systems presented the same fouling characteristics as those deployed in high-rate processes for less than 15 months. A new procedure was developed to accurately project the energy consumption of aeration diffusers, including the impacts of operating conditions, such as SRT and organic loading rate, on specific aeration diffuser materials (i.e. silicone, polyurethane, EPDM, ceramic). Furthermore, it considers the microbial colonization dynamics, which successfully correlated with the increase in energy consumption (r²: 0.82 ± 0.07). The presented energy model projected the energy costs and the potential savings for the diffusers after three years in operation under different operating conditions. Whereas the most efficient diffusers gave potential costs spanning from 4,900 USD/month for a small plant (20 MGD, or 74,500 m³/d) up to 24,500 USD/month for a large plant (100 MGD, or 375,000 m³/d), less efficient diffusers gave spans from 18,000 USD/month for a small plant to 90,000 USD/month for large plants. The aim of this methodology is to help utilities gain more insight into process mechanisms and design better energy efficiency strategies at existing facilities to reduce energy consumption. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Effect of hydrothermal liquefaction aqueous phase recycling on bio-crude yields and composition.

    PubMed

    Biller, Patrick; Madsen, René B; Klemmer, Maika; Becker, Jacob; Iversen, Bo B; Glasius, Marianne

    2016-11-01

    Hydrothermal liquefaction (HTL) is a promising thermo-chemical processing technology for the production of biofuels, but it produces large amounts of process water. Therefore, recirculation of process water from HTL of dried distillers grains with solubles (DDGS) was investigated. Two sets of recirculation runs on a continuous reactor system using K2CO3 as catalyst were carried out. Following this, the process water was recirculated in batch experiments for a total of 10 rounds. To assess the effect of the alkali catalyst, non-catalytic HTL process water recycling was performed with 9 recycle rounds. Both sets of experiments showed a large increase in bio-crude yields, from approximately 35 to 55 wt%. The water phase and bio-crude samples from all experiments were analysed via quantitative gas chromatography-mass spectrometry (GC-MS) to investigate their composition and the build-up of organic compounds. Overall the results show an increase in HTL conversion efficiency and a lower-volume, more concentrated aqueous by-product following recycling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. The latest developments and outlook for hydrogen liquefaction technology

    NASA Astrophysics Data System (ADS)

    Ohlig, K.; Decker, L.

    2014-01-01

    Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, their demand may see a significant boost in the next years, with the need for large-scale liquefaction plants far exceeding current plant sizes. Hydrogen liquefaction for small-scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large-scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require only short qualification procedures, and which will hence allow implementation in plants in the near future.

  5. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated scalable high-performance architecture for event filtering that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
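
    A toy sketch of subscription-based event filtering of the kind described above (all names hypothetical): consumers register predicates, and only matching events are forwarded, so filtering happens near the source rather than at the management application.

    ```python
    from dataclasses import dataclass

    # Toy subscription-based event filter: an event is forwarded only when a
    # registered predicate matches, so uninteresting events are dropped near
    # the source instead of flooding the management applications.
    @dataclass
    class Event:
        source: str
        kind: str
        value: float

    class FilterNode:
        def __init__(self):
            self.subs = []  # list of (predicate, handler) pairs

        def subscribe(self, predicate, handler):
            self.subs.append((predicate, handler))

        def publish(self, event):
            for predicate, handler in self.subs:
                if predicate(event):          # the filtering step
                    handler(event)

    node = FilterNode()
    node.subscribe(lambda e: e.kind == "latency" and e.value > 100.0,
                   lambda e: print("alert:", e.source, e.value))
    node.publish(Event("node-7", "latency", 140.0))   # forwarded
    node.publish(Event("node-7", "latency", 12.0))    # filtered out
    ```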

  6. Analyzing big data with the hybrid interval regression methods.

    PubMed

    Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most important issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is ambiguous.
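
    A minimal sketch of the smoothing idea behind SSVM (a standard formulation from the SSVM literature, not code from this paper): the non-smooth hinge plus-function is replaced by a smooth approximation so that the SVM objective becomes twice differentiable and Newton-type solvers apply.

    ```python
    import numpy as np

    # The plus function (x)_+ = max(x, 0) in the SVM objective is replaced
    # by a smooth approximation p(x, beta), making the objective twice
    # differentiable so fast Newton-type solvers can be used.
    def plus(x):
        return np.maximum(x, 0.0)

    def smooth_plus(x, beta=5.0):
        # p(x, beta) = x + (1/beta) * log(1 + exp(-beta * x)); approaches
        # (x)_+ from above as beta grows.
        return x + np.log1p(np.exp(-beta * x)) / beta

    x = np.linspace(-2.0, 2.0, 5)
    print(plus(x))         # [0. 0. 0. 1. 2.]
    print(smooth_plus(x))  # close to the above, but differentiable everywhere
    ```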

  7. Analyzing Big Data with the Hybrid Interval Regression Methods

    PubMed Central

    Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most important issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to modify the excursion of the separation margin and to remain effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is ambiguous. PMID:25143968

  8. Influence of forces acting on side of machine on precision machining of large diameter holes

    NASA Astrophysics Data System (ADS)

    Fedorenko, M. A.; Bondarenko, J. A.; Sanina, T. M.

    2018-03-01

    Among the most important factors that increase the efficiency, durability and reliability of rotating units are precision installation, preventive maintenance, and the timely replacement of failed or worn components and assemblies. These works should be carried out during the operation of the equipment, as downtime in many cases leads to large financial losses. Stopping one unit of an industrial enterprise can interrupt the technological chain of production, potentially bringing all of the equipment to a halt. Improving and optimizing the repair process increases the accuracy of installation work when installing equipment, and conducting restoration under operating conditions is relevant for enterprises of different industries because it eliminates dismantling the equipment, sending it for maintenance, waiting for its return, and reinstalling it with the required quality and accuracy of repair.

  9. Efficient structure from motion on large scenes using UAV with position and pose information

    NASA Astrophysics Data System (ADS)

    Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang

    2018-04-01

    In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large-scene reconstruction from images acquired by unmanned aerial vehicles. We utilize weak pose information and the intrinsic parameters to obtain the projection matrix for each view. Since topographic relief can usually be ignored compared with the flight altitude of unmanned aerial vehicles, we assume that the scene is flat and use a weak perspective camera to get projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs between projectively transformed views. A robust global structure-from-motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
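
    Under the flat-scene assumption described above, a projective transform between two views can be composed from the pose priors alone; the sketch below uses hypothetical intrinsics and poses, and a simple corner-warping overlap test that is a simplification of the paper's criterion.

    ```python
    import numpy as np

    # Flat-scene (z = 0) projective transform between two UAV views built
    # from pose priors alone. K and the poses are hypothetical values.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 480.0],
                  [   0.0,    0.0,   1.0]])

    def plane_homography(R, t):
        # Maps ground-plane points (X, Y, 1) to pixels for extrinsics [R | t]:
        # only columns r1, r2 of R and the translation t enter when z = 0.
        return K @ np.column_stack((R[:, 0], R[:, 1], t))

    # Two hypothetical nadir views, 100 m above the ground, 15 m apart
    R1, t1 = np.eye(3), np.array([0.0, 0.0, 100.0])
    R2, t2 = np.eye(3), np.array([15.0, 0.0, 100.0])

    H12 = plane_homography(R2, t2) @ np.linalg.inv(plane_homography(R1, t1))
    H12 /= H12[2, 2]                  # view-1 pixels -> view-2 pixels

    # Overlap test: warp view-1 image corners into view-2 and check how much
    # of the second frame they cover before attempting feature matching.
    corners = np.array([[0, 0, 1], [1280, 0, 1],
                        [1280, 960, 1], [0, 960, 1]], float).T
    warped = H12 @ corners
    warped /= warped[2]
    print(warped[:2].T)               # warped corner coordinates in view 2
    ```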

  10. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require fast turnaround when processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional on-premise hardware procurement is already limited by facility capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments: at large cloud scales, scaling and cost issues must be addressed. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth science data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were used to run processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but provides an unpredictable computing environment driven by market forces.
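
    As one concrete illustration of the spot-market approach, the sketch below requests a batch of spot workers through the AWS EC2 API via boto3. This is not the HySDS code; the region, AMI ID, instance type, key name and bid price are placeholders, and a production system must also handle spot interruption notices and re-queue interrupted jobs.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-west-2")
      response = ec2.request_spot_instances(
          SpotPrice="0.25",                # maximum hourly bid in USD (placeholder)
          InstanceCount=10,                # burst out a batch of workers
          Type="one-time",
          LaunchSpecification={
              "ImageId": "ami-0123456789abcdef0",  # hypothetical worker image
              "InstanceType": "c5.4xlarge",
              "KeyName": "my-keypair",
          },
      )
      for req in response["SpotInstanceRequests"]:
          print(req["SpotInstanceRequestId"], req["State"])

      # Because spot workers can vanish when the market price rises, jobs
      # should be idempotent and re-queued on interruption rather than
      # assumed to run to completion.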

  11. Maximizing coupling-efficiency of high-power diode lasers utilizing hybrid assembly technology

    NASA Astrophysics Data System (ADS)

    Zontar, D.; Dogan, M.; Fulghum, S.; Müller, T.; Haag, S.; Brecher, C.

    2015-03-01

    In this paper, we present hybrid assembly technology to maximize coupling efficiency for spatially combined laser systems. High-quality components, such as center-turned focusing units, as well as suitable assembly strategies, are necessary to obtain the highest possible output ratios. Alignment strategies are challenging tasks due to their complexity and sensitivity. Especially in low-volume production, fully automated systems are at an economic disadvantage, while operator experience is often expensive; however, the reproducibility and quality of automatically assembled systems can be superior. Therefore, automated and manual assembly techniques are combined to obtain high coupling efficiency while preserving maximum flexibility. The paper describes the equipment and software needed to enable hybrid assembly processes. Micromanipulator technology with high step resolution and six degrees of freedom provides a large number of possible evaluation points. Automated algorithms are necessary to speed up data gathering and alignment and to efficiently exploit the available granularity in manual assembly processes. Furthermore, an engineering environment is presented that enables rapid prototyping of automation tasks with simultaneous data evaluation. Integration with simulation environments, e.g. Zemax, allows assembly strategies to be verified in advance. Data-driven decision making ensures consistently high quality, documents the assembly process and forms a basis for further improvement. The hybrid assembly technology has been applied in several applications, achieving coupling efficiencies above 80%, and is discussed in this paper with a focus on hybrid automation for optimizing and attaching turning mirrors and collimation lenses.
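
    A minimal sketch of the automated alignment step described above: a coordinate hill-climb over micromanipulator axes that maximizes the measured coupled power, refining the search step as the peak is approached. The Gaussian coupling model stands in for the real photodiode feedback, and all positions and values are purely illustrative, not the authors' algorithm.

      import numpy as np

      def coupled_power(pos, optimum=np.array([12.0, -3.0, 7.0]), width=5.0):
          # Placeholder for the instrument reading; peaks at the true optimum.
          return np.exp(-np.sum((pos - optimum) ** 2) / (2 * width ** 2))

      def align(pos, step=2.0, min_step=0.01):
          best = coupled_power(pos)
          while step > min_step:
              improved = False
              for axis in range(pos.size):
                  for direction in (+1.0, -1.0):
                      trial = pos.copy()
                      trial[axis] += direction * step
                      p = coupled_power(trial)
                      if p > best:
                          pos, best, improved = trial, p, True
              if not improved:
                  step /= 2.0      # refine the search around the current peak
          return pos, best

      pos, eff = align(np.zeros(3))
      print("aligned position:", pos, "relative coupling:", round(eff, 4))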

  12. Scalable graphene production from ethanol decomposition by microwave argon plasma torch

    NASA Astrophysics Data System (ADS)

    Melero, C.; Rincón, R.; Muñoz, J.; Zhang, G.; Sun, S.; Perez, A.; Royuela, O.; González-Gago, C.; Calzada, M. D.

    2018-01-01

    A fast, efficient and simple method is presented for producing high-quality graphene on a large scale using an atmospheric-pressure plasma-based technique. The technique yields high-quality graphene powder in a single step, without using either metal catalysts or a specific substrate during the process. Moreover, the cost of graphene production is significantly reduced, since the ethanol used as the carbon source can be obtained from fermentation in agricultural industries. The process thus provides the additional benefit of revalorizing waste into a high-value-added product such as graphene. This work demonstrates the capability of plasma technology as a low-cost, efficient, clean and environmentally friendly route for the production of high-quality graphene.

  13. Fiber-fed time-resolved photoluminescence for reduced process feedback time on thin-film photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Repins, I. L.; Egaas, B.; Mansfield, L. M.

    2015-01-15

    Fiber-fed time-resolved photoluminescence is demonstrated as a tool for immediate process feedback after deposition of the absorber layer for CuInxGa1-xSe2 and Cu2ZnSnSe4 photovoltaic devices. The technique uses a simplified configuration compared with typical laboratory time-resolved photoluminescence in the delivery of the exciting beam, the signal collection, and the electronic components. Correlation of the instrument output with completed device efficiency is demonstrated over a large sample set. The extraction of the instrument figure of merit, which depends on both the initial luminescence intensity and its time decay, is explained and justified. Limitations in the prediction of device efficiency by this method, including surface effects, are demonstrated and discussed.
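
    The figure of merit above depends on the initial luminescence intensity and its time decay; the sketch below shows one plausible way to extract those two quantities from a TRPL trace by fitting a single-exponential decay. The decay model, the synthetic data and the product-style figure of merit are our assumptions for illustration, not the instrument's actual definition.

      import numpy as np
      from scipy.optimize import curve_fit

      def single_exp(t, i0, tau, bg):
          # I(t) = I0 * exp(-t / tau) + background
          return i0 * np.exp(-t / tau) + bg

      t_ns = np.linspace(0, 200, 400)                     # time axis in ns
      rng = np.random.default_rng(1)
      counts = rng.poisson(single_exp(t_ns, 1000.0, 35.0, 5.0))  # shot-noise trace

      (i0, tau, bg), _ = curve_fit(single_exp, t_ns, counts, p0=(500.0, 20.0, 0.0))
      figure_of_merit = i0 * tau   # hypothetical combination of intensity and decay
      print(f"I0 = {i0:.0f} counts, tau = {tau:.1f} ns, FOM = {figure_of_merit:.0f}")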

  14. Foster Wheeler's Solutions for Large Scale CFB Boiler Technology: Features and Operational Performance of Łagisza 460 MWe CFB Boiler

    NASA Astrophysics Data System (ADS)

    Hotta, Arto

    In recent years, once-through supercritical (OTSC) CFB technology has been developed, enabling CFB technology to move into medium-scale (500 MWe) utility projects such as the Łagisza Power Plant in Poland, owned by Poludniowy Koncern Energetyczny SA (PKE), with a net efficiency of nearly 44%. The Łagisza power plant is currently being commissioned and reached full-load operation in March 2009. Initial operation shows very good performance and confirms that the CFB process scales to this size without problems. The once-through steam cycle, using Siemens' vertical-tube Benson technology, has also performed as predicted in the CFB process. Foster Wheeler has since developed the CFB design further, up to 800 MWe with a net efficiency of ≥45%.

  15. More effective wet turboexpander for the Nuclotron helium refrigerators

    NASA Astrophysics Data System (ADS)

    Agapov, N. N.; Batin, V. I.; Davydov, A. B.; Khodzhibagian, H. G.; Kovalenko, A. D.; Perestoronin, G. A.; Sergeev, I. I.; Stulov, V. L.; Udut, V. N.

    2002-05-01

    In order to raise the efficiency of cryogenic refrigerators and liquefiers, it is very important to replace the JT (Joule-Thomson) process, which involves large exergy losses, with the more efficient process of adiabatic expansion. This paper presents test results for the second-generation wet turboexpander for the Nuclotron helium refrigerators. The rotor is mounted vertically on a combination of gas and hydrostatic oil bearings. The turbines are capable of operating at a speed of 300,000 revolutions per minute; the power generated by the turbine is dissipated as friction in the oil bearings. The new wet turboexpander was designed for the specific conditions that arise during operation at liquid-helium temperature. The application of this new expansion machine increases the efficiency of the Nuclotron helium refrigerators by 25%.

  16. Enhanced diffusion on oscillating surfaces through synchronization

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Cao, Wei; Ma, Ming; Zheng, Quanshui

    2018-02-01

    The diffusion of molecules and clusters under nanoscale confinement or adsorbed on surfaces is the key controlling factor in dynamical processes such as transport, chemical reaction and filtration. Enhancing diffusion could benefit these processes by increasing their transport efficiency. Using a nonlinear Langevin equation and an extensive set of simulations, we find a large enhancement of diffusion due to surface oscillation. For helium confined in a narrow carbon nanotube, the diffusion enhancement is estimated to exceed three orders of magnitude. A synchronization mechanism between the kinetics of the particles and the oscillating surface is revealed. Interestingly, a strongly nonlinear negative correlation between the diffusion coefficient and temperature is predicted from this mechanism and further validated by simulations. Our results provide a general and efficient method for enhancing diffusion, especially at low temperatures.
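
    A minimal sketch of the kind of numerical experiment the abstract describes: an overdamped Langevin particle on a spatially periodic substrate whose lattice oscillates laterally in time, with the diffusion coefficient read off the long-time mean-squared displacement. All parameter values are illustrative (reduced units), not those of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n_part, n_steps, dt = 500, 50_000, 1e-3
      gamma, kT = 1.0, 0.1                 # friction and thermal energy
      u0, k = 1.0, 2 * np.pi               # corrugation amplitude and wavenumber
      amp, omega = 0.3, 5.0                # surface oscillation amplitude/frequency

      x = np.zeros(n_part)
      noise_scale = np.sqrt(2 * kT * dt / gamma)
      for step in range(n_steps):
          t = step * dt
          # Force from U(x, t) = u0 * cos(k * (x - amp * sin(omega * t))):
          force = u0 * k * np.sin(k * (x - amp * np.sin(omega * t)))
          x += force / gamma * dt + noise_scale * rng.standard_normal(n_part)

      # 1D Einstein relation: MSD ~ 2 D t at long times.
      D = np.mean(x ** 2) / (2 * n_steps * dt)
      print(f"estimated diffusion coefficient: {D:.4f}")

      # Scanning omega against the static-surface case (amp = 0) exposes the
      # synchronization-enhanced diffusion the authors report.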

  17. An integrated approach to realizing high-performance liquid-junction quantum dot sensitized solar cells

    PubMed Central

    McDaniel, Hunter; Fuke, Nobuhiro; Makarov, Nikolay S.; Pietryga, Jeffrey M.; Klimov, Victor I.

    2013-01-01

    Solution-processed semiconductor quantum dot solar cells offer a path towards both reduced fabrication cost and higher efficiency, enabled by novel processes such as hot-electron extraction and carrier multiplication. Here we use a new class of low-cost, low-toxicity CuInSexS2−x quantum dots to demonstrate sensitized solar cells with certified efficiencies exceeding 5%. Among other material and device design improvements studied, the use of a methanol-based polysulfide electrolyte results in a particularly dramatic enhancement in photocurrent and reduced series resistance. Despite the high vapour pressure of methanol, the solar cells are stable for months under ambient conditions, much longer than for any previously reported quantum dot sensitized solar cell. This study demonstrates the large potential of CuInSexS2−x quantum dots as active materials for the realization of low-cost, robust and efficient photovoltaics, as well as a platform for investigating advanced concepts derived from the unique physics of the nanoscale size regime. PMID:24322379

  18. A simple method for decomposition of peracetic acid in a microalgal cultivation system.

    PubMed

    Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won

    2015-03-01

    A cost-efficient process that avoids several washing steps was developed, in which cultivation directly follows decomposition of the sterilizer. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization with 2 mM PAA demands at least 1 h of incubation for effective disinfection. Direct degradation of PAA was demonstrated using components of a conventional algal medium: ferric ion and the pH buffer (HEPES) showed a synergistic effect, decomposing PAA within 6 h, whereas NaNO3, one of the main components of algal media, inhibits the decomposition of PAA. Improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in BG11 medium prepared by decomposition of PAA. This process, combining sterilization and decomposition of PAA, should support the cost-efficient management of large-scale photobioreactors for the production of value-added products and biofuels from microalgal biomass.
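
    A back-of-the-envelope sketch of the timescale involved: assuming first-order decay, how fast must the 2 mM PAA dose decompose to be essentially gone within the 6 h window reported above? The first-order kinetic model and the 99%-loss threshold are our assumptions for illustration, not the paper's measured kinetics.

      import numpy as np

      c0 = 2.0                      # initial PAA concentration, mM
      t_window = 6.0                # observed decomposition window, h
      residual_fraction = 0.01      # "decomposed" taken as 99% loss (assumption)

      k = -np.log(residual_fraction) / t_window   # first-order rate constant, 1/h
      half_life = np.log(2) / k
      print(f"k = {k:.2f} 1/h, half-life = {half_life:.2f} h")
      print("residual PAA after 6 h:", c0 * np.exp(-k * t_window), "mM")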

  19. Water-Soluble Polymeric Interfacial Material for Planar Perovskite Solar Cells.

    PubMed

    Zheng, Lingling; Ma, Yingzhuang; Xiao, Lixin; Zhang, Fengyan; Wang, Yuanhao; Yang, Hongxing

    2017-04-26

    Interfacial materials play a critical role in the photoelectric conversion properties as well as the anomalous hysteresis of perovskite solar cells (PSCs). In this article, a water-soluble polythiophene, PTEBS, was employed as a cathode interfacial material for PSCs. Efficient energy-level alignment and improved film morphology were obtained with an ultrathin coating of PTEBS. Better ohmic contact between the perovskite layer and the cathode also benefits charge transport and extraction in the device. Moreover, reduced charge accumulation at the interface weakens polarization of the perovskite, resulting in a relatively fast response of the modified device. ITO/PTEBS/CH3NH3PbI3/spiro-MeOTAD/Au cells made by an all-low-temperature process achieved power conversion efficiencies of up to 15.4% without an apparent hysteresis effect. Consequently, this water-soluble polythiophene offers a practical approach to fabricating highly efficient, large-area, low-cost PSCs that is compatible with low-temperature solution processing, roll-to-roll manufacture, and flexible applications.

  20. Defluoridation potential of jute fibers grafted with fatty acyl chain

    NASA Astrophysics Data System (ADS)

    Manna, Suvendu; Saha, Prosenjit; Roy, Debasis; Sen, Ramkrishna; Adhikari, Basudam

    2015-11-01

    Waterborne fluoride is usually removed from water by coagulation, adsorption, ion exchange, electrodialysis or reverse osmosis. These processes are often effective only over narrow pH ranges, release ions considered hazardous to human health, or produce large volumes of toxic sludge that are difficult to handle and dispose of. Although plant matter has been shown to remove waterborne fluoride, it suffers from poor removal efficiency. Following from the insight that the interaction between microbial carbohydrate biopolymers and anionic surfaces is often facilitated by lipids, an attempt has been made to enhance the fluoride adsorption efficiency of jute by grafting the lignocellulosic fiber with fatty acyl chains found in vegetable oils. The fluoride removal efficiency of grafted jute was found to be comparable to or higher than that of alternative defluoridation processes. Infrared and X-ray photoelectron spectroscopic evidence indicated that hydrogen bonding, protonation and C-F bonding were responsible for fluoride accumulation on grafted jute. Adsorption on grafted jute fibers appears to be an economical, sustainable and eco-friendly alternative technique for removing waterborne fluoride.
