Sample records for efficient cost-effective large-scale

  1. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    NASA Astrophysics Data System (ADS)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling, such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
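The ranking logic behind a cost abatement curve can be sketched in a few lines: sort measures by cost per unit saved, then accumulate savings. The measure names and cost/savings figures below are hypothetical placeholders, not values from the study.

```python
measures = [
    # (name, annualized net cost in $/yr, energy saved in kWh/yr)
    ("low-flow showerhead",    -15.0,  300.0),   # negative cost = net savings
    ("efficient dishwasher",     5.0,  150.0),
    ("solar water heater",     120.0, 1800.0),
    ("rooftop PV",             400.0, 4000.0),
]

def abatement_curve(measures):
    """Sort measures by cost per kWh saved and accumulate total savings."""
    ranked = sorted(measures, key=lambda m: m[1] / m[2])
    curve, cumulative = [], 0.0
    for name, cost, saved in ranked:
        cumulative += saved
        curve.append((name, cost / saved, cumulative))
    return curve

curve = abatement_curve(measures)
# reading the curve left to right gives the cheapest path to a savings target
```

Plotting cumulative savings on the x-axis against cost per kWh on the y-axis yields the familiar staircase-shaped abatement curve.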

  2. ResStock Analysis Tool | Buildings | NREL

    Science.gov Websites

    Energy and Cost Savings for U.S. Homes. Contact Eric Wilson to learn how ResStock can benefit your approach to large-scale residential energy analysis by combining large public and private data sources. ResStock analysis uncovered $49 billion in potential annual utility bill savings through cost-effective energy efficiency.

  3. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. It also benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale-invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy that finds target features via cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
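The efficiency argument in the abstract, that similarity between binary codes reduces to a Hamming distance, can be illustrated directly. The toy 8-bit codes below stand in for much longer binary SIFT descriptors; this is not the paper's FSB algorithm, only the distance computation it relies on.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as ints:
    one XOR plus a population count."""
    return bin(a ^ b).count("1")

# toy 8-bit codes standing in for long binary descriptors
q  = 0b10110100   # query code
d1 = 0b10110110   # differs in 1 bit -> likely a near-duplicate
d2 = 0b01001011   # differs in all 8 bits -> unrelated
```

On real hardware this maps to a single XOR and POPCNT instruction per machine word, which is why binary codes scale to web-sized collections.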

  4. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  5. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key computation and storage optimization strategies based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  6. A low-cost iron-cadmium redox flow battery for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.

    2016-10-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm-2, respectively. The Fe/Cd RFB exhibits stable efficiencies with capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of $108 kWh-1 for 8-h energy storage. Intrinsically low-cost active materials, high cell performance, and excellent capacity retention make the Fe/Cd RFB a promising solution for large-scale energy storage systems.

  7. An Analysis of the Cost and Performance of Photovoltaic Systems as a Function of Module Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horowitz, Kelsey A.W.; Fu, Ran; Silverman, Tim

    We investigate the potential effects of module area on the cost and performance of photovoltaic systems. Applying a bottom-up methodology, we analyzed the costs associated with mc-Si and thin-film modules and systems as a function of module area. We calculate a potential for savings of up to $0.04/W, $0.10/W, and $0.13/W in module manufacturing costs for mc-Si, CdTe, and CIGS, respectively, with large area modules. We also find that an additional $0.05/W savings in balance-of-systems costs may be achieved. However, these savings are dependent on the ability to maintain efficiency and manufacturing yield as area scales. Lifetime energy yield must also be maintained to realize reductions in the levelized cost of energy. We explore the possible effects of module size on efficiency and energy production, and find that more research is required to understand these issues for each technology. Sensitivity of the $/W cost savings to module efficiency and manufacturing yield is presented. We also discuss non-cost barriers to adoption of large area modules.

  8. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements can produce a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease reconstruction time and memory usage while retaining image quality. First, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix to a sparse format, the zero elements are eliminated, which reduces the memory requirement. Second, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on reconstruction results.
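The thresholding step described above can be sketched with NumPy/SciPy. The matrix size, threshold value, and synthetic Jacobian below are illustrative assumptions, not the paper's actual data or parameters.

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparsify_jacobian(J: np.ndarray, threshold: float) -> csr_matrix:
    """Set |J_ij| < threshold to zero and store the result in CSR sparse
    format, so the zeroed elements cost no memory."""
    Jt = np.where(np.abs(J) < threshold, 0.0, J)
    return csr_matrix(Jt)

rng = np.random.default_rng(0)
J = rng.normal(scale=1e-3, size=(200, 500))  # toy dense Jacobian
J[0, 0] = 0.5                                # one dominant sensitivity entry
Js = sparsify_jacobian(J, threshold=1e-2)    # only significant entries survive
```

In a real EIT solver the threshold trades memory against reconstruction accuracy, which is why the paper reports image quality measures alongside the memory savings.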

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcellini, P.; Pless, S.; Lobato, C.

    Until recently, large-scale, cost-effective net-zero energy buildings (NZEBs) were thought to lie decades in the future. However, ongoing work at the National Renewable Energy Laboratory (NREL) indicates that NZEB status is both achievable and repeatable today. This paper presents a definition framework for classifying NZEBs and a real-life example that demonstrates how a large-scale office building can cost-effectively achieve net-zero energy. The vision of NZEBs is compelling. In theory, these highly energy-efficient buildings will produce, during a typical year, enough renewable energy to offset the energy they consume from the grid. The NREL NZEB definition framework classifies NZEBs according to the criteria being used to judge net-zero status and the way renewable energy is supplied to achieve that status. We use the new U.S. Department of Energy/NREL 220,000-ft² Research Support Facilities (RSF) building to illustrate why a clear picture of NZEB definitions is important and how the framework provides a methodology for creating a cost-effective NZEB. The RSF, scheduled to open in June 2010, includes contractual commitments to deliver a Leadership in Energy and Environmental Design (LEED) Platinum Rating, an energy use intensity of 25 kBtu/ft² (half that of a typical LEED Platinum office building), and net-zero energy status. We will discuss the analysis method and cost tradeoffs that were performed throughout the design and build phases to meet these commitments and maintain construction costs at $259/ft². We will discuss ways to achieve large-scale, replicable NZEB performance. Many passive and renewable energy strategies are utilized, including full daylighting, high-performance lighting, natural ventilation through operable windows, thermal mass, transpired solar collectors, radiant heating and cooling, and workstation configurations that allow for maximum daylighting.

  10. Automated array assembly, phase 2

    NASA Technical Reports Server (NTRS)

    Daiello, R. V.

    1979-01-01

    A manufacturing process suitable for the large-scale production of silicon solar array modules at a cost of less than $500/peak kW is described. Factors which control the efficiency of ion implanted silicon solar cells, screen-printed thick film metallization, spray-on antireflection coating process, and panel assembly are discussed. Conclusions regarding technological readiness or cost effectiveness of individual process steps are presented.

  11. High-Efficiency, Multijunction Solar Cells for Large-Scale Solar Electricity Generation

    NASA Astrophysics Data System (ADS)

    Kurtz, Sarah

    2006-03-01

    A solar cell with an infinite number of materials (matched to the solar spectrum) has a theoretical efficiency limit of 68%. If sunlight is concentrated, this limit increases to about 87%. These theoretical limits are calculated using basic physics and are independent of the details of the materials. In practice, the challenge of achieving high efficiency depends on identifying materials that can effectively use the solar spectrum. Impressive progress has been made, with the current efficiency record being 39%. Today's solar market is also showing impressive progress, but is still hindered by high prices. One strategy for reducing cost is to use lenses or mirrors to focus the light on small solar cells. In this case, the system cost is dominated by the cost of the relatively inexpensive optics. The value of the optics increases with the efficiency of the solar cell. Thus, a concentrator system made with 35%-40%-efficient solar cells is expected to deliver 50% more power at a similar cost when compared with a system using 25%-efficient cells. Today's markets are showing an opportunity for large concentrator systems that didn't exist 5-10 years ago. Efficiencies may soon pass 40% and ultimately may reach 50%, providing a pathway to improved performance and decreased cost. Many companies are currently investigating this technology for large-scale electricity generation. The presentation will cover the basic physics and more practical considerations to achieving high efficiency as well as describing the current status of the concentrator industry. This work has been authored by an employee of the Midwest Research Institute under Contract No. DE-AC36-99GO10337 with the U.S. Department of Energy.
The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for United States Government purposes.

  12. Laying a Solid Foundation: Strategies for Effective Program Replication

    ERIC Educational Resources Information Center

    Summerville, Geri

    2009-01-01

    The replication of proven social programs is a cost-effective and efficient way to achieve large-scale, positive social change. Yet there has been little guidance available about how to approach program replication and limited development of systems--at local, state or federal levels--to support replication efforts. "Laying a Solid Foundation:…

  13. Multi-Scale Ordered Cell Structure for Cost Effective Production of Hydrogen by HTWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elangovan, Elango; Rao, Ranjeet; Colella, Whitney

    Production of hydrogen using an electrochemical device provides for large scale, high efficiency conversion and storage of electrical energy. When renewable electricity is used for conversion of steam to hydrogen, a low-cost and low emissions pathway to hydrogen production emerges. This project was intended to demonstrate a high efficiency High Temperature Water Splitting (HTWS) stack for the electrochemical production of low cost H2. The innovations investigated address the limitations of the state of the art through the use of a novel architecture that introduces macro-features to provide mechanical support of a thin electrolyte, and micro-features of the electrodes to lower polarization losses. The approach also utilizes a combination of unique sets of fabrication options that are scalable to achieve manufacturing cost objectives. The development of the HTWS process and device is guided by techno-economic and life cycle analyses.

  14. A Single-use Strategy to Enable Manufacturing of Affordable Biologics.

    PubMed

    Jacquemart, Renaud; Vandersluis, Melissa; Zhao, Mochao; Sukhija, Karan; Sidhu, Navneet; Stout, Jim

    2016-01-01

    The current processing paradigm of large manufacturing facilities dedicated to single product production is no longer an effective approach for best manufacturing practices. Increasing competition for new indications and the launch of biosimilars for the monoclonal antibody market have put pressure on manufacturers to produce at lower cost. Single-use technologies and continuous upstream processes have proven to be cost-efficient options to increase biomass production but as of today the adoption has been only minimal for the purification operations, partly due to concerns related to cost and scale-up. This review summarizes how a single-use holistic process and facility strategy can overcome scale limitations and enable cost-efficient manufacturing to support the growing demand for affordable biologics. Technologies enabling high productivity, right-sized, small footprint, continuous, and automated upstream and downstream operations are evaluated in order to propose a concept for the flexible facility of the future.

  15. Study of Potential Cost Reductions Resulting from Super-Large-Scale Manufacturing of PV Modules: Final Subcontract Report, 7 August 2003--30 September 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keshner, M. S.; Arya, R.

    2004-10-01

    Hewlett Packard has created a design for a "Solar City" factory that will process 30 million sq. meters of glass panels per year and produce 2.1-3.6 GW of solar panels per year (100x the volume of a typical thin-film solar panel manufacturer in 2004). We have shown that, with a reasonable selection of materials and conservative assumptions, this "Solar City" can produce solar panels and hit the price target of $1.00 per peak watt (6.5x-8.5x lower than prices in 2004) as the total price for a complete and installed rooftop (or ground-mounted) solar energy system. This breakthrough in the price of solar energy comes without the need for any significant new invention. It comes entirely from the manufacturing scale of a large plant and the cost savings inherent in operating at such a large manufacturing scale. We expect that further optimizations from these simple designs will lead to further improvements in cost. The manufacturing process and cost depend on the choice of the active layer that converts sunlight into electricity. The efficiency by which sunlight is converted into electricity can range from 7% to 15%. This parameter has a large effect on the overall price per watt. There are other impacts as well, and we have attempted to capture them without creating undue distractions. Our primary purpose is to demonstrate the impact of large-scale manufacturing. This impact is largely independent of the choice of active layer. It is not our purpose to compare the pros and cons of various types of active layers. Significant improvements in cost per watt can also come from scientific advances in active layers that lead to higher efficiency. But, again, our focus is on manufacturing gains and not on potential advances in the basic technology.

  16. Architectural and Mobility Management Designs in Internet-Based Infrastructure Wireless Mesh Networks

    ERIC Educational Resources Information Center

    Zhao, Weiyi

    2011-01-01

    Wireless mesh networks (WMNs) have recently emerged to be a cost-effective solution to support large-scale wireless Internet access. They have numerous applications, such as broadband Internet access, building automation, and intelligent transportation systems. One research challenge for Internet-based WMNs is to design efficient mobility…

  17. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    PubMed

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature, short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature, short-time annealing at 400 °C for 4 s, which drives fast solvent evaporation, a perovskite film with an average domain size of 1 μm was obtained. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.

  18. Cost effects of hospital mergers in Portugal.

    PubMed

    Azevedo, Helda; Mateus, Céu

    2014-12-01

    The Portuguese hospital sector has been restructured by wide-ranging hospital mergers, following a conviction among policy makers that bigger hospitals lead to lower average costs. Since the effects of mergers have not been systematically evaluated, the purpose of this article is to contribute to this area of knowledge by assessing potential economies of scale and comparing these results with realized cost savings after mergers. Considering the period 2003-2009, we estimate a translog cost function to examine economies of scale in the years preceding restructuring. Additionally, we use the difference-in-differences approach to evaluate hospital centres (HCs) formed between 2004 and 2007, comparing the years after and before mergers. Our findings suggest that economies of scale were present in the pre-merger configuration, with an optimum hospital size of around 230 beds. However, the mergers between two or more hospitals led to statistically significant post-merger cost increases of about 8%. This result indicates that some HCs became too large to exploit economies of scale, and suggests the difficulty of achieving efficiencies through combining operations and service specialization.
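The difference-in-differences comparison used above has a simple arithmetic core: subtract the control group's cost change from the merged group's cost change. The numbers below are hypothetical, chosen only to illustrate the estimator, and are not the study's data.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD effect = (treated post - treated pre) - (control post - control pre).
    The control trend stands in for what would have happened without the merger."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# mean log unit costs before/after the merger wave (hypothetical values)
effect = did_estimate(treat_pre=4.60, treat_post=4.75,
                      ctrl_pre=4.58,  ctrl_post=4.65)
# a positive estimate indicates costs rose faster in merged hospitals
```

In practice this estimate comes from a regression with hospital and year fixed effects plus controls, but the identifying comparison is the one computed here.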

  19. Applicability of Zeolite Based Systems for Ammonia Removal and Recovery From Wastewater.

    PubMed

    Das, Pallabi; Prasad, Bably; Singh, Krishna Kant Kumar

    2017-09-01

    Ammonia discharged in industrial effluents has deleterious effects and necessitates remediation. Integrated systems devoted to recovering ammonia in a useful form while remediating it address the challenges of waste management and utilization. A comparative performance evaluation study was undertaken to assess the suitability of different zeolite-based systems (commercial zeolites and zeolites synthesized from fly ash) for removal of ammonia followed by its subsequent release. Four main parameters were studied to evaluate the applicability of such systems for large-scale usage: cost-effectiveness, ammonia removal efficiency, performance on regeneration, and ammonia release percentage. The results indicated that synthetic zeolites outperformed zeolites synthesized from fly ash, although the latter proved more efficient in terms of total cost incurred. Process technology development in this direction will be a trade-off between cost and ammonia removal and release efficiencies.

  20. Materials interface engineering for solution-processed photovoltaics.

    PubMed

    Graetzel, Michael; Janssen, René A J; Mitzi, David B; Sargent, Edward H

    2012-08-16

    Advances in solar photovoltaics are urgently needed to increase the performance and reduce the cost of harvesting solar power. Solution-processed photovoltaics are cost-effective to manufacture and offer the potential for physical flexibility. Rapid progress in their development has increased their solar-power conversion efficiencies. The nanometre (electron) and micrometre (photon) scale interfaces between the crystalline domains that make up solution-processed solar cells are crucial for efficient charge transport. These interfaces include large surface area junctions between photoelectron donors and acceptors, the intralayer grain boundaries within the absorber, and the interfaces between photoactive layers and the top and bottom contacts. Controlling the collection and minimizing the trapping of charge carriers at these boundaries is crucial to efficiency.

  1. Energy Efficiency Potential in the U.S. Single-Family Housing Stock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Eric J.; Christensen, Craig B.; Horowitz, Scott G.

    Typical approaches for assessing energy efficiency potential in buildings use a limited number of prototypes, and therefore suffer from inadequate resolution when pass-fail cost-effectiveness tests are applied, which can significantly underestimate or overestimate the economic potential of energy efficiency technologies. This analysis applies a new approach to large-scale residential energy analysis, combining the use of large public and private data sources, statistical sampling, detailed building simulations, and high-performance computing to achieve unprecedented granularity - and therefore accuracy - in modeling the diversity of the single-family housing stock. The result is a comprehensive set of maps, tables, and figures showing the technical and economic potential of 50-plus residential energy efficiency upgrades and packages for each state. Policymakers, program designers, and manufacturers can use these results to identify upgrades with the highest potential for cost-effective savings in a particular state or region, as well as help identify customer segments for targeted marketing and deployment. The primary finding of this analysis is that there is significant technical and economic potential to save electricity and on-site fuel use in the single-family housing stock. However, the economic potential is very sensitive to the cost-effectiveness criteria used for analysis. Additionally, the savings of particular energy efficiency upgrades are situation-specific within the housing stock (depending on climate, building vintage, heating fuel type, building physical characteristics, etc.).
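A pass-fail cost-effectiveness screen of the kind the abstract critiques can be sketched as a simple payback test. The upgrade costs, savings, and cutoff below are hypothetical, and real analyses use richer metrics (net present value, levelized cost of saved energy), but the sensitivity to the chosen criterion is the same.

```python
def simple_payback(upgrade_cost: float, annual_savings: float) -> float:
    """Years for bill savings to repay the upfront upgrade cost."""
    return upgrade_cost / annual_savings

def passes(upgrade_cost: float, annual_savings: float,
           cutoff_years: float = 10.0) -> bool:
    """Pass-fail screen: cost-effective iff payback beats the cutoff."""
    return simple_payback(upgrade_cost, annual_savings) <= cutoff_years

# the same attic-insulation upgrade in two different homes (hypothetical)
home_a = passes(1500.0, 300.0)   # 5-year payback: cost-effective
home_b = passes(1500.0, 100.0)   # 15-year payback: fails the screen
```

This is why prototype-based studies mislead: a single prototype assigns one payback to an upgrade, while a sampled housing stock yields a distribution of paybacks straddling the cutoff.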

  2. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational cost increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  3. [Effect of pilot UASB-SFSBR-MAP process for the large scale swine wastewater treatment].

    PubMed

    Wang, Liang; Chen, Chong-Jun; Chen, Ying-Xu; Wu, Wei-Xiang

    2013-03-01

    In this paper, a treatment process consisting of an upflow anaerobic sludge blanket (UASB), a step-fed sequencing batch reactor (SFSBR) and a magnesium ammonium phosphate precipitation reactor (MAP) was built to treat large-scale swine wastewater, aiming to overcome drawbacks of the conventional anaerobic-aerobic treatment process and the SBR treatment process, such as low denitrification efficiency, high operating costs and high nutrient losses. Based on this treatment process, a pilot-scale plant was constructed. The experimental results showed that the removal efficiencies of COD, NH4(+)-N and TP reached 95.1%, 92.7% and 88.8%, respectively; the recovery rates of NH4(+)-N and TP by the MAP process reached 23.9% and 83.8%; and the effluent quality was superior to the discharge standard of pollutants for livestock and poultry breeding (GB 18596-2001), with mass concentrations of COD, TN, NH4(+)-N, TP and SS not higher than 135, 116, 43, 7.3 and 50 mg x L(-1), respectively. The process developed was reliable, maintained self-balance of carbon source and alkalinity, and achieved high nutrient recovery efficiency, with an operating cost equal to that of the traditional anaerobic-aerobic treatment process. The treatment process therefore has high application and dissemination value and is suitable for the treatment of large-scale swine wastewater in China.

  4. Enterprise PACS and image distribution.

    PubMed

    Huang, H K

    2003-01-01

    Around the world, many large-scale healthcare enterprises have been formed to improve operational efficiency and deliver more cost-effective healthcare. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of PACS and image distribution as a key technology for cost-effective healthcare delivery at the enterprise level. As a result, many large-scale, enterprise-level PACS/image distribution efforts, from pilot studies to full design and implementation, are underway. The purpose of this paper is to provide readers an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution and its characteristics and ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One system currently under development, an enterprise-level chest tuberculosis (TB) screening system in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.

  5. Economic barriers to implementation of innovations in health care: is the long run-short run efficiency discrepancy a paradox?

    PubMed

    Adang, Eddy M M; Wensing, Michel

    2008-12-01

    Favourable cost-effectiveness of innovative technologies is increasingly a necessary condition for implementation in clinical practice. But proven cost-effectiveness itself does not guarantee successful implementation. The reason for this is a potential discrepancy between long-run efficiency, on which cost-effectiveness is based, and short-run efficiency. Both long-run and short-run efficiency depend on economies of scale. This paper addresses the potential discrepancy between the long-run and short-run efficiency of innovative technologies in healthcare, explores diseconomies of scale in Dutch hospitals, and suggests strategies that might help overcome the implementation hurdles arising from that discrepancy.

  6. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proven effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limited computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only the top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
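The bit-generation and cost-ranking idea in this abstract can be illustrated with a small sketch. Everything below is an illustrative assumption: the median thresholding, the label-disagreement cost, and all function names are hypothetical stand-ins for MCR's actual cost function, not the published algorithm.

```python
import random
import statistics

def binarize_by_median(X):
    """One bit per dimension: threshold each feature at its column median."""
    dims = len(X[0])
    med = [statistics.median(row[d] for row in X) for d in range(dims)]
    return [[1 if row[d] > med[d] else 0 for d in range(dims)] for row in X]

def bit_costs(bits, labels):
    """Illustrative per-bit cost: label-disagreement rate (lower cost =
    more discriminative). A stand-in for the paper's cost function."""
    n = len(bits)
    costs = []
    for d in range(len(bits[0])):
        agree = sum(1 for i in range(n) if bits[i][d] == labels[i])
        costs.append(min(agree, n - agree) / n)  # a flipped bit is just as useful
    return costs

def select_short_code(X, labels, code_length=8):
    """Rank bits by cost and keep only the top-ranked (minimum-cost) bits."""
    bits = binarize_by_median(X)
    costs = bit_costs(bits, labels)
    keep = sorted(range(len(costs)), key=costs.__getitem__)[:code_length]
    return [[row[d] for d in keep] for row in bits]

random.seed(0)
X = [[random.gauss(0, 1) for _ in range(32)] for _ in range(200)]
labels = [1 if row[0] > 0 else 0 for row in X]  # dimension 0 is discriminative
codes = select_short_code(X, labels, code_length=8)
print(len(codes), len(codes[0]))  # 200 8
```

The per-dimension ranking is what lets the code length shrink: bits that barely separate the data are dropped rather than padded into a longer code.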

  7. A rapid and cost-effective method for sequencing pooled cDNA clones by using a combination of transposon insertion and Gateway technology.

    PubMed

    Morozumi, Takeya; Toki, Daisuke; Eguchi-Ogawa, Tomoko; Uenishi, Hirohide

    2011-09-01

    Large-scale cDNA-sequencing projects require an efficient strategy for mass sequencing. Here we describe a method for sequencing pooled cDNA clones using a combination of transposon insertion and Gateway technology. Our method reduces the number of shotgun clones that are unsuitable for reconstruction of cDNA sequences, and has the advantage of reducing the total costs of the sequencing project.

  8. Vapor and healing treatment for CH3NH3PbI3-xClx films toward large-area perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Gouda, Laxman; Gottesman, Ronen; Tirosh, Shay; Haltzi, Eynav; Hu, Jiangang; Ginsburg, Adam; Keller, David A.; Bouhadana, Yaniv; Zaban, Arie

    2016-03-01

    Hybrid methyl-ammonium lead trihalide perovskites are promising low-cost materials for use in solar cells and other optoelectronic applications. With a certified photovoltaic conversion efficiency record of 20.1%, scale-up for commercial purposes is already underway. However, preparation of large-area perovskite films remains a challenge, and films of perovskites on large electrodes suffer from non-uniform performance. Thus, production and characterization of the lateral uniformity of large-area films is a crucial step towards scale-up of devices. In this paper, we present a reproducible method for improving the lateral uniformity and performance of large-area perovskite solar cells (32 cm2). The method is based on methyl-ammonium iodide (MAI) vapor treatment as a new step in the sequential deposition of perovskite films. Following the MAI vapor treatment, we used high-throughput techniques to map the photovoltaic performance throughout the large-area device. The lateral uniformity and performance of all photovoltaic parameters (Voc, Jsc, fill factor, photo-conversion efficiency) increased, with an overall improvement in photo-conversion efficiency of ~100% following a vapor treatment at 140 °C. Based on XRD and photoluminescence measurements, we propose that the MAI treatment promotes a "healing effect" in the perovskite film which increases the lateral uniformity across the large-area solar cell. Thus, the straightforward MAI vapor treatment is highly beneficial for large-scale commercialization of perovskite solar cells, regardless of the specific deposition method. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08658b

  9. Construction and Operation Costs of Wastewater Treatment and Implications for the Paper Industry in China.

    PubMed

    Niu, Kunyu; Wu, Jian; Yu, Fang; Guo, Jingli

    2016-11-15

    This paper aims to develop a construction and operation cost model of wastewater treatment for the paper industry in China and explores the main factors that determine these costs. Previous models mainly involved factors relating to the treatment scale and efficiency of treatment facilities for deriving the cost function. We considered the factors more comprehensively by adding a regional variable to represent the economic development level, a corporate ownership factor to represent the plant characteristics, a subsector variable to capture pollutant characteristics, and a detailed-classification technology variable. We applied a unique data set from a national pollution source census for the model simulation. The major findings include the following: (1) Wastewater treatment costs in the paper industry are determined by scale, technology, degree of treatment, ownership, and regional factors; (2) Wastewater treatment costs show a large decreasing scale effect; (3) The current level of pollutant discharge fees is far lower than the marginal treatment costs for meeting the wastewater discharge standard. Key implications are as follows: (1) Cost characteristics and impact factors should be fully recognized when planning or making policies relating to wastewater treatment projects or technology development; (2) There is potential to reduce treatment costs by centralizing wastewater treatment via industrial parks; (3) Wastewater discharge fee rates should be increased; (4) Energy efficient technology should become the future focus of wastewater treatment.

  10. Network placement optimization for large-scale distributed system

    NASA Astrophysics Data System (ADS)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy, and overall cost. Network placement optimization is therefore an urgent issue in distributed measurement, including large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy, and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search; an obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a given network and design the optimal placement efficiently.
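The grid-based encoding idea, in which a chromosome is simply a list of grid-cell indices so that no hand-crafted initial placement is required, can be sketched as follows. The grid size, coverage radius, coverage-only fitness, and GA parameters here are illustrative assumptions, not the paper's formulation (which also weighs measurement accuracy and cost).

```python
import random

GRID = 10          # candidate positions on a GRID x GRID lattice
N_TRACKERS = 3     # network size (assumed)
RADIUS = 3.5       # illustrative coverage radius, in grid units
TARGETS = [(x + 0.5, y + 0.5) for x in range(GRID) for y in range(GRID)]

def decode(chrom):
    """Grid-based encoding: each gene is a flat cell index -> (x, y) position."""
    return [(g % GRID + 0.5, g // GRID + 0.5) for g in chrom]

def fitness(chrom):
    """Fraction of target points covered by at least one tracker."""
    pos = decode(chrom)
    covered = sum(
        1 for t in TARGETS
        if any((t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2 <= RADIUS ** 2 for p in pos)
    )
    return covered / len(TARGETS)

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    # random start: the grid encoding removes the need for an initial placement
    pop = [[rng.randrange(GRID * GRID) for _ in range(N_TRACKERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_TRACKERS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # mutation: re-draw one gene
                child[rng.randrange(N_TRACKERS)] = rng.randrange(GRID * GRID)
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return best, fitness(best)

best, cov = evolve()
print(round(cov, 2))
```

A global rough search (random initial population over the whole grid) followed by local refinement (crossover and small mutations among elites) mirrors the two-stage search described in the abstract.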

  11. Scaling up HIV viral load - lessons from the large-scale implementation of HIV early infant diagnosis and CD4 testing.

    PubMed

    Peter, Trevor; Zeh, Clement; Katz, Zachary; Elbireer, Ali; Alemayehu, Bereket; Vojnov, Lara; Costa, Alex; Doi, Naoko; Jani, Ilesh

    2017-11-01

    The scale-up of effective HIV viral load (VL) testing is an urgent public health priority. Implementation of testing is supported by the availability of accurate, nucleic acid based laboratory and point-of-care (POC) VL technologies and strong WHO guidance recommending routine testing to identify treatment failure. However, test implementation faces challenges related to the developing health systems in many low-resource countries. The purpose of this commentary is to review the challenges and solutions from the large-scale implementation of other diagnostic tests, namely nucleic-acid based early infant HIV diagnosis (EID) and CD4 testing, and identify key lessons to inform the scale-up of VL. Experience with EID and CD4 testing provides many key lessons to inform VL implementation and may enable more effective and rapid scale-up. The primary lessons from earlier implementation efforts are to strengthen linkage to clinical care after testing, and to improve the efficiency of testing. Opportunities to improve linkage include data systems to support the follow-up of patients through the cascade of care and test delivery, rapid sample referral networks, and POC tests. Opportunities to increase testing efficiency include improvements to procurement and supply chain practices, well connected tiered laboratory networks with rational deployment of test capacity across different levels of health services, routine resource mapping and mobilization to ensure adequate resources for testing programs, and improved operational and quality management of testing services. If applied to VL testing programs, these approaches could help improve the impact of VL on ART failure management and patient outcomes, reduce overall costs, help ensure sustainable access to reduced pricing for test commodities, and strengthen supportive health systems through more efficient and rigorous quality assurance. 
These lessons draw from traditional laboratory practices as well as fields such as logistics, operations management, and business. The lessons and innovations from large-scale EID and CD4 programs described here can be adapted to inform more effective scale-up approaches for VL. They demonstrate the value of an integrated approach to health system strengthening that focuses on key levers for test access such as data systems, supply efficiencies, and network management. They also highlight the challenges with implementation and the need for more innovative approaches and effective partnerships to achieve equitable and cost-effective test access. © 2017 The Authors. Journal of the International AIDS Society published by John Wiley & Sons Ltd on behalf of the International AIDS Society.

  12. HIV prevention costs and program scale: data from the PANCEA project in five low and middle-income countries.

    PubMed

    Marseille, Elliot; Dandona, Lalit; Marshall, Nell; Gaist, Paul; Bautista-Arredondo, Sergio; Rollins, Brandi; Bertozzi, Stefano M; Coovadia, Jerry; Saba, Joseph; Lioznov, Dmitry; Du Plessis, Jo-Ann; Krupitsky, Evgeny; Stanley, Nicci; Over, Mead; Peryshkina, Alena; Kumar, S G Prem; Muyingo, Sowedi; Pitter, Christian; Lundberg, Mattias; Kahn, James G

    2007-07-12

    Economic theory and limited empirical data suggest that costs per unit of HIV prevention program output (unit costs) will initially decrease as small programs expand. Unit costs may then reach a nadir and start to increase if expansion continues beyond the economically optimal size. Information on the relationship between scale and unit costs is critical to project the cost of global HIV prevention efforts and to allocate prevention resources efficiently. The "Prevent AIDS: Network for Cost-Effectiveness Analysis" (PANCEA) project collected 2003 and 2004 cost and output data from 206 HIV prevention programs of six types in five countries. The association between scale and efficiency for each intervention type was examined for each country. Our team characterized the direction, shape, and strength of this association by fitting bivariate regression lines to scatter plots of output levels and unit costs. We chose the regression forms with the highest explanatory power (R2). Efficiency increased with scale, across all countries and interventions. This association varied within intervention and within country, in terms of the range in scale and efficiency, the best fitting regression form, and the slope of the regression. The fraction of variation in efficiency explained by scale ranged from 26-96%. Doubling in scale resulted in reductions in unit costs averaging 34.2% (ranging from 2.4% to 58.0%). Two regression trends, in India, suggested an inflection point beyond which unit costs increased. Unit costs decrease with scale across a wide range of service types and volumes. These country and intervention-specific findings can inform projections of the global cost of scaling up HIV prevention efforts.
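The reported average effect, a 34.2% drop in unit cost per doubling of scale, is what a power-law cost curve unit_cost = a·scale^b predicts when b ≈ -0.6. A minimal sketch, using made-up scale/cost pairs rather than PANCEA data, shows how such an elasticity is typically recovered from a log-log regression:

```python
import math

# Under a power law  unit_cost = a * scale**b,  doubling scale multiplies
# unit cost by 2**b.  The reported average 34.2% reduction per doubling
# implies b = log2(1 - 0.342).
b_implied = math.log2(1 - 0.342)
print(round(b_implied, 2))  # -0.6

# Recovering b by ordinary least squares on log-transformed observations
# (the scale / unit-cost pairs below are illustrative, not PANCEA data):
scales = [100, 200, 400, 800, 1600]
unit_costs = [50.0, 33.0, 21.5, 14.2, 9.4]
xs = [math.log(s) for s in scales]
ys = [math.log(c) for c in unit_costs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
reduction_per_doubling = 1 - 2 ** b
print(round(reduction_per_doubling, 3))  # ≈ 0.34
```

An inflection point of the kind observed in India would show up as systematic curvature in the log-log residuals, which is why the study compared several regression forms rather than assuming a single power law.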

  13. Self-assembled monolayers of n-alkanethiols suppress hydrogen evolution and increase the efficiency of rechargeable iron battery electrodes.

    PubMed

    Malkhandi, Souradip; Yang, Bo; Manohar, Aswin K; Prakash, G K Surya; Narayanan, S R

    2013-01-09

    Iron-based rechargeable batteries, because of their low cost, eco-friendliness, and durability, are extremely attractive for large-scale energy storage. A principal challenge in the deployment of these batteries is their relatively low electrical efficiency. The low efficiency is due to parasitic hydrogen evolution that occurs on the iron electrode during charging and idle stand. In this study, we demonstrate for the first time that linear alkanethiols are very effective in suppressing hydrogen evolution on alkaline iron battery electrodes. The alkanethiols form self-assembled monolayers on the iron electrodes. The degree of suppression of hydrogen evolution by the alkanethiols was found to be greater than 90%, and the effectiveness of the alkanethiol increased with the chain length. Through steady-state potentiostatic polarization studies and impedance measurements on high-purity iron disk electrodes, we show that the self-assembly of alkanethiols suppressed the parasitic reaction by reducing the interfacial area available for the electrochemical reaction. We have modeled the effect of chain length of the alkanethiol on the surface coverage, charge-transfer resistance, and double-layer capacitance of the interface using a simple model that also yields a value for the interchain interaction energy. We have verified the improvement in charging efficiency resulting from the use of the alkanethiols in practical rechargeable iron battery electrodes. The results of battery tests indicate that alkanethiols yield among the highest faradaic efficiencies reported for the rechargeable iron electrodes, enabling the prospect of a large-scale energy storage solution based on low-cost iron-based rechargeable batteries.
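The abstract's interpretation, that the monolayer blocks interfacial area, is commonly expressed as a simple blocking model in which charge-transfer resistance scales inversely with the uncovered fraction while double-layer capacitance scales with it. The sketch below encodes that assumption with made-up baseline values; it is not the paper's fitted model.

```python
# Illustrative blocking-adsorption model (an assumption, not the paper's fit):
# a monolayer covering fraction theta of the electrode leaves only (1 - theta)
# of the area electrochemically active.
R0 = 2.0      # bare-electrode charge-transfer resistance (ohm*cm^2), made up
C0 = 40e-6    # bare-electrode double-layer capacitance (F/cm^2), made up

def blocked_interface(theta):
    """Return (R_ct, C_dl) for a surface with coverage fraction theta."""
    active = 1.0 - theta
    r_ct = R0 / active   # resistance rises as the active area shrinks
    c_dl = C0 * active   # capacitance falls with the active area
    return r_ct, c_dl

for theta in (0.0, 0.9, 0.99):
    r, c = blocked_interface(theta)
    print(f"theta={theta:.2f}  R_ct={r:.1f} ohm*cm^2  C_dl={c * 1e6:.1f} uF/cm^2")
```

In this picture, the >90% suppression of hydrogen evolution reported for the longer alkanethiols corresponds to coverages above 0.9, with both impedance observables moving in the directions the study measured.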

  14. Cost of Community Integrated Prevention Campaign for Malaria, HIV, and Diarrhea in Rural Kenya

    PubMed Central

    2011-01-01

    Background Delivery of community-based prevention services for HIV, malaria, and diarrhea is a major priority and challenge in rural Africa. Integrated delivery campaigns may offer a mechanism to achieve high coverage and efficiency. Methods We quantified the resources and costs to implement a large-scale integrated prevention campaign in Lurambi Division, Western Province, Kenya that reached 47,133 individuals (and 83% of eligible adults) in 7 days. The campaign provided HIV testing, condoms, and prevention education materials; a long-lasting insecticide-treated bed net; and a water filter. Data were obtained primarily from logistical and expenditure data maintained by implementing partners. We estimated the projected cost of a Scaled-Up Replication (SUR), assuming reliance on local managers, potential efficiencies of scale, and other adjustments. Results The cost per person served was $41.66 for the initial campaign and was projected at $31.98 for the SUR. The SUR cost included 67% for commodities (mainly water filters and bed nets) and 20% for personnel. The SUR projected unit cost per person served, by disease, was $6.27 for malaria (nets and training), $15.80 for diarrhea (filters and training), and $9.91 for HIV (test kits, counseling, condoms, and CD4 testing at each site). Conclusions A large-scale, rapidly implemented, integrated health campaign provided services to 80% of a rural Kenyan population with relatively low cost. Scaling up this design may provide similar services to larger populations at lower cost per person. PMID:22189090
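As a quick consistency check, the disease-specific SUR unit costs quoted above sum to the overall projected SUR cost per person served:

```python
# Disease-specific unit costs (USD per person served) in the projected
# Scaled-Up Replication (SUR), taken from the abstract.
malaria, diarrhea, hiv = 6.27, 15.80, 9.91
total_sur = malaria + diarrhea + hiv
print(round(total_sur, 2))  # 31.98 -- matches the overall SUR unit cost
```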

  15. Efficiency and economics of large scale hydrogen liquefaction. [for future generation aircraft requirements

    NASA Technical Reports Server (NTRS)

    Baker, C. R.

    1975-01-01

    Liquid hydrogen is being considered as a substitute for conventional hydrocarbon-based fuels for future generations of commercial jet aircraft. Its acceptance will depend, in part, upon the technology and cost of liquefaction. The process and economic requirements for providing a sufficient quantity of liquid hydrogen to service a major airport are described. The design is supported by thermodynamic studies which determine the effect of process arrangement and operating parameters on the process efficiency and work of liquefaction.

  16. Development of a gene synthesis platform for the efficient large scale production of small genes encoding animal toxins.

    PubMed

    Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A

    2016-12-01

    Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing classical cloning and mutagenesis procedures and allows the generation of nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large-scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. The technology incorporates an accurate, automated and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology in generating large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was thus developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large-scale recombinant expression of such synthetic genes will allow exploration of the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but underexplored source of lead molecules for drug discovery.

  17. Large-scale conservation planning in a multinational marine environment: cost matters.

    PubMed

    Mazor, Tessa; Giakoumi, Sylvaine; Kark, Salit; Possingham, Hugh P

    2014-07-01

    Explicitly including cost in marine conservation planning is essential for achieving feasible and efficient conservation outcomes. Yet, spatial priorities for marine conservation are still often based solely on biodiversity hotspots, species richness, and/or cumulative threat maps. This study aims to provide an approach for including cost when planning large-scale Marine Protected Area (MPA) networks that span multiple countries. Here, we explore the incorporation of cost in the complex setting of the Mediterranean Sea. In order to include cost in conservation prioritization, we developed surrogates that account for revenue from multiple marine sectors: commercial fishing, noncommercial fishing, and aquaculture. Such revenue can translate into an opportunity cost for the implementation of an MPA network. Using the software Marxan, we set conservation targets to protect 10% of the distribution of 77 threatened marine species in the Mediterranean Sea. We compared nine scenarios of opportunity cost by calculating the area and cost required to meet our targets. We further compared our spatial priorities with those that are considered consensus areas by several proposed prioritization schemes in the Mediterranean Sea, none of which explicitly considers cost. We found that for less than 10% of the Sea's area, our conservation targets can be achieved while incurring opportunity costs of less than 1%. In marine systems, we reveal that area is a poor cost surrogate and that the most effective surrogates are those that account for multiple sectors or stakeholders. Furthermore, our results indicate that including cost can greatly influence the selection of spatial priorities for marine conservation of threatened species. 
Although there are known limitations in multinational large-scale planning, attempting to devise more systematic and rigorous planning methods is especially critical, given that collaborative conservation action is on the rise and the global financial crisis restricts conservation investments.

  18. The latest developments and outlook for hydrogen liquefaction technology

    NASA Astrophysics Data System (ADS)

    Ohlig, K.; Decker, L.

    2014-01-01

    Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean-energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, that demand may see a significant boost in the coming years, requiring large-scale liquefaction plants that exceed current plant sizes by far. Hydrogen liquefaction in small-scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as the refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean-energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large-scale hydrogen liquefaction plants that meet the needs of clean-energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require only short qualification procedures, and which will hence allow implementation in plants in the near future.

  19. Estimated Costs for Delivery of HIV Antiretroviral Therapy to Individuals with CD4+ T-Cell Counts >350 cells/uL in Rural Uganda.

    PubMed

    Jain, Vivek; Chang, Wei; Byonanebye, Dathan M; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R; Havlir, Diane V; Kahn, James G

    2015-01-01

    Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451-716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100-200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. 
In a Ugandan HIV clinic, ART delivery costs--including VL testing--for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions.
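The component costs quoted for the maximally efficient model can be totaled directly; the roughly $1 differences from the headline $320/$236 figures reflect rounding in the abstract:

```python
# Reported cost components (USD per person per year) in the maximally
# efficient scale-up model, from the abstract.
components = {
    "personnel": 39,
    "ART": 106,
    "laboratory_with_VL": 130,
    "laboratory_without_VL": 46,
    "admin_other": 46,
}

with_vl = (components["personnel"] + components["ART"]
           + components["laboratory_with_VL"] + components["admin_other"])
without_vl = (components["personnel"] + components["ART"]
              + components["laboratory_without_VL"] + components["admin_other"])
print(with_vl, without_vl)  # 321 237  (abstract reports ~$320 / ~$236 ppy)
```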

  20. Estimated Costs for Delivery of HIV Antiretroviral Therapy to Individuals with CD4+ T-Cell Counts >350 cells/uL in Rural Uganda

    PubMed Central

    Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.

    2015-01-01

    Background Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451–716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100–200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. 
Conclusions In a Ugandan HIV clinic, ART delivery costs—including VL testing—for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823

  1. Efficient design of clinical trials and epidemiological research: is it possible?

    PubMed

    Lauer, Michael S; Gordon, David; Wei, Gina; Pearson, Gail

    2017-08-01

    Randomized clinical trials and large-scale, cohort studies continue to have a critical role in generating evidence in cardiovascular medicine; however, the increasing concern is that ballooning costs threaten the clinical trial enterprise. In this Perspectives article, we discuss the changing landscape of clinical research, and clinical trials in particular, focusing on reasons for the increasing costs and inefficiencies. These reasons include excessively complex design, overly restrictive inclusion and exclusion criteria, burdensome regulations, excessive source-data verification, and concerns about the effect of clinical research conduct on workflow. Thought leaders have called on the clinical research community to consider alternative, transformative business models, including those models that focus on simplicity and leveraging of digital resources. We present some examples of innovative approaches by which some investigators have successfully conducted large-scale, clinical trials at relatively low cost. These examples include randomized registry trials, cluster-randomized trials, adaptive trials, and trials that are fully embedded within digital clinical care or administrative platforms.

  2. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have previously been possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
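
    The surrogate idea can be sketched in miniature. The toy example below (not the authors' network or data; the architecture, learning rate, and the exp(-t) stand-in target are all illustrative assumptions) trains a one-hidden-layer network by plain gradient descent to mimic an "expensive" relaxation-style curve, the same pattern by which an ANN can stand in for a costly viscoelastic code:

```python
import numpy as np

# Train a tiny MLP surrogate for an "expensive" function -- here a
# viscoelastic-style relaxation curve exp(-t) stands in for a costly solver.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.0, 64).reshape(-1, 1)
y = np.exp(-t)

# One hidden layer with tanh units; all sizes and rates are illustrative choices.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(t)
initial_loss = float(np.mean((pred - y) ** 2))

for _ in range(3000):                    # plain full-batch gradient descent
    h, pred = forward(t)
    err = (pred - y) / len(t)            # gradient of 0.5*MSE w.r.t. predictions
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = t.T @ dh
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(t)
final_loss = float(np.mean((pred - y) ** 2))  # surrogate now tracks exp(-t)
```

    Once trained, evaluating the surrogate is a handful of matrix products, which is the source of the speedups the abstract describes: the expensive solver is only needed to generate training data.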

  3. Delayed Effects of a Low-Cost and Large-Scale Summer Reading Intervention on Elementary School Children's Reading Comprehension

    ERIC Educational Resources Information Center

    Kim, James S.; Guryan, Jonathan; White, Thomas G.; Quinn, David M.; Capotosto, Lauren; Kingston, Helen Chen

    2016-01-01

    To improve the reading comprehension outcomes of children in high-poverty schools, policymakers need to identify reading interventions that show promise of effectiveness at scale. This study evaluated the effectiveness of a low-cost and large-scale summer reading intervention that provided comprehension lessons at the end of the school year and…

  4. Potential gains from hospital mergers in Denmark.

    PubMed

    Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller

    2010-12-01

    The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
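
    The DEA idea can be illustrated with made-up numbers (not the Danish hospital data). In the special case of one input and one output under constant returns to scale, a unit's DEA efficiency score reduces to its output/input ratio divided by the best observed ratio, and a naive "merger" can be scored against the same frontier:

```python
# Hypothetical data: cost (input) and case-mix-adjusted discharges (output)
# for five toy "hospitals" -- purely illustrative, not the Danish data.
hospitals = {
    "A": (100.0, 50.0),
    "B": (180.0, 72.0),
    "C": (90.0, 45.0),
    "D": (250.0, 75.0),
    "E": (60.0, 33.0),
}

# With one input and one output under constant returns to scale, the DEA
# (CCR) score is each unit's output/input ratio relative to the best ratio.
ratios = {h: out / cost for h, (cost, out) in hospitals.items()}
best = max(ratios.values())
efficiency = {h: r / best for h, r in ratios.items()}

# A naive "merger" check: pool the inputs and outputs of two units and
# score the combined unit against the original frontier.
cost_m = hospitals["C"][0] + hospitals["E"][0]
out_m = hospitals["C"][1] + hospitals["E"][1]
merger_eff = (out_m / cost_m) / best
```

    A full DEA with multiple inputs and outputs requires solving a linear program per unit, and the paper's decomposition further splits a merger's score into technical, scale, and mix components.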

  5. A ZigBee wireless networking for remote sensing applications in hydrological monitoring system

    NASA Astrophysics Data System (ADS)

    Weng, Songgan; Zhai, Duo; Yang, Xing; Hu, Xiaodong

    2017-01-01

    Water-level monitoring is recognized as one of the most important tasks in hydrology. In particular, investigation of the tempo-spatial variation patterns of water level and their effect on hydrological research has attracted more and more attention in recent years. However, because of labor costs and the limitations of existing water-level monitoring devices, it is very hard for researchers to collect real-time water-level data from large-scale geographical areas. This paper designs and implements a real-time water-level data monitoring system (MCH) based on ZigBee networking, which serves as an effective and efficient scientific instrument for domain experts, facilitating large-scale, real-time water-level monitoring. We implement a proof-of-concept prototype of the MCH, which can monitor water level automatically, accurately, and in real time with low cost and low power consumption. The preliminary laboratory results and analyses demonstrate the feasibility and efficacy of the MCH.

  6. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    NASA Astrophysics Data System (ADS)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage cost. For data reliability protection, the multi-replica replication strategy used in most current clouds incurs huge storage consumption, leading to a large storage cost, specifically for applications within the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back on cloud storage consumption. PRCR ensures the reliability of massive cloud data with minimum replication, and can also serve as a cost-effective benchmark for replication. Simulation shows that, compared with the conventional three-replica approach, PRCR can reduce cloud storage consumption to one-third of the storage or even less, hence considerably minimizing the cloud storage cost.
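
    The trade-off between replica count and reliability can be sketched as follows. This is an illustrative calculation only, not the PRCR algorithm: independent replica failures and a fixed per-replica loss probability are assumed.

```python
# Smallest replica count whose probability of losing *every* copy stays below
# a target. Independent failures and a fixed per-replica loss probability are
# assumed -- an illustration of the reliability/replication trade-off, not PRCR.
def min_replicas(loss_prob_per_replica, target_loss_prob):
    k = 1
    while loss_prob_per_replica ** k > target_loss_prob:
        k += 1
    return k

# With a 1% chance of losing any single replica over some period, two copies
# drive the data-loss probability to 1e-4 and three copies to 1e-6 -- extra
# replicas buy reliability at a steep, multiplicative storage cost.
```

    The point of mechanisms like PRCR is to meet such reliability targets with proactive checking rather than by simply adding replicas, since each extra replica multiplies the storage bill.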

  7. Enhancing the transmission efficiency by edge deletion in scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Qing; Wang, Di; Li, Guo-Jie

    2007-07-01

    How to improve the transmission efficiency of Internet-like packet-switching networks is one of the most important problems in complex networks as well as for the Internet research community. In this paper we propose a convenient method to enhance the transmission efficiency of scale-free networks dramatically by removing the edges linking to nodes with large betweenness, which we call the "black sheep." The advantages of our method are its simplicity and practical importance: since the black-sheep edges are very costly due to their large bandwidth, removing them decreases cost while yielding higher network throughput. Moreover, we analyze the curve of the largest betweenness as more and more black-sheep edges are deleted and find that there is a sharp transition at the critical point where the average degree of the nodes ⟨k⟩ → 2.
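
    The notion of betweenness can be illustrated with a brute-force computation on a toy graph (illustrative only; the paper's networks are scale-free and far larger, and it targets edges attached to high-betweenness nodes). In this two-hub example all cross traffic funnels through the hub-hub link, so that edge has by far the largest edge betweenness and is the natural "black sheep" candidate:

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(graph, s, t):
    # Breadth-first enumeration of every shortest s-t path (fine for toy graphs).
    queue, paths, best = deque([[s]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nb in graph[node]:
            if nb not in path:
                queue.append(path + [nb])
    return paths

def edge_betweenness(graph):
    # Sum, over all node pairs, the fraction of shortest paths using each edge.
    bc = {}
    for s, t in combinations(list(graph), 2):
        paths = all_shortest_paths(graph, s, t)
        if not paths:
            continue
        w = 1.0 / len(paths)
        for p in paths:
            for u, v in zip(p, p[1:]):
                e = tuple(sorted((u, v)))
                bc[e] = bc.get(e, 0.0) + w
    return bc

# Toy two-hub network: hubs "a" and "e" each serve three leaves.
toy = {
    "a": ["b", "c", "d", "e"], "b": ["a"], "c": ["a"], "d": ["a"],
    "e": ["a", "f", "g", "h"], "f": ["e"], "g": ["e"], "h": ["e"],
}
bc = edge_betweenness(toy)
black_sheep = max(bc, key=bc.get)  # the edge carrying the most shortest paths
```

    Production analyses would use Brandes' algorithm (as in common graph libraries) rather than this exponential enumeration, but the ranking it produces is the same on small graphs.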

  8. Integrating Efficiency of Industry Processes and Practices Alongside Technology Effectiveness in Space Transportation Cost Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Zapata, Edgar

    2012-01-01

    This paper presents past and current work on accounting for indirect industry and NASA costs when providing cost estimation or analysis for NASA projects and programs. Indirect costs, defined here as those project costs removed from the actual hands-on hardware or software labor, make up most of the costs of today's complex, large-scale NASA/industry projects. This appears to be the case across phases, from research through development and production to the operation of the system. Space transportation is the case of interest here. Modeling and cost estimation as a process, rather than a product, is emphasized. Analysis as a series of belief systems in play among decision makers and decision factors is also emphasized, to provide context.

  9. Removing ammonium from water and wastewater using cost-effective adsorbents: A review.

    PubMed

    Huang, Jianyin; Kankanamge, Nadeeka Rathnayake; Chow, Christopher; Welsh, David T; Li, Tianling; Teasdale, Peter R

    2018-01-01

    Ammonium is an important nutrient in primary production; however, high ammonium loads can cause eutrophication of natural waterways, contributing to undesirable changes in water quality and ecosystem structure. While much ammonium pollution comes from diffuse agricultural sources, making control difficult, industrial or municipal point sources such as wastewater treatment plants also contribute significantly to overall ammonium pollution. These latter sources can be targeted more readily to control ammonium release into water systems. To assist policy makers and researchers in understanding the diversity of treatment options and the best option for their circumstances, this paper provides a comprehensive review of existing treatment options for ammonium removal, with a particular focus on those technologies that offer the highest rates of removal and cost-effectiveness. Ion exchange and adsorption methods are simple to apply, cost-effective, environmentally friendly technologies that are quite efficient at removing ammonium from treated water. The review presents a list of adsorbents from the literature, their adsorption capacities, and other parameters needed for ammonium removal. Further, the preparation of adsorbents with high ammonium removal capacities, and of new adsorbents, is discussed in the context of their relative cost, removal efficiencies, and limitations. Efficient, cost-effective, and environmentally friendly adsorbents for large-scale ammonium removal in commercial or water treatment plants are identified. In addition, future perspectives on removing ammonium using adsorbents are presented.

  10. Effect of turbulence modelling to predict combustion and nanoparticle production in the flame assisted spray dryer based on computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto

    2016-02-01

    Flame-assisted spray dryers are widely used for large-scale production of nanoparticles because of their capability. A numerical approach is needed to predict combustion and particle production during scale-up and optimization, given the difficulty of experimental observation and its relatively high cost. Computational Fluid Dynamics (CFD) can model the momentum, energy, and mass transfer, making it more time- and cost-efficient than experiments. Here, two turbulence models, k-ɛ and Large Eddy Simulation, were compared and applied to a flame-assisted spray dryer system. The energy source for particle drying was combustion between LPG as fuel and air as oxidizer and carrier gas, modelled as non-premixed combustion in the simulation. Silica particles formed from a silica sol precursor were used for particle modelling. Comparisons of flame contour, temperature distribution, and particle size distribution show that the Large Eddy Simulation turbulence model provides the closest agreement with experimental results.

  11. Combined heat and power systems: economic and policy barriers to growth.

    PubMed

    Kalam, Adil; King, Abigail; Moret, Ellen; Weerasinghe, Upekha

    2012-04-23

    Combined Heat and Power (CHP) systems can provide a range of benefits to users with regard to efficiency, reliability, costs and environmental impact. Furthermore, increasing the amount of electricity generated by CHP systems in the United States has been identified as having significant potential for impressive economic and environmental outcomes on a national scale. Given the benefits from increasing the adoption of CHP technologies, there is value in improving our understanding of how desired increases in CHP adoption can best be achieved. The obstacles to adoption are currently understood to stem from regulatory as well as economic and technological barriers. In our research, we answer the following question: given the current policy and economic environment facing the CHP industry, what changes need to take place in this space in order for CHP systems to be competitive in the energy market? We focus our analysis primarily on CHP systems that use natural gas turbines. Our analysis takes a two-pronged approach. We first conduct a statistical analysis of the impact of state policies on increases in electricity generated from CHP systems. Second, we conduct a cost-benefit analysis to determine in which circumstances funding incentives are necessary to make CHP technologies cost-competitive. Our policy analysis shows that regulatory improvements do not explain the growth in adoption of CHP technologies but hold the potential to encourage increases in electricity generated from CHP systems in small-scale applications. Our cost-benefit analysis shows that CHP systems are only cost-competitive in large-scale applications and that funding incentives would be necessary to make CHP technology cost-competitive in small-scale applications.
From the synthesis of these analyses we conclude that because large-scale applications of natural gas turbines are already cost-competitive, policy initiatives aimed at a CHP market dominated primarily by large-scale (and therefore already cost-competitive) systems have not been effectively directed. Our recommendation is that for CHP technologies using natural gas turbines, policy focuses should be on increasing CHP growth in small-scale systems. This result can be best achieved through redirection of state and federal incentives, research and development, adoption of smart grid technology, and outreach and education.
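
    The kind of cost-benefit screen described above can be sketched with a net-present-value calculation. All capital costs, savings, discount rates, and the grant size below are hypothetical, chosen only to mimic the qualitative finding that large systems clear the hurdle on their own while small ones need an incentive:

```python
# Hypothetical numbers, purely illustrative: a simple NPV screen of the kind
# used in cost-benefit analyses of CHP adoption.
def npv(capital_cost, annual_net_savings, years, discount_rate):
    # Net present value: discounted annual savings minus the upfront cost.
    pv = sum(annual_net_savings / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - capital_cost

# A large system with strong savings clears the hurdle on its own...
large = npv(capital_cost=10_000_000, annual_net_savings=1_500_000,
            years=15, discount_rate=0.07)

# ...while a small system may need an incentive, modelled here as a capital grant.
small = npv(capital_cost=800_000, annual_net_savings=70_000,
            years=15, discount_rate=0.07)
small_with_grant = npv(capital_cost=800_000 - 250_000,
                       annual_net_savings=70_000, years=15, discount_rate=0.07)
```

    Under these made-up inputs the large system has positive NPV, the small system negative, and the grant flips the small system positive, which is the shape of the argument for redirecting incentives toward small-scale CHP.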

  12. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
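
    The cost argument can be made concrete on a toy problem (illustrative only, far from NASA's CFD setting). For an output J = c^T u constrained by a linear state equation A u = b, a single adjoint solve A^T λ = c yields the sensitivities dJ/db_i = λ_i for every parameter at once, whereas finite differences need one extra solve per parameter:

```python
# Minimal adjoint-sensitivity sketch on a 2x2 linear model (illustrative,
# not NASA's CFD machinery): output J = c^T u with state equation A u = b.
def solve2(A, rhs):
    # Direct 2x2 linear solve via Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(rhs[0] * A[1][1] - rhs[1] * A[0][1]) / det,
            (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
c = [1.0, 1.0]

u = solve2(A, b)
J = c[0] * u[0] + c[1] * u[1]

# Adjoint solve: A^T lam = c (A is symmetric here, so A^T = A).
# Then dJ/db_i = lam_i for ALL parameters, from one extra solve.
lam = solve2(A, c)
adjoint_grad = lam

# Finite-difference check: one extra solve *per parameter* instead of one total.
eps = 1e-6
fd_grad = []
for i in range(2):
    b_pert = list(b)
    b_pert[i] += eps
    u_pert = solve2(A, b_pert)
    fd_grad.append((c[0] * u_pert[0] + c[1] * u_pert[1] - J) / eps)
```

    With two parameters the saving is trivial; with the thousands of shape parameters of an aerodynamic design problem, one adjoint solve replacing thousands of flow solves is what makes gradient-based optimization tractable.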

  13. Hybrid WDM/OCDMA for next generation access network

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Wada, Naoya; Miyazaki, T.; Cincotti, G.; Kitayama, Ken-ichi

    2007-11-01

    Hybrid wavelength division multiplexing/optical code division multiple access (WDM/OCDMA) passive optical networks (PONs), where asynchronous OCDMA traffic is transmitted over a WDM network, are one potential candidate for gigabit-symmetric fiber-to-the-home (FTTH) services. In a cost-effective WDM/OCDMA network, a large-scale multi-port encoder/decoder can be employed in the central office, and a low-cost encoder/decoder can be used in each optical network unit (ONU). The WDM/OCDMA system could be a promising solution for a symmetric high-capacity access network with high spectral efficiency, cost-effectiveness, good flexibility and enhanced security. Asynchronous WDM/OCDMA systems have been experimentally demonstrated using superstructured fiber Bragg gratings (SSFBGs) and multi-port OCDMA en/decoders. The total throughput has reached beyond one terabit per second with a spectral efficiency of about 0.41. The key enabling techniques include the ultra-long SSFBG, multi-port en/decoders with high power contrast ratio, optical thresholding, differential phase shift keying modulation with balanced detection, and forward error correction. Using multi-level modulation formats to carry multi-bit information with a single pulse, the total capacity and spectral efficiency could be further enhanced.

  14. Luminescent Solar Concentrators in the Algal Industry

    NASA Astrophysics Data System (ADS)

    Hellier, Katie; Corrado, Carley; Carter, Sue; Detweiler, Angela; Bebout, Leslie

    2013-03-01

    The market for renewable energy sources and highly efficient energy management systems is growing rapidly. The development of higher-efficiency Luminescent Solar Concentrators (LSCs) has opened new applications for commercial interests, including greenhouses for agricultural crops. This project takes first steps to explore the potential of LSCs to enhance production and reduce costs for algae and cyanobacteria used in biofuels and nutraceuticals. This pilot phase uses LSC-filtered light for algal growth trials in greenhouses and laboratory experiments, creating specific wavelength combinations to determine the effects of discrete solar light regimes on algal growth and the reduction of heating and water loss in the system. Providing the optimal spectra for specific algae will not only increase production, but also has the potential to lessen contamination of large-scale production caused by competition from other algae and bacteria. LSC-filtered light will reduce evaporation and heating in regions with limited water supply, while the increased energy output from photovoltaic cells will reduce the costs of heating and mixing cultures, thus creating a more efficient and cost-effective production system.

  15. Organic light-emitting diodes using novel embedded Al grid transparent electrodes

    NASA Astrophysics Data System (ADS)

    Peng, Cuiyun; Chen, Changbo; Guo, Kunping; Tian, Zhenghao; Zhu, Wenqing; Xu, Tao; Wei, Bin

    2017-03-01

    This work demonstrates a novel transparent electrode using embedded Al grids fabricated by a simple and cost-effective approach of photolithography and wet etching. The optical and electrical properties of the Al grids were systematically investigated as a function of grid geometry. The Al grids exhibited a low sheet resistance of 70 Ω □-1 and a light transmission of 69% at 550 nm, with advantages in processing conditions and material cost as well as potential for large-scale fabrication. Indium-tin-oxide-free green organic light-emitting diodes (OLEDs) based on the Al grid transparent electrodes were demonstrated, yielding a power efficiency >15 lm W-1 and a current efficiency >39 cd A-1 at a brightness of 2396 cd m-2. Furthermore, reduced efficiency roll-off and higher brightness were achieved compared with an ITO-based device.

  16. Energy efficiency and allometry of movement of swimming and flying animals.

    PubMed

    Bale, Rahul; Hao, Max; Bhalla, Amneet Pal Singh; Patankar, Neelesh A

    2014-05-27

    Which animals use their energy better during movement? One metric to answer this question is the energy cost per unit distance per unit weight. Prior data show that this metric decreases with mass, which is considered to imply that massive animals are more efficient. Although useful, this metric also implies that two dynamically equivalent animals of different sizes will not be considered equally efficient. We resolve this longstanding issue by first determining the scaling of energy cost per unit distance traveled. The scale is found to be M^(2/3) or M^(1/2), where M is the animal mass. Second, we introduce an energy-consumption coefficient (C_E) defined as energy per unit distance traveled divided by this scale. C_E is a measure of the efficiency of swimming and flying, analogous to how the drag coefficient quantifies aerodynamic drag on vehicles. Derivation of the energy-cost scale reveals that the assumption that undulatory swimmers spend energy to overcome drag in the direction of swimming is inappropriate. We derive allometric scalings that capture trends in data of swimming and flying animals over 10-20 orders of magnitude by mass. The energy-consumption coefficient reveals that swimmers beyond a critical mass and most fliers are almost equally efficient, as if they were dynamically equivalent; increasingly massive animals are not more efficient according to the proposed metric. Distinct allometric scalings are discovered for large and small swimmers. Flying animals are found to require relatively more energy compared with swimmers.
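
    The proposed coefficient is straightforward to compute. The numbers below are made up, and the M^(2/3) scale is one of the two scales the paper derives:

```python
# C_E = (energy per unit distance) / M^(2/3); all numbers are illustrative.
def consumption_coefficient(energy_per_distance, mass, exponent=2.0 / 3.0):
    # Energy cost per unit distance, normalized by the allometric scale M^exponent.
    return energy_per_distance / mass ** exponent

# Two "dynamically equivalent" swimmers: raw cost per distance differs 100-fold,
# but if that cost scales exactly as M^(2/3), the coefficient is identical.
small_animal = consumption_coefficient(energy_per_distance=1.0, mass=1.0)
large_animal = consumption_coefficient(energy_per_distance=100.0, mass=1000.0)
```

    This is the sense in which the raw cost-per-distance metric flatters large animals while C_E treats dynamically equivalent animals as equally efficient.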

  18. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particles accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. 
    Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.

  19. Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.

    PubMed

    Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong

    2017-10-11

    The development of silicon (Si) materials during the past decades has underpinned the prosperity of the modern semiconductor industry. In comparison with bulk Si, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention for solar energy applications. To achieve practical applications of SiNWs, both large-scale synthesis of SiNWs at low cost and rational design of high-efficiency energy conversion devices are prerequisites. This review focuses on recent progress in the large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photo-electrochemical cells. Finally, the outlook and challenges in this emerging field are presented.

  20. Flexible Dye-Sensitized Solar Cell Based on Vertical ZnO Nanowire Arrays

    PubMed Central

    2011-01-01

    Flexible dye-sensitized solar cells are fabricated using vertically aligned ZnO nanowire arrays that are transferred onto ITO-coated poly(ethylene terephthalate) substrates using a simple peel-off process. The solar cells demonstrate an energy conversion efficiency of 0.44% with good bending tolerance. This technique paves a new route for building large-scale cost-effective flexible photovoltaic and optoelectronic devices. PMID:27502660

  1. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  2. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
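
    The shift-invert spectral transformation at the heart of such solvers can be sketched on a tiny standard eigenproblem. This is illustrative only: plain power iteration with exact solves stands in for the paper's inexact, preconditioned Lanczos machinery, and the matrix is a toy diagonal example:

```python
import numpy as np

# Shift-invert idea: eigenvalues of (A - sigma*I)^(-1) are 1/(lambda - sigma),
# so eigenvalues of A nearest the shift sigma become dominant and easy to find.
A = np.diag([1.0, 2.0, 10.0])
sigma = 1.8                       # we want the eigenvalue of A nearest 1.8

rng = np.random.default_rng(1)
x = rng.normal(size=3)
for _ in range(50):
    # Each step solves (A - sigma*I) y = x; in a large sparse setting this
    # solve is exactly where inexact/preconditioned iterative methods come in.
    x = np.linalg.solve(A - sigma * np.eye(3), x)
    x /= np.linalg.norm(x)

theta = x @ A @ x                 # Rayleigh quotient recovers the eigenvalue
```

    Lanczos applies the same transformation but builds a Krylov subspace instead of a single vector, which is what lets it deliver several eigenvalues near the shift at once.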

  3. The latest developments and outlook for hydrogen liquefaction technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohlig, K.; Decker, L.

    2014-01-29

    Liquefied hydrogen is presently used mainly for space applications and the semiconductor industry. While clean energy applications, e.g. in the automotive sector, currently contribute only a small share of this demand, that demand may see a significant boost in the coming years, with a need for large-scale liquefaction plants far exceeding current plant sizes. Hydrogen liquefaction for small-scale plants with a maximum capacity of 3 tons per day (tpd) is accomplished with a Brayton refrigeration cycle using helium as refrigerant. This technology is characterized by low investment costs but lower process efficiency and hence higher operating costs. For larger plants, a hydrogen Claude cycle is used, characterized by higher investment but lower operating costs. However, liquefaction plants meeting the potentially high demand in the clean energy sector will need further optimization with regard to energy efficiency and hence operating costs. The present paper gives an overview of the currently applied technologies, including their thermodynamic and technical background. Areas of improvement are identified to derive process concepts for future large-scale hydrogen liquefaction plants meeting the needs of clean energy applications with optimized energy efficiency and hence minimized operating costs. Compared to other studies in this field, this paper focuses on the application of new technology and innovative concepts which are either readily available or will require only short qualification procedures, and which will hence allow implementation in plants in the near future.

  4. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm markedly improves the efficiency of the traditional method, especially for large-scale data.
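
    The dominance classes in question can be sketched directly. This is a toy example with made-up objects and two gain-type attributes, not the paper's accelerated algorithm, which optimizes how these sets are computed:

```python
# Toy sketch of dominance classes in a dominance-based rough set setting.
# Each object has preference-ordered attribute values (higher is better);
# y dominates x when y is at least as good as x on every attribute.
objects = {
    "o1": (1, 2),
    "o2": (2, 2),
    "o3": (3, 1),
    "o4": (3, 3),
}

def dominating_set(x, objs):
    # D^+(x): the objects that dominate x (including x itself).
    return {y for y, vy in objs.items()
            if all(a >= b for a, b in zip(vy, objs[x]))}

dom_o1 = dominating_set("o1", objects)  # objects at least as good as o1 everywhere
```

    The naive approach recomputes such sets from scratch for every object; the speedups the abstract reports come from avoiding that repeated pairwise work.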

  5. A Magnetic Bead-Integrated Chip for the Large Scale Manufacture of Normalized esiRNAs

    PubMed Central

    Wang, Zhao; Huang, Huang; Zhang, Hanshuo; Sun, Changhong; Hao, Yang; Yang, Junyu; Fan, Yu; Xi, Jianzhong Jeff

    2012-01-01

    The chemically synthesized siRNA duplex has become a powerful and widely used tool for RNAi loss-of-function studies, but suffers from a high off-target effect. Recently, endoribonuclease-prepared siRNA (esiRNA) has been shown to be an attractive alternative due to its lower off-target effect and cost-effectiveness. However, the current manufacturing method for esiRNA is complicated, mainly with regard to purification and normalization at large scale. In this study, we present a magnetic bead-integrated chip that can immobilize amplification or transcription products on beads and accomplish transcription, digestion, normalization and purification in a robust and convenient manner. This chip is equipped to manufacture ready-to-use esiRNAs on a large scale. The silencing specificity and efficiency of these esiRNAs were validated at the transcriptional, translational and functional levels. Manufacture of several normalized esiRNAs in a single well, including those silencing PARP1 and BRCA1, was successfully achieved, and the esiRNAs were subsequently utilized to investigate their synergistic effect on cell viability. A small esiRNA library targeting 68 tyrosine kinase genes was constructed for a loss-of-function study, and four genes were identified as regulating the migration capability of HeLa cells. We believe that this approach provides a more robust and cost-effective choice for manufacturing esiRNAs than current approaches, and that these heterogeneous RNA strands may therefore have utility in intensive and extensive applications. PMID:22761791

  6. Effect of dislocations on the open-circuit voltage, short-circuit current and efficiency of heteroepitaxial indium phosphide solar cells

    NASA Technical Reports Server (NTRS)

    Jain, Raj K.; Flood, Dennis J.

    1990-01-01

    The excellent radiation resistance of indium phosphide solar cells makes them a promising candidate for space power applications, but the present high cost of starting substrates may inhibit their large-scale use. Thin-film indium phosphide cells grown on Si or GaAs substrates have exhibited low efficiencies because of the generation and propagation of large numbers of dislocations. Dislocation densities were calculated, and their influence on the open-circuit voltage, short-circuit current, and efficiency of heteroepitaxial indium phosphide cells was studied using PC-1D. Dislocations act as predominant recombination centers and need to be controlled by proper transition layers and improved growth techniques. It is shown that heteroepitaxially grown cells could achieve efficiencies in excess of 18 percent AM0 by controlling the number of dislocations. The effect of emitter thickness and surface recombination velocity on the cell performance parameters vs. dislocation density is also studied.

  7. Large-scale implementation of disease control programmes: a cost-effectiveness analysis of long-lasting insecticide-treated bed net distribution channels in a malaria-endemic area of western Kenya-a study protocol.

    PubMed

    Gama, Elvis; Were, Vincent; Ouma, Peter; Desai, Meghna; Niessen, Louis; Buff, Ann M; Kariuki, Simon

    2016-11-21

    Historically, Kenya has used various distribution models for long-lasting insecticide-treated bed nets (LLINs) with variable results in population coverage. The models presently vary widely in scale, target population and strategy. There is limited information to determine the best combination of distribution models that will lead to sustained high coverage while being operationally efficient and cost-effective. Standardised cost information is needed, in combination with programme effectiveness estimates, to judge the efficiency of LLIN distribution models and options for improvement in implementing malaria control programmes. The study aims to address this information gap by estimating the distribution cost and effectiveness of different LLIN distribution models and comparing them in an economic evaluation. Cost and coverage will be evaluated for five different distribution models in Busia County, an area of perennial malaria transmission in western Kenya. Cost data will be collected retrospectively from health facilities, the Ministry of Health, donors and distributors. Programme-effectiveness data, defined as the number of people with access to an LLIN per 1000 population, will be collected through triangulation of data from a nationally representative, cross-sectional malaria survey, a cross-sectional survey administered to a subsample of beneficiaries in Busia County and LLIN distributors' records. Descriptive statistics and regression analysis will be used for the evaluation. A cost-effectiveness analysis will be performed from a health-systems perspective, and cost-effectiveness ratios will be calculated using bootstrapping techniques. The study has been evaluated and approved by the Kenya Medical Research Institute Scientific and Ethical Review Unit (SERU number 2997). All participants will provide written informed consent. The findings of this economic evaluation will be disseminated through peer-reviewed publications.

  8. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10⁶ or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
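The moment-based data compression described in this record can be sketched numerically (an illustrative toy with a synthetic pulse, not the study's MRI data or code): each breakthrough curve c(t) is reduced to temporal moments, and the normalized first moment gives the mean arrival time in the usual definition.

```python
import numpy as np

# Hedged sketch: compressing a tracer breakthrough curve c(t) to its
# temporal moments on a uniform time grid. The Gaussian pulse centered
# at t = 4 is synthetic, for illustration only.
t = np.linspace(0.0, 10.0, 501)           # time grid
c = np.exp(-0.5 * (t - 4.0) ** 2)         # tracer concentration c(t)
dt = t[1] - t[0]

m0 = float(np.sum(c) * dt)                # zeroth temporal moment
m1 = float(np.sum(t * c) * dt)            # first temporal moment
mean_arrival_time = m1 / m0               # normalized first moment
```

Under this study's experimental setting the abstract equates the zeroth moment itself with the mean travel time; the general relation is the normalized first moment computed above.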

  9. A simple method for decomposition of peracetic acid in a microalgal cultivation system.

    PubMed

    Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won

    2015-03-01

    A cost-efficient process devoid of several washing steps was developed, which is related to direct cultivation following the decomposition of the sterilizer. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization by 2 mM PAA demands at least 1 h incubation time for an effective disinfection. Direct degradation of PAA was demonstrated by utilizing components in conventional algal medium. Consequently, ferric ion and pH buffer (HEPES) showed a synergetic effect for the decomposition of PAA within 6 h. On the contrary, NaNO3, one of the main components in algal media, inhibits the decomposition of PAA. The improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in the prepared BG11 by decomposition of PAA. This process involving sterilization and decomposition of PAA should help cost-efficient management of photobioreactors in a large scale for the production of value-added products and biofuels from microalgal biomass.

  10. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of $67.2 million, while each of the multiple GA executions took 21-38 days to reach a near-optimal solution. The best solution obtained among all the GA executions had a minimized cost of $67.7 million—marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.

  11. Is there scope for cost savings and efficiency gains in HIV services? A systematic review of the evidence from low- and middle-income countries

    PubMed Central

    Siapka, Mariana; Remme, Michelle; Obure, Carol Dayo; Maier, Claudia B; Dehne, Karl L

    2014-01-01

    Objective To synthesize the data available – on costs, efficiency and economies of scale and scope – for the six basic programmes of the UNAIDS Strategic Investment Framework, to inform those planning the scale-up of human immunodeficiency virus (HIV) services in low- and middle-income countries. Methods The relevant peer-reviewed and “grey” literature from low- and middle-income countries was systematically reviewed. Search and analysis followed Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. Findings Of the 82 empirical costing and efficiency studies identified, nine provided data on economies of scale. Scale explained much of the variation in the costs of several HIV services, particularly those of targeted HIV prevention for key populations and HIV testing and treatment. There is some evidence of economies of scope from integrating HIV counselling and testing services with several other services. Cost efficiency may also be improved by reducing input prices, task shifting and improving client adherence. Conclusion HIV programmes need to optimize the scale of service provision to achieve efficiency. Interventions that may enhance the potential for economies of scale include intensifying demand-creation activities, reducing the costs for service users, expanding existing programmes rather than creating new structures, and reducing attrition of existing service users. Models for integrated service delivery – which is, potentially, more efficient than the implementation of stand-alone services – should be investigated further. Further experimental evidence is required to understand how to best achieve efficiency gains in HIV programmes and assess the cost–effectiveness of each service-delivery model. PMID:25110375

  12. A Networked Sensor System for the Analysis of Plot-Scale Hydrology.

    PubMed

    Villalba, German; Plaza, Fernando; Zhong, Xiaoyang; Davis, Tyler W; Navarro, Miguel; Li, Yimei; Slater, Thomas A; Liang, Yao; Liang, Xu

    2017-03-20

    This study presents the latest updates to the Audubon Society of Western Pennsylvania (ASWP) testbed, a $50,000 USD, 104-node outdoor multi-hop wireless sensor network (WSN). The network collects environmental data from over 240 sensors, including the EC-5, MPS-1 and MPS-2 soil moisture and soil water potential sensors and self-made sap flow sensors, across a heterogeneous deployment composed of MICAz, IRIS and TelosB wireless motes. A low-cost sensor board and software driver was developed for communicating with the analog and digital sensors. Innovative techniques (e.g., balanced energy efficient routing and heterogeneous over-the-air mote reprogramming) maintained high success rates (>96%) and enabled effective software updating throughout the large-scale heterogeneous WSN. The edaphic properties monitored by the network showed strong agreement with data logger measurements and were fitted to pedotransfer functions for estimating local soil hydraulic properties. Furthermore, sap flow measurements, scaled to tree stand transpiration, were found to be at or below potential evapotranspiration estimates. While outdoor WSNs still present numerous challenges, the ASWP testbed proves to be an effective and (relatively) low-cost environmental monitoring solution and represents a step towards developing a platform for monitoring and quantifying statistically relevant environmental parameters from large-scale network deployments.

  13. A Networked Sensor System for the Analysis of Plot-Scale Hydrology

    PubMed Central

    Villalba, German; Plaza, Fernando; Zhong, Xiaoyang; Davis, Tyler W.; Navarro, Miguel; Li, Yimei; Slater, Thomas A.; Liang, Yao; Liang, Xu

    2017-01-01

    This study presents the latest updates to the Audubon Society of Western Pennsylvania (ASWP) testbed, a $50,000 USD, 104-node outdoor multi-hop wireless sensor network (WSN). The network collects environmental data from over 240 sensors, including the EC-5, MPS-1 and MPS-2 soil moisture and soil water potential sensors and self-made sap flow sensors, across a heterogeneous deployment composed of MICAz, IRIS and TelosB wireless motes. A low-cost sensor board and software driver was developed for communicating with the analog and digital sensors. Innovative techniques (e.g., balanced energy efficient routing and heterogeneous over-the-air mote reprogramming) maintained high success rates (>96%) and enabled effective software updating throughout the large-scale heterogeneous WSN. The edaphic properties monitored by the network showed strong agreement with data logger measurements and were fitted to pedotransfer functions for estimating local soil hydraulic properties. Furthermore, sap flow measurements, scaled to tree stand transpiration, were found to be at or below potential evapotranspiration estimates. While outdoor WSNs still present numerous challenges, the ASWP testbed proves to be an effective and (relatively) low-cost environmental monitoring solution and represents a step towards developing a platform for monitoring and quantifying statistically relevant environmental parameters from large-scale network deployments. PMID:28335534

  14. CFD Study of Full-Scale Aerobic Bioreactors: Evaluation of Dynamic O2 Distribution, Gas-Liquid Mass Transfer and Reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbird, David; Sitaraman, Hariswaran; Stickel, Jonathan

    If advanced biofuels are to measurably displace fossil fuels in the near term, they will have to operate at levels of scale, efficiency, and margin unprecedented in the current biotech industry. For aerobically grown products in particular, scale-up is complex, and the practical size, cost, and operability of extremely large reactors are not well understood. Put simply, the problem of how to attain fuel-class production scales comes down to cost-effective delivery of oxygen at high mass transfer rates and low capital and operating costs. To that end, very large reactor vessels (>500 m3) are proposed in order to achieve favorable economies of scale. Additionally, techno-economic evaluation indicates that bubble-column reactors are more cost-effective than stirred-tank reactors in many low-viscosity cultures. To advance the design of extremely large aerobic bioreactors, we have performed computational fluid dynamics (CFD) simulations of bubble-column reactors. A multiphase Euler-Euler model is used to explicitly account for the spatial distribution of air (i.e., gas bubbles) in the reactor. Expanding on the existing bioreactor CFD literature (typically focused on the hydrodynamics of bubbly flows), our simulations include interphase mass transfer of oxygen and a simple phenomenological reaction representing the uptake and consumption of dissolved oxygen by submerged cells. The simulations reproduce the expected flow profiles, with net upward flow in the center of the column and downward flow near the wall. At high simulated oxygen uptake rates (OUR), oxygen-depleted regions can be observed in the reactor. By increasing the gas flow to enhance mixing and eliminate depleted areas, a maximum oxygen transfer rate (OTR) is obtained as a function of superficial velocity. These insights regarding minimum superficial velocity and maximum reactor size are incorporated into NREL's larger techno-economic models to supplement standard reactor design equations.

  15. An economical device for carbon supplement in large-scale micro-algae production.

    PubMed

    Su, Zhenfeng; Kang, Ruijuan; Shi, Shaoyuan; Cong, Wei; Cai, Zhaoling

    2008-10-01

    A simple but efficient carbon-supplying device was designed and developed, and the corresponding carbon-supplying technology is described. The absorption characteristics of the device were studied. The carbon-supplying system proved economical for large-scale cultivation of Spirulina sp. in an outdoor raceway pond, and the gaseous carbon dioxide absorptivity was enhanced to above 78%, which can greatly reduce production cost.

  16. One-Pot Large-Scale Synthesis of Carbon Quantum Dots: Efficient Cathode Interlayers for Polymer Solar Cells.

    PubMed

    Yang, Yuzhao; Lin, Xiaofeng; Li, Wenlang; Ou, Jiemei; Yuan, Zhongke; Xie, Fangyan; Hong, Wei; Yu, Dingshan; Ma, Yuguang; Chi, Zhenguo; Chen, Xudong

    2017-05-03

    Cathode interlayers (CILs) with low cost, low toxicity, and excellent cathode-modification ability are necessary for the large-scale industrialization of polymer solar cells (PSCs). In this contribution, we demonstrate carbon quantum dots (C-dots), synthesized in one pot with high yield, serving as an efficient CIL for inverted PSCs. The C-dots were synthesized by facile, economical microwave pyrolysis in a household microwave oven within 7 min. Ultraviolet photoelectron spectroscopy (UPS) studies showed that the C-dots form a dipole at the interface, decreasing the work function (WF) of the cathode. External quantum efficiency (EQE) measurements and 2D excitation-emission topographical maps revealed that the C-dots down-shift high-energy near-ultraviolet light to low-energy visible light, generating additional photocurrent. A remarkable improvement in power conversion efficiency (PCE) was attained by incorporating C-dots as the CIL: the PCE was boosted from 4.14% to 8.13%, one of the best efficiencies reported for inverted PSCs using carbon-based materials as interlayers. These results demonstrate that C-dots are a potential candidate for future low-cost, large-area PSC production.

  17. Controllable lasing performance in solution-processed organic-inorganic hybrid perovskites.

    PubMed

    Kao, Tsung Sheng; Chou, Yu-Hsun; Hong, Kuo-Bin; Huang, Jiong-Fu; Chou, Chun-Hsien; Kuo, Hao-Chung; Chen, Fang-Chung; Lu, Tien-Chang

    2016-11-03

    Solution-processed organic-inorganic perovskites are fascinating due to their remarkable photo-conversion efficiency and great potential for the cost-effective, versatile and large-scale manufacturing of optoelectronic devices. In this paper, we demonstrate that perovskite nanocrystal sizes can be controlled simply by manipulating the precursor solution concentrations in a two-step sequential deposition process, achieving feasible tunability of the excitonic properties and lasing performance of hybrid metal-halide perovskites. The lasing threshold is around 230 μJ cm⁻² in this solution-processed organic-inorganic lead-halide material, comparable to colloidal quantum dot lasers. The efficient stimulated emission originates from multiple random scattering provided by the micrometer-scale rugged morphology and polycrystalline grain boundaries; the excitonic properties of the perovskites thus correlate strongly with the morphology of the formed nanocrystals. Compared to conventional lasers, which normally serve as coherent light sources, perovskite random lasers are promising for low-cost thin-film lasing devices for flexible and speckle-free imaging applications.

  18. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since Bayesian data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select the optimal candidate data set, i.e., the one giving the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the standard MC estimation, while greatly reducing the computational costs.
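The variance-reduction idea behind MLMC can be shown with a toy two-level estimator (a hedged sketch under assumed toy models, not the authors' subsurface simulator): many cheap coarse-model samples estimate the bulk of the expectation, while a few expensive fine-model samples correct the coarse bias.

```python
import random

# Hedged sketch of a two-level multilevel Monte Carlo (MLMC) estimator
# of E[P] for P(x) = x^2 with x ~ U(0, 1). The "coarse" model below is
# an invented cheap, biased approximation, for illustration only.
random.seed(0)

def fine(x):
    return x * x                 # "high-fidelity" model

def coarse(x):
    return x * x - 0.01 * x      # cheap biased approximation (assumed)

n0, n1 = 100_000, 1_000          # many cheap samples, few expensive ones

# Level 0: plain MC on the cheap model.
level0 = sum(coarse(random.random()) for _ in range(n0)) / n0

# Level 1: correct the coarse bias with paired fine/coarse evaluations
# on the same random draws, so the correction term has small variance.
level1 = 0.0
for _ in range(n1):
    x = random.random()
    level1 += fine(x) - coarse(x)
level1 /= n1

estimate = level0 + level1       # telescoping sum: E[coarse] + E[fine - coarse]
# True value of E[x^2] for x ~ U(0, 1) is 1/3.
```

Because the correction term has much smaller variance than the quantity itself, only a few expensive fine-model runs are needed, which is the source of the cost savings the abstract describes.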

  19. Scalable and cost-effective NGS genotyping in the cloud.

    PubMed

    Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P

    2015-10-15

    While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust, routine whole-genome sequencing data can be accurately rendered into medically actionable reports within hours and at costs in the tens of dollars. We take a step toward addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole-genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for optimizing clinical turnaround of whole-genome analysis and for workflow management, including strategic batching of individual genomes and efficient cluster resource configuration.

  20. Eucalyptus plantations for energy production in Hawaii. 1980 annual report, January 1980-December 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitesell, C. D.

    1980-01-01

    In 1980, 200 acres of eucalyptus trees were planted for a research and development biomass energy plantation, bringing the total area under cultivation to 300 acres. Of this total acreage, 90 acres (30%) were planted in experimental plots. The remaining 70% of the cultivated area was closely monitored to determine the economic cost/benefit ratio of large-scale biomass energy production. In the large-scale plantings, standard field practices were set up for all phases of production: nursery, clearing, planting, weed control and fertilization. These practices were constantly evaluated for potential improvements in efficiency and reduced cost. Promising experimental treatments were implemented on a large scale to test their effectiveness under field production conditions. In the experimental areas, all scheduled data collection in 1980 was completed, and most measurements have been keypunched and analyzed. Soil and leaf samples were analyzed for nutrient concentrations. Crop-logging procedures were set up to monitor tree growth through plant tissue analysis. An intensive computer search on biomass, nursery practices, harvesting equipment and herbicide applications was completed through the services of the US Forest Service.

  1. Transaction-Based Building Controls Framework, Volume 1: Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somasundaram, Sriram; Pratt, Robert G.; Akyol, Bora A.

    This document proposes a framework concept to achieve the objectives of raising buildings' efficiency and energy-savings potential, benefiting building owners and operators. We call it a transaction-based framework, wherein mutually beneficial and cost-effective market-based transactions can be enabled between multiple players across different domains. Transaction-based building controls are one part of the transactional energy framework. While these controls realize benefits by enabling automatic, market-based intra-building efficiency optimizations, the transactional energy framework provides similar benefits using the same market-based structure, yet on a larger scale and beyond just buildings, to society at large.

  2. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
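The "sketching" step this record describes can be illustrated in a few lines (an assumed toy in Python, not the Julia/MADS implementation; sizes and data are invented): a k × m Gaussian matrix compresses an m-dimensional observation vector to k dimensions while approximately preserving norms and inner products.

```python
import numpy as np

# Hedged sketch of randomized "sketching" dimension reduction, in the
# spirit of randomized numerical linear algebra. Toy sizes only.
rng = np.random.default_rng(42)
m, k = 20_000, 200                             # observations -> sketch size

d = rng.standard_normal(m)                     # full synthetic data vector
S = rng.standard_normal((k, m)) / np.sqrt(k)   # Gaussian sketching matrix
d_sk = S @ d                                   # k-dimensional sketched data

# Johnson-Lindenstrauss flavor: the sketch approximately preserves norms,
# so downstream inversion costs can scale with k rather than m.
ratio = float(np.linalg.norm(d_sk) / np.linalg.norm(d))
```

The same sketch applied to the forward-model outputs lets the misfit be evaluated in the reduced k-dimensional space, which is how the memory and compute costs come to scale with information content rather than data size.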

  3. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme growth of next-generation sequencing output has created a shortage of efficient alignment approaches for ultra-large biological sequence data of different types. Distributed and parallel computing is a crucial technique for accelerating ultra-large (e.g., files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implemented a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on DNA and protein data sets larger than 1 GB showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure; its open-source code and datasets are available at http://lab.malab.cn/soft/halign.

  4. New High Performance Water Vapor Membranes to Improve Fuel Cell Balance of Plant Efficiency and Lower Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagener, Earl; Topping, Chris; Morgan, Brad

    Hydrogen fuel cells are currently one of the more promising long-term alternative energy options, and of the range of fuel cell technologies under development, proton exchange membranes (PEMs) have the advantage of being able to deliver high power density at relatively low operating temperatures. This is essential for systems such as fuel cell vehicles (FCVs) and many stationary applications that undergo frequent on/off cycling. One of the biggest challenges for PEM systems is the need to maintain a high level of hydration in the cell to enable efficient conduction of protons from the anode to the cathode. In addition to significant power loss, low-humidity conditions lead to increased stress on the membranes, which can result in both physical and chemical degradation. Therefore, an effective fuel cell humidifier can be critical for the efficient operation and durability of the system under high-load and low-humidity conditions. The most common types of water vapor transport (WVT) devices are based on water-permeable membrane separators. Successful membranes must effectively permeate water vapor while restricting crossover of air, and be robust to the temperature and humidity fluctuations experienced in fuel cell systems. DOE-sponsored independent evaluations indicate that balance-of-plant components, including humidification devices, make up more than half of the cost of current automotive fuel cell systems. Despite its relatively widespread use in other applications, the current industry-standard perfluorosulfonic acid based Nafion® remains expensive compared with non-perfluorinated polymer membranes. During Phase II of this project, we demonstrated the improved performance of our semi-fluorinated perfluorocyclobutyl polymer based membranes compared with the current industry standard, perfluorosulfonic acid based Nafion®, at ~50% lower cost.
    Building on this work, highlights of our Phase IIB developments, in close collaboration with leading global automotive component supplier Dana Holding Corporation, include:
    • Development of a lower-cost series of ionomers, with reduced synthetic steps and purification requirements and improved scalability, while maintaining the performance advantages over Nafion® demonstrated during Phase II.
    • Demonstration of efficient, continuous production of down-selected WVT membrane configurations at commercial continuous roll-coating facilities. We see no major issues producing Tetramer-supported WVT membranes on a large commercial scale.
    • Following the production and testing of three prototype humidifier stacks, a full-size humidifier unit was manufactured and successfully tested by an automotive customer for performance and durability.
    • Assuming the availability of a reasonably priced support, our cost projections for mid- to large-scale production of Tetramer WVT membranes are within the acceptable range of the leading automotive manufacturers, and at large scale our calculations based on bulk sourcing of raw materials indicate we can achieve the project goal of $25/m2.

  5. Optimal city size and population density for the 21st century.

    PubMed

    Speare A; White, M J

    1990-10-01

    The thesis that large-scale urban areas result in greater efficiency, reduced costs, and a better quality of life is reexamined. The environmental and social costs are measured for different scales of settlement. The desirability and perceived problems of a particular place are examined in relation to size of place. The consequences of population decline are considered. New York City is described as providing opportunities in employment, shopping, and cultural activities as well as a high cost of living, crime, and pollution. The historical development of large cities in the US is described. Immigration has contributed to a greater concentration of population than would otherwise have occurred. The spatial proximity of goods and services argument (agglomeration economies) has changed with advancements in technology such as roads, trucking, and electronic communication. There is no optimal city size. The overall effect of agglomeration can be assessed by determining whether the markets for goods and labor are adequate to maximize well-being and balance the negative and positive aspects of urbanization. The environmental costs of cities increase with size when air quality, water quality, sewage treatment, and hazardous waste disposal are considered. Smaller-scale and lower-density cities have the advantage of a lower concentration of pollutants. Also, mobilization for program support is easier with a homogeneous population. Lower population growth in large cities would contribute to a higher quality of life, since large metropolitan areas have a concentration of immigrants, younger age distributions, and minority groups with higher than average birth rates. The negative consequences of decline can be avoided if reduction of population in large cities takes place gradually. For example, poorer-quality housing can be removed for open space. Cities should, however, still attract all classes of people with opportunities equally available.

  6. Are large farms more efficient? Tenure security, farm size and farm efficiency: evidence from northeast China

    NASA Astrophysics Data System (ADS)

    Zhou, Yuepeng; Ma, Xianlei; Shi, Xiaoping

    2017-04-01

    How to increase production efficiency, guarantee grain security, and increase farmers' income using limited farmland is a great challenge that China is facing. Although theory predicts that secure property rights and moderate-scale management of farmland can increase land productivity, reduce farm-related costs, and raise farmers' income, empirical studies on the size and magnitude of these effects are scarce. A number of studies have examined the impacts of land tenure or farm size on productivity or efficiency, respectively. There are also a few studies linking farm size, land tenure, and efficiency together. However, to the best of our knowledge, no studies consider tenure security and farm efficiency together across different farm scales in China, and few analyze the profit frontier. In this study, we focus on the impacts of land tenure security and farm size on farm profit efficiency, using farm-level data collected from 811 households in 23 villages in Liaoning in 2015. Seven farm scales have been identified to represent small, medium, moderate-scale, and large farms. Technical efficiency is analyzed with a stochastic frontier production function. Profit efficiency is regressed on a set of explanatory variables that includes farm size dummies, land tenure security indexes, and household characteristics. We found that: 1) The technical efficiency scores for production efficiency (average score = 0.998) indicate that production is already very close to the frontier, so there is little room to improve production efficiency. However, there is more room to raise profit efficiency (average score = 0.768) by investing more in farm size expansion, seed, hired labor, pesticide, and irrigation. 2) Farms between 50-80 mu are most efficient from the viewpoint of profit efficiency. The so-called moderate-scale farms (100-150 mu) defined by the governmental guideline show no advantage in efficiency. 3) Formal land certificates and farmers' participation in the land rental market are important determinants of profit efficiency across farm scales. 4) Fertilizer use has been excessive in Liaoning and could lead to a decline in crop profit.
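The efficiency-score machinery in this abstract can be illustrated in miniature. The sketch below uses corrected OLS (COLS), a simpler deterministic-frontier relative of the stochastic frontier method the authors actually use, on synthetic data; the variable names, numbers, and Cobb-Douglas form are illustrative assumptions, not the study's model.

```python
import numpy as np

# Hypothetical illustration (not the study's data or model): estimate
# farm-level technical efficiency with corrected OLS (COLS), a simple
# deterministic-frontier cousin of the stochastic frontier approach.
rng = np.random.default_rng(0)
n = 200
land = rng.uniform(10, 150, n)           # farm size, mu (assumed)
labor = rng.uniform(1, 5, n)             # workers per farm (assumed)
ineff = rng.exponential(0.2, n)          # one-sided inefficiency term
log_output = 1.0 + 0.6 * np.log(land) + 0.3 * np.log(labor) - ineff

# Fit a log-linear Cobb-Douglas production function by OLS.
X = np.column_stack([np.ones(n), np.log(land), np.log(labor)])
beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
resid = log_output - X @ beta

# COLS: shift the fitted line up through the best observation; each
# farm's efficiency is its distance below that frontier, in (0, 1].
efficiency = np.exp(resid - resid.max())
print(f"mean technical efficiency: {efficiency.mean():.3f}")
```

The stochastic frontier version differs mainly in decomposing the residual into noise plus a one-sided inefficiency term by maximum likelihood rather than attributing it all to inefficiency.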

  7. In Vitro Cryopreservation of Date Palm Caulogenic Meristems.

    PubMed

    Fki, Lotfi; Chkir, Olfa; Kriaa, Walid; Nasri, Ameni; Baklouti, Emna; Masmoudi, Raja B; Rival, Alain; Drira, Noureddine; Panis, Bart

    2017-01-01

    Cryopreservation is the technology of choice not only for plant genetic resource preservation but also for virus eradication and for the efficient management of large-scale micropropagation. In this chapter, we describe three cryopreservation protocols (standard vitrification, droplet vitrification, and encapsulation vitrification) for highly proliferating date palm meristems initiated from in vitro cultures on plant growth regulator-free MS medium. Sucrose preculture and cold hardening treatments significantly improve survival rates. Regeneration rates obtained with the standard vitrification, encapsulation-vitrification, and droplet-vitrification protocols can reach 30, 40, and 70%, respectively. Plants regenerated from cryopreserved and non-cryopreserved explants alike show no morphological variation and maintain genetic integrity, indicating no adverse effect of the cryogenic treatment. Cryopreservation of date palm in vitro cultures enables commercial tissue culture laboratories to move to large-scale propagation from cryopreserved cell lines producing true-to-type plants after clonal field-testing trials. When the costs of cryostorage and in-field conservation of date palm cultivars are compared, tissue cryopreservation is the most cost-effective. Moreover, many of the risks linked to field conservation, such as erosion due to climatic, edaphic, and phytopathologic constraints, are circumvented.

  8. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with systems theory, management science, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity through a systematic and organized approach; efficiency and cost-effectiveness are thus the driving forces in organizing engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making) are not emphasized but are considered.

  9. High-Efficiency InGaN/GaN Quantum Well-Based Vertical Light-Emitting Diodes Fabricated on β-Ga2O3 Substrate.

    PubMed

    Muhammed, Mufasila M; Alwadai, Norah; Lopatin, Sergei; Kuramata, Akito; Roqan, Iman S

    2017-10-04

    We demonstrate a state-of-the-art high-efficiency GaN-based vertical light-emitting diode (VLED) grown on a transparent and conductive (-201)-oriented β-Ga2O3 substrate, obtained using a straightforward growth process that does not require a high-cost lift-off technique or complex fabrication process. High-resolution scanning transmission electron microscopy (STEM) images confirm that we produced high-quality upper layers, including a multi-quantum well (MQW) grown on the masked β-Ga2O3 substrate. STEM imaging also shows a well-defined MQW without InN diffusion into the barrier. Electroluminescence (EL) measurements at room temperature indicate that we achieved a very high internal quantum efficiency (IQE) of 78%; at lower temperatures, IQE reaches ∼86%. The photoluminescence (PL) and time-resolved PL analysis indicate that, at a high carrier injection density, the emission is dominated by radiative recombination with a negligible Auger effect; no quantum-confined Stark effect is observed. At low temperatures, no efficiency droop is observed at a high carrier injection density, indicating the superior quality of the VLED structure obtained without lift-off processing, which is cost-effective for large-scale devices.

  10. Towards high efficiency heliostat fields

    NASA Astrophysics Data System (ADS)

    Arbes, Florian; Wöhrbach, Markus; Gebreiter, Daniel; Weinrebe, Gerhard

    2017-06-01

    CSP power plants have great potential to contribute substantially to the world's energy supply. To unlock this potential, cost reductions are required for future projects. Heliostat field layout optimization offers a great opportunity to improve field efficiency. Field efficiency primarily depends on the positions of the heliostats around the tower, commonly known as the heliostat field layout; heliostat shape also influences efficiency. Improvements to optical efficiency result in electricity cost reductions without adding any extra technical complexity. Due to computational challenges, heliostat fields are often arranged in patterns. The mathematical models of the radial-staggered or spiral patterns are based on two parameters and thus lead to uniform patterns. Optical efficiencies of a heliostat field do not change uniformly with distance to the tower, and they differ between the northern and southern parts of the field. A fixed pattern is therefore suboptimal in many parts of the heliostat field, especially in large-scale fields. In this paper, two methods are described that allow the field density to be adapted to non-uniform field efficiencies. A new software tool for large-scale heliostat field evaluation is presented; it allows fast optimization of several parameters of the pattern modification routines. It was used to design a heliostat field with 23,000 heliostats, which is currently planned for a site in South Africa.
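The two-parameter radial-staggered pattern mentioned above can be sketched as follows; the starting radius, ring spacing, and heliostat spacing are arbitrary assumptions, and a real layout tool would also model blocking, shading, cosine losses, and land constraints.

```python
import numpy as np

# Minimal sketch of a radial-staggered heliostat layout. Heliostats
# sit on concentric rings around the tower; every other ring is
# rotated by half the angular pitch so heliostats "stagger" and
# block each other less. All parameters are assumed values.
def radial_staggered(r0=80.0, dr=12.0, n_rings=10, spacing=10.0):
    xs, ys = [], []
    for i in range(n_rings):
        r = r0 + i * dr                        # ring radius, m
        n = int(2 * np.pi * r / spacing)       # heliostats on this ring
        dphi = 2 * np.pi / n
        offset = 0.5 * dphi if i % 2 else 0.0  # stagger odd rings
        phi = offset + dphi * np.arange(n)
        xs.append(r * np.cos(phi))
        ys.append(r * np.sin(phi))
    return np.concatenate(xs), np.concatenate(ys)

x, y = radial_staggered()
print(len(x), "heliostats")
```

The density-modification methods the paper describes would, in this picture, vary `dr` and `spacing` with position instead of holding them fixed over the whole field.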

  11. Combined heat and power systems: economic and policy barriers to growth

    PubMed Central

    2012-01-01

    Background Combined Heat and Power (CHP) systems can provide a range of benefits to users with regards to efficiency, reliability, costs and environmental impact. Furthermore, increasing the amount of electricity generated by CHP systems in the United States has been identified as having significant potential for impressive economic and environmental outcomes on a national scale. Given the benefits from increasing the adoption of CHP technologies, there is value in improving our understanding of how desired increases in CHP adoption can best be achieved. Obstacles to adoption are currently understood to stem from regulatory as well as economic and technological barriers. In our research, we answer the following question: given the current policy and economic environment facing the CHP industry, what changes need to take place in this space in order for CHP systems to be competitive in the energy market? Methods We focus our analysis primarily on CHP systems that use natural gas turbines. Our analysis takes a two-pronged approach. We first conduct a statistical analysis of the impact of state policies on increases in electricity generated from CHP systems. Second, we conduct a cost-benefit analysis to determine in which circumstances funding incentives are necessary to make CHP technologies cost-competitive. Results Our policy analysis shows that regulatory improvements do not explain the growth in adoption of CHP technologies but hold the potential to encourage increases in electricity generated from CHP systems in small-scale applications. Our cost-benefit analysis shows that CHP systems are only cost-competitive in large-scale applications and that funding incentives would be necessary to make CHP technology cost-competitive in small-scale applications.
Conclusion From the synthesis of these analyses we conclude that because large-scale applications of natural gas turbines are already cost-competitive, policy initiatives aimed at a CHP market dominated primarily by large-scale (and therefore already cost-competitive) systems have not been effectively directed. Our recommendation is that for CHP technologies using natural gas turbines, policy should focus on increasing CHP growth in small-scale systems. This can best be achieved through redirection of state and federal incentives, research and development, adoption of smart grid technology, and outreach and education. PMID:22540988

  12. Clinical governance and operations management methodologies.

    PubMed

    Davies, C; Walley, P

    2000-01-01

    The clinical governance mechanism, introduced in 1998 in the UK National Health Service (NHS), aims to deliver high-quality care with efficient, effective and cost-effective patient services. Scally and Donaldson recognised that new approaches are needed, and operations management techniques comprise potentially powerful methodologies for understanding the process of care, which can be applied both within and across professional boundaries. This paper summarises four studies in hospital Trusts which took approaches to process improvement that were different from, and less structured than, business process re-engineering (BPR). The problems were then amenable to change at relatively low cost and on a short timescale, producing significant improvement to patient care. This less structured approach to operations management avoided the overhead costs of large-scale and costly change such as new information technology (IT) systems. The most successful changes were brought about by formal tools to control the quantity, content and timing of changes.

  13. Development and manufacture of reactive-transfer-printed CIGS photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Eldada, Louay; Sang, Baosheng; Lu, Dingyuan; Stanbery, Billy J.

    2010-09-01

    In recent years, thin-film photovoltaic (PV) companies have started realizing their low manufacturing cost potential and capturing an increasingly large market share from multicrystalline silicon companies. Copper Indium Gallium Selenide (CIGS) is the most promising thin-film PV material, having demonstrated the highest energy conversion efficiency in both cells and modules. However, most CIGS manufacturers still face the challenge of delivering a reliable and rapid manufacturing process that can scale effectively and deliver on the promise of this material system. HelioVolt has developed a reactive transfer process for CIGS absorber formation that has the benefits of good compositional control, high-quality CIGS grains, and a fast reaction. The reactive transfer process is a two-stage CIGS fabrication method: precursor films are deposited onto substrates and reusable print plates in the first stage, while in the second stage the CIGS layer is formed by rapid heating with Se confinement. High-quality CIGS films with large grains were produced on a full-scale manufacturing line and resulted in high-efficiency, large-form-factor modules. With 14% cell efficiency and 12% module efficiency, HelioVolt started to commercialize the process on its first production line with 20 MW nameplate capacity.

  14. Los Alamos Discovers Super Efficient Solar Using Perovskite Crystals

    ScienceCinema

    Mohite, Aditya; Nie, Wanyi

    2018-05-11

    State-of-the-art photovoltaics using high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high temperature crystal-growth processes offer promising routes for developing low-cost, solar-based clean global energy solutions for the future. Solar cells composed of the recently discovered material organic-inorganic perovskites offer the efficiency of silicon, yet suffer from a variety of deficiencies limiting the commercial viability of perovskite photovoltaic technology. In research to appear in Science, Los Alamos National Laboratory researchers reveal a new solution-based hot-casting technique that eliminates these limitations, one that allows for the growth of high-quality, large-area, millimeter-scale perovskite crystals and demonstrates that highly efficient and reproducible solar cells with reduced trap assisted recombination can be realized.

  15. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation carries little information on the K distribution, the data were compressed using the zeroth temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
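The moment-based compression described in the abstract reduces each breakthrough curve to a scalar. A minimal sketch (using a synthetic Gaussian curve, not the MRI data) computes the zeroth and first temporal moments and the mean travel time they imply.

```python
import numpy as np

# Sketch of moment-based compression: each voxel's breakthrough curve
# C(t) is reduced to temporal moments. The zeroth moment is the area
# under the curve; the first moment normalized by it gives the mean
# arrival (travel) time. The Gaussian pulse below is synthetic,
# standing in for the MRI-derived tracer data.
t = np.linspace(0.0, 100.0, 1001)                   # time, minutes
c = np.exp(-((t - 30.0) ** 2) / (2.0 * 5.0 ** 2))   # breakthrough curve
dt = t[1] - t[0]

m0 = np.sum(c) * dt                # zeroth temporal moment (area)
m1 = np.sum(t * c) * dt            # first temporal moment
mean_travel_time = m1 / m0         # minutes
print(f"mean travel time: {mean_travel_time:.2f} min")
```

Compressing millions of curves this way is what lets the inversion work with one scalar per voxel instead of a full time series.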

  17. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The demand for liquid hydrogen, particularly driven by clean energy applications, will rise in the near future. As industrial large-scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase to a multiple of today's typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized for capital expenditure. New concepts must ensure manageable plant complexity and flexible operability. In the phase of process development and selection, key equipment for large-scale liquefiers, such as turbines, compressors and heat exchangers, must be dimensioned iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview of the approach, challenges and preliminary results in the development of efficient and economically viable concepts for large-scale hydrogen liquefaction.
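A common figure of merit for comparing liquefier concepts like these is the second-law efficiency implied by the specific energy consumption (SEC). The sketch below assumes the commonly cited minimum work of roughly 3.9 kWh per kg of liquid hydrogen; the SEC values are hypothetical, not results from this paper.

```python
# Second-law (exergy) efficiency of a liquefier from its specific
# energy consumption (SEC). The ideal-work figure is the commonly
# cited ~3.9 kWh per kg LH2 from ambient conditions, including
# ortho-para conversion; the SEC values below are hypothetical.
IDEAL_WORK = 3.9                        # kWh per kg LH2

def exergy_efficiency(sec_kwh_per_kg):
    """Fraction of the thermodynamic minimum work actually achieved."""
    return IDEAL_WORK / sec_kwh_per_kg

for sec in (12.0, 10.0, 6.0):           # illustrative: small plant -> large-scale target
    print(f"SEC {sec:4.1f} kWh/kg -> exergy efficiency {exergy_efficiency(sec):.1%}")
```

Halving the SEC roughly doubles this efficiency, which is why energy efficiency dominates the total-cost-of-ownership argument made above.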

  18. Effectiveness and cost-effectiveness of telehealthcare for chronic obstructive pulmonary disease: study protocol for a cluster randomized controlled trial.

    PubMed

    Udsen, Flemming Witt; Lilholt, Pernille Heyckendorff; Hejlesen, Ole; Ehlers, Lars Holger

    2014-05-21

    Several feasibility studies show promising effects of telehealthcare on health outcomes and health-related quality of life for patients suffering from chronic obstructive pulmonary disease, and some of these studies suggest that telehealthcare may even lower healthcare costs. However, the only large-scale trial to date - the Whole System Demonstrator Project in England - has raised doubts about these results, since it concluded that telehealthcare as a supplement to usual care is not likely to be cost-effective compared with usual care alone. The present study, known as 'TeleCare North' in Denmark, seeks to address these doubts by implementing a large-scale, pragmatic, cluster-randomized trial with a nested economic evaluation. The purpose of the study is to assess the effectiveness and cost-effectiveness of a telehealth solution for patients suffering from chronic obstructive pulmonary disease compared to usual practice. General practitioners will be responsible for recruiting eligible participants (1,200 participants are expected) in the geographical area of the North Denmark Region. Twenty-six municipality districts in the region define the randomization clusters. The primary outcomes are changes in health-related quality of life and the incremental cost-effectiveness ratio, measured from baseline to follow-up at 12 months. Secondary outcomes are changes in mortality and physiological indicators (diastolic and systolic blood pressure, pulse, oxygen saturation, and weight). There has been a call for large-scale clinical trials with rigorous cost-effectiveness assessments in telehealthcare research, and this study is meant to improve the international evidence base for the effectiveness and cost-effectiveness of telehealthcare for patients suffering from chronic obstructive pulmonary disease. ClinicalTrials.gov NCT01984840, November 14, 2013.
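The trial's primary economic endpoint, the incremental cost-effectiveness ratio (ICER), is a simple computation once arm-level means are available. The figures below are hypothetical placeholders, not trial results.

```python
# Minimal sketch of the incremental cost-effectiveness ratio (ICER)
# of telehealthcare vs. usual care. All numbers are hypothetical
# placeholders, not results from the TeleCare North trial.
cost_tele, cost_usual = 5200.0, 4600.0   # mean cost per patient (EUR)
qaly_tele, qaly_usual = 0.71, 0.68       # mean QALYs over 12 months

icer = (cost_tele - cost_usual) / (qaly_tele - qaly_usual)
print(f"ICER = {icer:.0f} EUR per QALY gained")

# Decision rule: the intervention is considered cost-effective if the
# ICER falls below the decision maker's willingness-to-pay threshold.
threshold = 30000.0
cost_effective = icer < threshold
```

In practice the trial's nested economic evaluation would also bootstrap the cost and effect differences to characterize uncertainty around this ratio.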

  19. Scaling of surface-plasma reactors with a significantly increased energy density for NO conversion.

    PubMed

    Malik, Muhammad Arif; Xiao, Shu; Schoenbach, Karl H

    2012-03-30

    Comparative studies revealed that surface plasmas developing along a solid-gas interface are significantly more effective and energy-efficient for remediation of toxic pollutants in air than conventional plasmas propagating in air. Scaling surface-plasma reactors to large volumes by operating them in parallel suffers from the adverse effects of space charges generated at the dielectric surfaces of neighboring discharge chambers. This study revealed that a conductive foil at cathode potential, placed between the dielectric plates as a shield, not only decoupled the discharges but also increased the electrical power deposited in the reactor by a factor of about forty over the power level obtained without shielding, without loss of efficiency for NO removal. That the shield had no negative effect on efficiency is verified by the fact that the energy cost for 50% NO removal was about 60 eV/molecule and the energy constant k(E) was about 0.02 L/J in both the shielded and unshielded cases.
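The two figures of merit quoted above, energy cost in eV per removed molecule and the energy constant k(E), follow from standard definitions in plasma remediation work; the sketch below uses hypothetical power, flow, and concentration values, not the paper's measurements.

```python
import math

# Sketch of the two figures of merit above, using their standard
# definitions in plasma pollutant-removal work. All input values are
# hypothetical, chosen only to exercise the formulas.
P = 10.0                          # discharge power, W
Q = 0.05                          # gas flow rate, L/s
c_in, c_out = 500e-6, 250e-6      # NO mole fraction in/out (50% removal)

EV = 1.602e-19                    # joules per electronvolt
N_PER_L = 2.5e22                  # gas molecules per litre near ambient

sed = P / Q                                    # specific energy density, J/L
removed_per_s = Q * (c_in - c_out) * N_PER_L   # NO molecules removed per second
ev_per_molecule = P / removed_per_s / EV       # energy cost, eV/molecule

# Energy constant from the usual exponential removal model,
# c_out / c_in = exp(-kE * SED):
kE = math.log(c_in / c_out) / sed              # L/J
print(f"{ev_per_molecule:.0f} eV/molecule, kE = {kE:.4f} L/J")
```

Identical kE in the shielded and unshielded cases, as reported above, is what justifies the claim that shielding boosted deposited power without any efficiency penalty.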

  20. On the Path to SunShot. The Role of Advancements in Solar Photovoltaic Efficiency, Reliability, and Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodhouse, Michael; Jones-Albertus, Rebecca; Feldman, David

    2016-05-01

    This report examines the remaining challenges to achieving the competitive photovoltaic (PV) costs and large-scale deployment envisioned under the U.S. Department of Energy's SunShot Initiative. Solar-energy cost reductions can be realized through lower PV module and balance-of-system (BOS) costs as well as improved system efficiency and reliability. Numerous combinations of PV improvements could help achieve the levelized cost of electricity (LCOE) goals because of the tradeoffs among key metrics like module price, efficiency, and degradation rate as well as system price and lifetime. Using LCOE modeling based on bottom-up cost analysis, two specific pathways are mapped to exemplify the many possible approaches to module cost reductions of 29%-38% between 2015 and 2020. BOS hardware and soft cost reductions, ranging from 54%-77% of total cost reductions, are also modeled. The residential sector's high supply-chain costs, labor requirements, and customer-acquisition costs give it the greatest BOS cost-reduction opportunities, followed by the commercial sector, although opportunities are available to the utility-scale sector as well. Finally, a future scenario is considered in which very high PV penetration requires additional costs to facilitate grid integration and increased power-system flexibility, which might necessitate even lower solar LCOEs. The analysis of a pathway to 3-5 cents/kWh PV systems underscores the importance of combining robust improvements in PV module and BOS costs as well as PV system efficiency and reliability if such aggressive long-term targets are to be achieved.
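The LCOE tradeoffs discussed above can be made concrete with a minimal capital-recovery-factor calculation; the inputs are hypothetical, and the real SunShot modeling also accounts for degradation, financing structure, taxes, and incentives.

```python
# Minimal LCOE sketch in the spirit of the bottom-up analysis above.
# All inputs are hypothetical illustration values.
def lcoe(capex_per_kw, opex_per_kw_yr, cap_factor, rate, years):
    """Levelized cost of electricity, $/kWh, via a capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_kwh = cap_factor * 8760.0          # kWh per kW per year
    return (capex_per_kw * crf + opex_per_kw_yr) / annual_kwh

# Example: $1000/kW system, $15/kW-yr O&M, 25% capacity factor,
# 7% discount rate, 30-year life.
print(f"{lcoe(1000.0, 15.0, 0.25, 0.07, 30):.4f} $/kWh")
```

Even this toy version shows why module price, system lifetime, and reliability trade off against one another: each enters the numerator or denominator of the same ratio.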

  1. Prospects, recent advancements and challenges of different wastewater streams for microalgal cultivation.

    PubMed

    Guldhe, Abhishek; Kumari, Sheena; Ramanna, Luveshan; Ramsundar, Prathana; Singh, Poonam; Rawat, Ismail; Bux, Faizal

    2017-12-01

    Microalgae are recognized as one of the most powerful biotechnology platforms for many value-added products, including biofuels, bioactive compounds, and animal and aquaculture feed. However, large-scale production of microalgal biomass poses challenges due to the large amounts of water and nutrients required for cultivation. Using wastewater for microalgal cultivation has emerged as a potentially cost-effective strategy for large-scale microalgal biomass production. This approach also offers an efficient means to remove nutrients and metals from wastewater, making wastewater treatment sustainable and energy-efficient. Therefore, much research has been conducted in recent years on utilizing various wastewater streams for microalgae cultivation. This review identifies and discusses the opportunities and challenges of different wastewater streams for microalgal cultivation. Many alternative routes have been proposed to tackle challenges that occur during microalgal cultivation in wastewater, such as nutrient deficiency, substrate inhibition, and toxicity. The scope and challenges of microalgal biomass grown on wastewater for various applications are also discussed, along with the biorefinery approach.

  2. Locating inefficient links in a large-scale transportation network

    NASA Astrophysics Data System (ADS)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution whether ΔT < 0 or ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve transportation efficiency.
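The link-closure experiment can be sketched on a toy network: sum travel time over OD demands along shortest paths, close one edge, and record ΔT. This fixed-cost sketch ignores congestion feedback, so it reproduces the delay side of the study but not the Braess-type improvements, which require flow-dependent link costs.

```python
import heapq

# Toy link-closure experiment: total travel time T is summed over OD
# demands along shortest paths, an edge is removed, and
# dT = T_closed - T_open is recorded. Graph and demand are
# hypothetical; link costs are fixed (no congestion feedback).
graph = {                       # edge travel times, minutes
    "A": {"B": 10, "C": 15},
    "B": {"D": 12, "C": 3},
    "C": {"D": 10},
    "D": {},
}
demand = {("A", "D"): 100}      # trips per hour between OD pairs

def shortest_time(g, src, dst):
    """Dijkstra shortest travel time from src to dst (inf if unreachable)."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in g[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def total_time(g):
    return sum(q * shortest_time(g, o, d) for (o, d), q in demand.items())

base = total_time(graph)
for u in graph:
    for v in list(graph[u]):
        w = graph[u].pop(v)               # close link u -> v
        print(u, "->", v, "dT =", total_time(graph) - base)
        graph[u][v] = w                   # reopen it
```

With congestion-dependent link costs (e.g. BPR volume-delay functions) and equilibrium assignment, some closures can yield negative ΔT, which is the Braess effect the paper measures.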

  3. Review of status developments of high-efficiency crystalline silicon solar cells

    NASA Astrophysics Data System (ADS)

    Liu, Jingjing; Yao, Yao; Xiao, Shaoqing; Gu, Xiaofeng

    2018-03-01

    In order to further improve cell efficiency and reduce cost in achieving grid parity, a large number of PV manufacturing companies, universities and research institutes have been devoted to a variety of low-cost, high-efficiency crystalline Si solar cell technologies. In this article, the cell structures, characteristics and efficiency progress of several types of high-efficiency crystalline Si solar cells that are in small-scale production or are promising for mass production are presented, including the passivated emitter rear cell, tunnel oxide passivated contact solar cell, interdigitated back contact cell, heterojunction with intrinsic thin-layer cell, and heterojunction solar cells with interdigitated back contacts. Both the industrialization status and future development trend of high-efficiency crystalline silicon solar cells are also pinpointed.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feibel, C.E.

    This study uses multiple data collection and research methods, including in-depth interviews, 271 surveys of shared taxi and minibus operators, participant observation, secondary sources, and the literature on public transport from low-, medium-, and high-income countries. Extensive use is also made of a survey administered in Istanbul in 1976 to 1,935 paratransit operators. The primary findings are that private buses are more efficient than public buses on a cost per passenger-km basis, and that private minibuses are as efficient as public buses. In terms of energy efficiency, minibuses are almost as efficient as public and private buses at actual occupancy levels. Large shared taxis are twice as cost- and energy-efficient as cars, and small shared taxis 50% more efficient. In terms of investment cost per seat, large shared taxis have the lowest cost, followed by smaller shared taxis, minibuses, and buses. Considering actual occupancy levels, minibuses are only slightly less effective in terms of congestion than buses, and large and small shared taxis are twice as effective as cars. It is also shown that minibuses and shared taxis have better service quality than buses because of higher frequencies and speeds, and because they provide a much higher probability of getting a seat. Analysis of regulation and policy suggests that there are many unintended costs of public-transport regulations.
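The mode comparisons above reduce to cost (or energy) per passenger-kilometre at actual occupancy. The sketch below illustrates the metric with hypothetical figures, not the study's 1976 Istanbul data.

```python
# Cost per passenger-km at actual occupancy, the metric behind the
# mode comparisons above. All figures are hypothetical illustrations.
modes = {
    # name:        (cost per vehicle-km, seats, actual occupancy fraction)
    "bus":         (1.20, 50, 0.60),
    "minibus":     (0.40, 14, 0.70),
    "shared taxi": (0.25,  7, 0.80),
    "car":         (0.30,  4, 0.30),
}

cost_per_pax_km = {
    name: cost_km / (seats * occ)
    for name, (cost_km, seats, occ) in modes.items()
}
for name, c in sorted(cost_per_pax_km.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} {c:.4f} per passenger-km")
```

Note how the low actual occupancy of cars, not their per-vehicle cost, is what makes them expensive per passenger-km, mirroring the shared-taxi-versus-car finding above.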

  5. Aeration costs in stirred-tank and bubble column bioreactors

    DOE PAGES

    Humbird, D.; Davis, R.; McMillan, J. D.

    2017-08-10

    To overcome knowledge gaps in the economics of large-scale aeration for production of commodity products, Aspen Plus is used to simulate steady-state oxygen delivery in both stirred-tank and bubble column bioreactors, using published engineering correlations for oxygen mass transfer as a function of aeration rate and power input, coupled with new equipment cost estimates developed in Aspen Capital Cost Estimator and validated against vendor quotations. Here, these simulations describe the cost efficiency of oxygen delivery as a function of oxygen uptake rate and vessel size, and show that capital and operating costs for oxygen delivery drop considerably moving from standard-size (200 m³) to world-class size (500 m³) reactors, but only marginally in further scaling up to hypothetically large (1000 m³) reactors. Finally, this analysis suggests bubble-column reactor systems can reduce overall costs for oxygen delivery by 10-20% relative to stirred tanks at low to moderate oxygen transfer rates up to 150 mmol/L-h.
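
    The correlations this kind of simulation relies on take a standard form. A minimal sketch of a Van't Riet-type oxygen-transfer correlation follows; the coefficients and operating inputs are illustrative assumptions, not the paper's fitted values or its Aspen Plus model:

```python
# Van't Riet-type correlation: kLa = a * (P/V)^alpha * (vs)^beta.
# Coefficients below are illustrative, not the paper's fitted values.

def kla_stirred(power_per_volume_w_m3, superficial_gas_velocity_m_s,
                a=0.026, alpha=0.4, beta=0.5):
    """Volumetric mass-transfer coefficient kLa (1/s) from P/V (W/m3) and vs (m/s)."""
    return a * power_per_volume_w_m3**alpha * superficial_gas_velocity_m_s**beta

def oxygen_transfer_rate(kla_per_s, c_star_mmol_L, c_liquid_mmol_L):
    """OTR in mmol/L-h: OTR = kLa * (C* - C), converted from per-second."""
    return kla_per_s * (c_star_mmol_L - c_liquid_mmol_L) * 3600.0

kla = kla_stirred(2000.0, 0.05)             # 2 kW/m3 agitation, 0.05 m/s gas velocity
otr = oxygen_transfer_rate(kla, 0.2, 0.02)  # ~0.2 mmol/L dissolved-O2 saturation
print(f"kLa = {kla:.3f} 1/s, OTR = {otr:.0f} mmol/L-h")
```

    With these example inputs the predicted transfer rate lands in the low-to-moderate range the abstract discusses.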

  6. Aeration costs in stirred-tank and bubble column bioreactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbird, D.; Davis, R.; McMillan, J. D.

    To overcome knowledge gaps in the economics of large-scale aeration for production of commodity products, Aspen Plus is used to simulate steady-state oxygen delivery in both stirred-tank and bubble column bioreactors, using published engineering correlations for oxygen mass transfer as a function of aeration rate and power input, coupled with new equipment cost estimates developed in Aspen Capital Cost Estimator and validated against vendor quotations. Here, these simulations describe the cost efficiency of oxygen delivery as a function of oxygen uptake rate and vessel size, and show that capital and operating costs for oxygen delivery drop considerably moving from standard-size (200 m³) to world-class size (500 m³) reactors, but only marginally in further scaling up to hypothetically large (1000 m³) reactors. Finally, this analysis suggests bubble-column reactor systems can reduce overall costs for oxygen delivery by 10-20% relative to stirred tanks at low to moderate oxygen transfer rates up to 150 mmol/L-h.

  7. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs under fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, yielding a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
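
    To illustrate the fixed-cost trade-off such models capture, here is a greedy sketch of a grid-based location problem in plain Python. It is an illustrative heuristic only, not the authors' ILP model or their decomposition method:

```python
# Greedy sketch of a grid-based fixed-cost location problem: open facilities on
# grid cells so that fixed costs plus Manhattan-distance service costs are
# minimized. Illustrative heuristic, not the paper's ILP or decomposition.

def total_cost(open_sites, demands, fixed_cost):
    service = sum(min(abs(dx - x) + abs(dy - y) for x, y in open_sites)
                  for dx, dy in demands)
    return fixed_cost * len(open_sites) + service

def greedy_locate(candidates, demands, fixed_cost):
    open_sites, best = [], float("inf")
    while True:
        site, cost = None, best
        for c in candidates:
            if c in open_sites:
                continue
            trial = total_cost(open_sites + [c], demands, fixed_cost)
            if trial < cost:
                site, cost = c, trial
        if site is None:            # no additional facility improves the objective
            return open_sites, best
        open_sites.append(site)
        best = cost

demands = [(0, 0), (0, 9), (9, 0), (9, 9), (5, 5)]
candidates = [(x, y) for x in range(10) for y in range(10)]
sites, cost = greedy_locate(candidates, demands, fixed_cost=8.0)
print(sites, cost)
```

    The exact method the abstract benchmarks against would instead solve this as an ILP with binary open/assign variables; the greedy version only shows why fixed costs cap the number of facilities worth opening.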

  8. Costs and cost-effectiveness of vector control in Eritrea using insecticide-treated bed nets.

    PubMed

    Yukich, Joshua O; Zerom, Mehari; Ghebremeskel, Tewolde; Tediosi, Fabrizio; Lengeler, Christian

    2009-03-30

    While insecticide-treated nets (ITNs) are a recognized effective method for preventing malaria, there has been an extensive debate in recent years about the best large-scale implementation strategy. Implementation costs and cost-effectiveness are important elements to consider when planning ITN programmes, but so far little information on these aspects is available from national programmes. This study uses a standardized methodology, as part of a larger comparative study, to collect cost data and cost-effectiveness estimates from a large programme providing ITNs at the community level and ante-natal care facilities in Eritrea. This is a unique model of ITN implementation fully integrated into the public health system. Base case analysis results indicated that the average annual cost of ITN delivery (2005 USD 3.98) was very attractive when compared with past ITN delivery studies at different scales. Financing was largely from donor sources though the Eritrean government and net users also contributed funding. The intervention's cost-effectiveness was in a highly attractive range for sub-Saharan Africa. The cost per DALY averted was USD 13-44. The cost per death averted was USD 438-1449. Distribution of nets coincided with significant increases in coverage and usage of nets nationwide, approaching or exceeding international targets in some areas. ITNs can be cost-effectively delivered at a large scale in sub-Saharan Africa through a distribution system that is highly integrated into the health system. Operating and sustaining such a system still requires strong donor funding and support as well as a functional and extensive system of health facilities and community health workers already in place.
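
    The reported ratios reduce to simple arithmetic: programme cost divided by outcomes averted. A sketch with hypothetical programme figures, chosen only so the ratios fall inside the study's reported ranges:

```python
# Cost-effectiveness ratio sketch. All programme figures are hypothetical
# placeholders, not the study's underlying data.

def cost_per_outcome(total_cost_usd, outcomes_averted):
    """Total programme cost per health outcome averted."""
    return total_cost_usd / outcomes_averted

programme_cost = 4_000_000.0   # annual delivery cost (hypothetical)
dalys_averted = 160_000        # hypothetical
deaths_averted = 5_000         # hypothetical

print(cost_per_outcome(programme_cost, dalys_averted))   # 25.0 USD/DALY
print(cost_per_outcome(programme_cost, deaths_averted))  # 800.0 USD/death
```

    Both hypothetical ratios sit inside the study's USD 13-44 per DALY and USD 438-1449 per death averted ranges.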

  9. Electricity's Future: The Shift to Efficiency and Small-Scale Power. Worldwatch Paper 61.

    ERIC Educational Resources Information Center

    Flavin, Christopher

    Electricity, which has largely supplanted oil as the most controversial energy issue of the 1980s, is at the center of some of the world's bitterest economic and environmental controversies. Soaring costs, high interest rates, and environmental damage caused by large power plants have wreaked havoc on the once booming electricity industry.…

  10. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network with an optimization algorithm. This study applied the source tracing tool in EPANET, a hydraulic and water quality analysis model, to decompose networks and thereby improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. The comparisons show that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
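
    The decomposition idea can be sketched simply: assign each node to the source that supplies most of its flow, and design each resulting subnetwork separately. The node names and trace fractions below are hypothetical, not actual EPANET output:

```python
# Decomposition by source tracing: each node joins the subnetwork of the
# source that supplies the largest fraction of its flow. Hypothetical data.

def decompose_by_source(trace_fractions):
    """trace_fractions: {node: {source: fraction_of_supply}} -> {source: [nodes]}"""
    subnets = {}
    for node, fractions in trace_fractions.items():
        dominant = max(fractions, key=fractions.get)
        subnets.setdefault(dominant, []).append(node)
    return subnets

trace = {
    "J1": {"R1": 0.9, "R2": 0.1},
    "J2": {"R1": 0.6, "R2": 0.4},
    "J3": {"R1": 0.2, "R2": 0.8},
}
print(decompose_by_source(trace))  # {'R1': ['J1', 'J2'], 'R2': ['J3']}
```

    Each subnetwork can then be optimized independently and the designs merged, which is what makes the approach attractive for large networks.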

  11. An economy of scale system's mensuration of large spacecraft

    NASA Technical Reports Server (NTRS)

    Deryder, L. J.

    1981-01-01

    The systems technology and cost particulars of using multipurpose platforms versus several sizes of bus-type free-flyer spacecraft to accomplish the same space experiment missions are examined. Computer models of these spacecraft bus designs were created to obtain data on size, weight, power, performance, and cost. To answer the question of whether or not large scale does produce economy, the dominant cost factors were determined and the programmatic effect on individual experiment costs was evaluated.

  12. Gas-Centered Swirl Coaxial Liquid Injector Evaluations

    NASA Technical Reports Server (NTRS)

    Cohn, A. K.; Strakey, P. A.; Talley, D. G.

    2005-01-01

    Development of liquid rocket engines is expensive, and extensive testing at large scales is usually required. Verifying engine lifetime requires a large number of tests, yet limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost effective and could be a necessary (but not sufficient) condition for demonstrating long engine lifetime, reducing the overall costs and risk of large-scale testing. Goal: determine what knowledge can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine relationships between cold-flow and hot-fire data.

  13. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.

  14. Commentary: Environmental nanophotonics and energy

    NASA Astrophysics Data System (ADS)

    Smith, Geoff B.

    2011-01-01

    The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional, and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems can be managed and avoided, a process long established in nature.

  15. Large-scale cauliflower-shaped hierarchical copper nanostructures for efficient photothermal conversion.

    PubMed

    Fan, Peixun; Wu, Hui; Zhong, Minlin; Zhang, Hongjun; Bai, Benfeng; Jin, Guofan

    2016-08-14

    Efficient solar energy harvesting and photothermal conversion are of essential importance for many practical applications. Here, we present a laser-induced cauliflower-shaped hierarchical surface nanostructure on a copper surface, which exhibits extremely high omnidirectional absorption efficiency over a broad electromagnetic spectral range from the UV to the near-infrared region. The measured average hemispherical absorptance is as high as 98% within the wavelength range of 200-800 nm, and the angle-dependent specular reflectance stays below 0.1% for incidence angles of 0-60°. Such a structured copper surface exhibits an apparent heating effect under sunlight illumination. In an experiment evaporating water, the structured surface yields an overall photothermal conversion efficiency over 60% under an illuminating solar power density of ~1 kW m⁻². The presented technology provides a cost-effective, reliable, and simple way to realize broadband omnidirectional light-absorptive metal surfaces for efficient solar energy harvesting and utilization, which is highly demanded in various light harvesting, anti-reflection, and photothermal conversion applications. Since the structure is directly formed by femtosecond laser writing, it is quite suitable for mass production and can be easily extended to a large surface area.

  16. Producing Hydrogen With Sunlight

    NASA Technical Reports Server (NTRS)

    Biddle, J. R.; Peterson, D. B.; Fujita, T.

    1987-01-01

    Costs are high but could be reduced by further research. Producing hydrogen fuel on a large scale from water using solar energy is practical if plant costs are reduced, according to the study. Sunlight is an attractive energy source because it is free and because photon energy converts directly to chemical energy when it breaks water molecules into diatomic hydrogen and oxygen. However, the conversion process is low in efficiency, and the photochemical reactor must be spread over a large area, requiring a large investment in plant. The economic analysis pertains to generic photochemical processes; it does not delve into the details of photochemical reactor design, because detailed reactor designs do not exist at this early stage of development.

  17. Costs and consequences of large-scale vector control for malaria

    PubMed Central

    Yukich, Joshua O; Lengeler, Christian; Tediosi, Fabrizio; Brown, Nick; Mulligan, Jo-Ann; Chavasse, Des; Stevens, Warren; Justino, John; Conteh, Lesong; Maharaj, Rajendra; Erskine, Marcy; Mueller, Dirk H; Wiseman, Virginia; Ghebremeskel, Tewolde; Zerom, Mehari; Goodman, Catherine; McGuire, David; Urrutia, Juan Manuel; Sakho, Fana; Hanson, Kara; Sharp, Brian

    2008-01-01

    Background Five large insecticide-treated net (ITN) programmes and two indoor residual spraying (IRS) programmes were compared using a standardized costing methodology. Methods Costs were measured locally or derived from existing studies and focused on the provider perspective, but included the direct costs of net purchases by users, and are reported in 2005 USD. Effectiveness was estimated by combining programme outputs with standard impact indicators. Findings Conventional ITNs: The cost per treated net-year of protection ranged from USD 1.21 in Eritrea to USD 6.05 in Senegal. The cost per child death averted ranged from USD 438 to USD 2,199 when targeting to children was successful. Long-lasting insecticidal nets (LLIN) of five years duration: The cost per treated net-year of protection ranged from USD 1.38 in Eritrea to USD 1.90 in Togo. The cost per child death averted ranged from USD 502 to USD 692. IRS: The costs per person-year of protection for all ages were USD 3.27 in KwaZulu Natal and USD 3.90 in Mozambique. If only children under five years of age were included in the denominator the cost per person-year of protection was higher: USD 23.96 and USD 21.63. As a result, the cost per child death averted was higher than for ITNs: USD 3,933-4,357. Conclusion Both ITNs and IRS are highly cost-effective vector control strategies. Integrated ITN free distribution campaigns appeared to be the most efficient way to rapidly increase ITN coverage. Other approaches were as or more cost-effective, and appeared better suited to "keep-up" coverage levels. ITNs are more cost-effective than IRS for highly endemic settings, especially if high ITN coverage can be achieved with some demographic targeting. PMID:19091114

  18. Cost-effectiveness comparison of response strategies to a large-scale anthrax attack on the chicago metropolitan area: impact of timing and surge capacity.

    PubMed

    Kyriacou, Demetrios N; Dobrez, Debra; Parada, Jorge P; Steinberg, Justin M; Kahn, Adam; Bennett, Charles L; Schmitt, Brian P

    2012-09-01

    Rapid public health response to a large-scale anthrax attack would reduce overall morbidity and mortality. However, there is uncertainty about the optimal cost-effective response strategy based on timing of intervention, public health resources, and critical care facilities. We conducted a decision analytic study to compare response strategies to a theoretical large-scale anthrax attack on the Chicago metropolitan area beginning either Day 2 or Day 5 after the attack. These strategies correspond to the policy options set forth by the Anthrax Modeling Working Group for population-wide responses to a large-scale anthrax attack: (1) postattack antibiotic prophylaxis, (2) postattack antibiotic prophylaxis and vaccination, (3) preattack vaccination with postattack antibiotic prophylaxis, and (4) preattack vaccination with postattack antibiotic prophylaxis and vaccination. Outcomes were measured in costs, lives saved, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs). We estimated that postattack antibiotic prophylaxis of all 1,390,000 anthrax-exposed people beginning on Day 2 after attack would result in 205,835 infected victims, 35,049 fulminant victims, and 28,612 deaths. Only 6,437 (18.5%) of the fulminant victims could be saved with the existing critical care facilities in the Chicago metropolitan area. Mortality would increase to 69,136 if the response strategy began on Day 5. Including postattack vaccination with antibiotic prophylaxis of all exposed people reduces mortality and is cost-effective for both Day 2 (ICER=$182/QALY) and Day 5 (ICER=$1,088/QALY) response strategies. Increasing ICU bed availability significantly reduces mortality for all response strategies. We conclude that postattack antibiotic prophylaxis and vaccination of all exposed people is the optimal cost-effective response strategy for a large-scale anthrax attack. 
Our findings support the US government's plan to provide antibiotic prophylaxis and vaccination for all exposed people within 48 hours of the recognition of a large-scale anthrax attack. Future policies should consider expanding critical care capacity to allow for the rescue of more victims.
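
    The ICER comparisons above follow a standard formula, (incremental cost)/(incremental QALYs). A sketch with hypothetical strategy totals (not the study's model outputs):

```python
# Incremental cost-effectiveness ratio (ICER) between two response strategies.
# All figures are hypothetical placeholders, not the study's model outputs.

def icer(cost_new, cost_base, qalys_new, qalys_base):
    """Incremental cost divided by incremental QALYs for the newer strategy."""
    return (cost_new - cost_base) / (qalys_new - qalys_base)

# Strategy A: antibiotic prophylaxis only.  Strategy B: prophylaxis + vaccination.
print(round(icer(cost_new=550_000_000, cost_base=500_000_000,
                 qalys_new=1_275_000, qalys_base=1_000_000)))  # → 182 USD/QALY
```

    The hypothetical inputs were chosen so the result lands near the study's reported Day 2 figure of $182/QALY; the actual model inputs are not reproduced here.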

  19. Cost-Effectiveness Comparison of Response Strategies to a Large-Scale Anthrax Attack on the Chicago Metropolitan Area: Impact of Timing and Surge Capacity

    PubMed Central

    Dobrez, Debra; Parada, Jorge P.; Steinberg, Justin M.; Kahn, Adam; Bennett, Charles L.; Schmitt, Brian P.

    2012-01-01

    Rapid public health response to a large-scale anthrax attack would reduce overall morbidity and mortality. However, there is uncertainty about the optimal cost-effective response strategy based on timing of intervention, public health resources, and critical care facilities. We conducted a decision analytic study to compare response strategies to a theoretical large-scale anthrax attack on the Chicago metropolitan area beginning either Day 2 or Day 5 after the attack. These strategies correspond to the policy options set forth by the Anthrax Modeling Working Group for population-wide responses to a large-scale anthrax attack: (1) postattack antibiotic prophylaxis, (2) postattack antibiotic prophylaxis and vaccination, (3) preattack vaccination with postattack antibiotic prophylaxis, and (4) preattack vaccination with postattack antibiotic prophylaxis and vaccination. Outcomes were measured in costs, lives saved, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs). We estimated that postattack antibiotic prophylaxis of all 1,390,000 anthrax-exposed people beginning on Day 2 after attack would result in 205,835 infected victims, 35,049 fulminant victims, and 28,612 deaths. Only 6,437 (18.5%) of the fulminant victims could be saved with the existing critical care facilities in the Chicago metropolitan area. Mortality would increase to 69,136 if the response strategy began on Day 5. Including postattack vaccination with antibiotic prophylaxis of all exposed people reduces mortality and is cost-effective for both Day 2 (ICER=$182/QALY) and Day 5 (ICER=$1,088/QALY) response strategies. Increasing ICU bed availability significantly reduces mortality for all response strategies. We conclude that postattack antibiotic prophylaxis and vaccination of all exposed people is the optimal cost-effective response strategy for a large-scale anthrax attack. 
Our findings support the US government's plan to provide antibiotic prophylaxis and vaccination for all exposed people within 48 hours of the recognition of a large-scale anthrax attack. Future policies should consider expanding critical care capacity to allow for the rescue of more victims. PMID:22845046

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dale K. Kotter; Steven D. Novack

    Draft for submittal to the Journal of Solar Energy (Rev 10.1, SOL-08-1091): Solar Nantenna Electromagnetic Collectors. Dale K. Kotter and Steven D. Novack (Idaho National Laboratory), W. Dennis Slafer (MicroContinuum, Inc.), Patrick Pinhero (University of Missouri). Abstract: The research described in this paper explores a new and efficient approach for producing electricity from the abundant energy of the sun, using nanoantenna (nantenna) electromagnetic collectors (NECs). NEC devices target mid-infrared wavelengths, where conventional photovoltaic (PV) solar cells are inefficient and where there is an abundance of solar energy. The initial concept of designing NECs was based on scaling of radio frequency antenna theory to the infrared and visible regions. This approach initially proved unsuccessful because the optical behavior of materials in the terahertz (THz) region was overlooked and, in addition, economical nanofabrication methods were not previously available to produce the optical antenna elements. This paper demonstrates progress in addressing significant technological barriers, including: 1) development of frequency-dependent modeling of double-feedpoint square spiral nantenna elements; 2) selection of materials with proper THz properties; and 3) development of novel manufacturing methods that could potentially enable economical large-scale manufacturing. We have shown that nantennas can collect infrared energy and induce THz currents, and we have also developed cost-effective proof-of-concept fabrication techniques for the large-scale manufacture of simple square loop nantenna arrays. Future work is planned to embed rectifiers into the double-feedpoint antenna structures. This work represents an important first step toward the ultimate realization of a low-cost device that will collect as well as convert this radiation into electricity. This could lead to a broadband, high-conversion-efficiency, low-cost solution to complement conventional PV devices.

  1. Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.

    PubMed

    Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P

    2010-12-22

    Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
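
    One way to realize the kind of runtime-aware job ordering described above is longest-processing-time-first (LPT) scheduling on a fixed-size cluster, so that nodes finish at similar times and billed-but-idle hours shrink. The runtime model and figures below are illustrative assumptions, not Roundup's actual estimator:

```python
# LPT scheduling sketch: predict each job's runtime, then assign the longest
# jobs first to the least-loaded node. Toy runtime model; not Roundup's.
import heapq

def predicted_runtime(genome_a_size, genome_b_size, k=1e-12):
    # toy model: runtime (hours) grows with the product of genome sizes
    return k * genome_a_size * genome_b_size

def lpt_schedule(job_runtimes, n_nodes):
    """Assign jobs to nodes, longest first; return sorted per-node finish times."""
    heap = [(0.0, i) for i in range(n_nodes)]   # (busy time, node id)
    heapq.heapify(heap)
    for rt in sorted(job_runtimes, reverse=True):
        busy, node = heapq.heappop(heap)        # least-loaded node
        heapq.heappush(heap, (busy + rt, node))
    return sorted(t for t, _ in heap)

runtimes = [predicted_runtime(a, b)
            for a in (2e6, 5e6, 1e7) for b in (2e6, 5e6, 1e7)]
finish = lpt_schedule(runtimes, n_nodes=4)
print(f"makespan {finish[-1]:.1f} h, idle spread {finish[-1] - finish[0]:.1f} h")
```

    Submitting the same jobs in random order typically leaves some nodes idle while one long job finishes, which is the waste the paper's ordering model avoids.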

  2. Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.

    PubMed

    Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.

  3. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    PubMed Central

    Ragothaman, Anjani; Feinstein, Wei; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure. PMID:24995285

  4. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools used in various fields of research. Large numbers of microsatellite markers are required for whole-genome surveys in molecular ecology, quantitative genetics, and genomics. It is therefore necessary to select versatile, low-cost, efficient, and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods, derived from three strategies, for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library had poor efficiency, while the microsatellite-enriched strategy greatly improved isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly because of the limited sequence data for non-model species deposited in public databases. Based on these results, we recommend two methods, microsatellite-enriched library construction and FIASCO-colony hybridization, for large-scale microsatellite marker development. Both methods derive from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  5. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
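
    The accept/reject logic described above can be sketched as follows: repartition only when the mesh is sufficiently unbalanced, and accept the remap only if its one-time cost is recovered by the improved balance. Thresholds and costs here are illustrative assumptions, not the paper's measured values:

```python
# Sketch of the repartitioning decision: repartition only if imbalance exceeds
# a threshold AND the remapping cost is paid back by the better balance.
# All thresholds and cost figures are illustrative.

def imbalance(loads):
    """Max load over average load; 1.0 means perfectly balanced."""
    return max(loads) / (sum(loads) / len(loads))

def should_repartition(loads_now, loads_after, remap_cost, steps_amortized,
                       threshold=1.10):
    if imbalance(loads_now) <= threshold:        # balanced enough: keep mesh
        return False
    # per-step saving is the drop in bottleneck (max) load, amortized over
    # the adaptation steps until the next repartition
    saving = (max(loads_now) - max(loads_after)) * steps_amortized
    return saving > remap_cost                   # accept only if remap pays off

print(should_repartition([120, 80, 100, 60], [92, 90, 89, 89],
                         remap_cost=150, steps_amortized=10))
```

    In the example, the bottleneck drops from 120 to 92 units per step, so over ten steps the saving (280) exceeds the remapping cost (150) and the new partitioning is accepted.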

  6. Sustainable electrical energy storage through the ferrocene/ferrocenium redox reaction in aprotic electrolyte.

    PubMed

    Zhao, Yu; Ding, Yu; Song, Jie; Li, Gang; Dong, Guangbin; Goodenough, John B; Yu, Guihua

    2014-10-06

    The large-scale, cost-effective storage of electrical energy obtained from the growing deployment of wind and solar power is critically needed for the integration into the grid of these renewable energy sources. Rechargeable batteries having a redox-flow cathode represent a viable solution for either a Li-ion or a Na-ion battery provided a suitable low-cost redox molecule soluble in an aprotic electrolyte can be identified that is stable for repeated cycling and does not cross the separator membrane to the anode. Here we demonstrate an environmentally friendly, low-cost ferrocene/ferrocenium molecular redox couple that shows about 95% energy efficiency and about 90% capacity retention after 250 full charge/discharge cycles. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Effective Pb2+ removal from water using nanozerovalent iron stored 10 months

    NASA Astrophysics Data System (ADS)

    Ahmed, M. A.; Bishay, Samiha T.; Ahmed, Fatma M.; El-Dek, S. I.

    2017-10-01

    Heavy metal removal from water requires reliable, cost-effective methods that offer fast separation and simple operation. In this work, nanozerovalent iron (NZVI) was prepared as an ideal sorbent for Pb2+ removal. The sample was characterized using X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), and atomic force microscopy (AFM-SPM). Batch experiments examined the effect of pH value and contact time on the adsorption process. The same NZVI was stored for a shelf time of 10 months and the batch experiments were repeated. The results showed that NZVI exhibited an extraordinarily large metal uptake (98%) after a short contact time (10 h). The stored sample showed the same effectiveness for Pb2+ removal under the same conditions. The physical properties, magnetic susceptibility, and conductance were correlated with the adsorption efficiency. This work offers evidence that these NZVI particles, prepared by a simple, green, and cost-effective methodology and stable in storage for a long time, are potential candidates for large-scale Pb2+ removal and a practical option for wastewater treatment.
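The batch uptake figure follows from the standard removal-efficiency formula; the concentrations below are hypothetical values chosen only to match the reported ~98%:

```python
def removal_pct(c0, ce):
    """Percent of the initial Pb2+ concentration removed from solution."""
    return (c0 - ce) / c0 * 100

# Hypothetical batch numbers consistent with the reported ~98% uptake:
c0, ce = 50.0, 1.0   # mg/L before and after 10 h contact (illustrative values)
print(removal_pct(c0, ce))  # 98.0
```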

  8. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H.-W.; Chang, N.-B., E-mail: nchang@mail.ucf.ed; Chen, J.-C.

    2010-07-15

    Limited by insufficient land resources, many countries such as Japan and Germany consider incineration the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper demonstrates the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of the operational efficiency of these incinerators. Uncertainty analysis using Monte Carlo simulation provides a balance between the simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromise assessment procedure. Our research findings will eventually lead to the identification of optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan but also elsewhere in the world.
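In the special case of a single input and a single output, DEA (CCR) efficiency reduces to each unit's output/input ratio normalized by the best ratio; the sketch below uses hypothetical incinerator data, not the Taiwanese data set:

```python
def dea_ratio_efficiency(inputs, outputs):
    """CCR efficiency for the single-input, single-output case:
    each unit's output/input ratio scaled by the best ratio (frontier = 1.0)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Illustrative data: input = operating cost, output = waste treated (arbitrary units)
cost  = [10.0, 20.0, 15.0]
waste = [ 8.0, 20.0,  9.0]
effs = dea_ratio_efficiency(cost, waste)
print([round(e, 2) for e in effs])  # [0.8, 1.0, 0.6]
```

The full multi-input, multi-output DEA model used in studies like this one requires solving a linear program per unit; this ratio form is the degenerate case that conveys the idea.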

  9. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    PubMed

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

    There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is achieving high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled from custom software and low-cost materials and equipment. Results show that sample preparation and the handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between the time needed to process the data and the errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection to allow more reliable replacement of manual interventions.
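A simple time budget makes the bottleneck argument concrete; the per-stage times below are invented for illustration, not measurements from the paper:

```python
# Stage times per sample (seconds) for a hypothetical phenotyping pipeline.
stage_seconds = {
    "sample preparation": 180.0,
    "sample handling": 120.0,
    "imaging": 30.0,
    "trait extraction": 20.0,
}
bottleneck = max(stage_seconds, key=stage_seconds.get)
total = sum(stage_seconds.values())
print(bottleneck)                          # 'sample preparation'
print(f"{3600 / total:.1f} samples/hour")  # ~10.3
```

Halving the trait-extraction time with faster algorithms barely moves the total, which is why the abstract points at sample preparation and handling as the stages to automate.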

  10. An integrated approach for monitoring efficiency and investments of activated sludge-based wastewater treatment plants at large spatial scale.

    PubMed

    De Gisi, Sabino; Sabia, Gianpaolo; Casella, Patrizia; Farina, Roberto

    2015-08-01

    WISE, the Water Information System for Europe, is the web portal of the European Commission that disseminates the quality state of receiving water bodies and the efficiency of municipal wastewater treatment plants (WWTPs) in order to monitor advances in the application of both the Water Framework Directive (WFD) and the Urban Wastewater Treatment Directive (UWWTD). With the intention of developing WISE applications, the aim of this work was to define and apply an integrated approach capable of monitoring the efficiency and investments of activated sludge-based WWTPs located in a large spatial area, providing the following outcomes useful to decision-makers: (i) the identification of critical facilities and their critical processes by means of a Performance Assessment System (PAS); (ii) the choice of the most suitable upgrading actions, through a scenario analysis; (iii) the assessment of the investment costs needed to upgrade the critical WWTPs; and (iv) the prioritization of the critical facilities by means of a multi-criteria approach that includes stakeholder involvement, along with the integration of technical, environmental, economic and health aspects. The implementation of the proposed approach on a large number of municipal WWTPs highlighted that the PAS was able to identify critical processes, with particular effectiveness in identifying critical nutrient removal processes. In addition, a simplified approach that considers the costs related to a basic configuration and those for WWTP integration allowed us to link the identified critical processes to the investment costs. Finally, the questionnaire for the acquisition of data (such as that provided by the Italian Institute of Statistics), the PAS, and the cost database, if properly adapted, may allow the extension of the integrated approach to an EU scale, providing useful information to water utilities as well as institutions.
    Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Review of the harvesting and extraction program within the National Alliance for Advanced Biofuels and Bioproducts

    DOE PAGES

    Marrone, Babetta L.; Lacey, Ronald E.; Anderson, Daniel B.; ...

    2017-08-07

    Energy-efficient and scalable harvesting and lipid extraction processes must be developed for the algal biofuels and bioproducts industry to thrive. The major challenge for harvesting is the handling of large volumes of cultivation water to concentrate low amounts of biomass. For lipid extraction, the major energy and cost drivers are associated with disrupting the algae cell wall and drying the biomass before solvent extraction of the lipids. Here we review the research and development conducted by the Harvesting and Extraction Team during the 3-year National Alliance for Advanced Biofuels and Bioproducts (NAABB) algal consortium project. The Harvesting and Extraction Team investigated five harvesting and three wet extraction technologies at lab bench scale for effectiveness, and conducted a technoeconomic study to evaluate their costs and energy efficiency compared to available baseline technologies. Based on this study, three harvesting technologies were selected for further study at larger scale. We evaluated the selected harvesting technologies: electrocoagulation, membrane filtration, and ultrasonic harvesting, in a field study at a minimum scale of 100 L/h. None of the extraction technologies were determined to be ready for scale-up; therefore, an emerging extraction technology (wet solvent extraction) was selected from industry to provide scale-up data and capabilities to produce lipid and lipid-extracted materials for the NAABB program. One specialized extraction/adsorption technology was developed that showed promise for recovering high-value co-products from lipid extracts. Overall, the NAABB Harvesting and Extraction Team improved the readiness level of several innovative, energy-efficient technologies that integrate with algae production processes and captured valuable lessons learned about scale-up challenges.

  12. Review of the harvesting and extraction program within the National Alliance for Advanced Biofuels and Bioproducts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marrone, Babetta L.; Lacey, Ronald E.; Anderson, Daniel B.

    Energy-efficient and scalable harvesting and lipid extraction processes must be developed for the algal biofuels and bioproducts industry to thrive. The major challenge for harvesting is the handling of large volumes of cultivation water to concentrate low amounts of biomass. For lipid extraction, the major energy and cost drivers are associated with disrupting the algae cell wall and drying the biomass before solvent extraction of the lipids. Here we review the research and development conducted by the Harvesting and Extraction Team during the 3-year National Alliance for Advanced Biofuels and Bioproducts (NAABB) algal consortium project. The Harvesting and Extraction Team investigated five harvesting and three wet extraction technologies at lab bench scale for effectiveness, and conducted a technoeconomic study to evaluate their costs and energy efficiency compared to available baseline technologies. Based on this study, three harvesting technologies were selected for further study at larger scale. We evaluated the selected harvesting technologies: electrocoagulation, membrane filtration, and ultrasonic harvesting, in a field study at a minimum scale of 100 L/h. None of the extraction technologies were determined to be ready for scale-up; therefore, an emerging extraction technology (wet solvent extraction) was selected from industry to provide scale-up data and capabilities to produce lipid and lipid-extracted materials for the NAABB program. One specialized extraction/adsorption technology was developed that showed promise for recovering high-value co-products from lipid extracts. Overall, the NAABB Harvesting and Extraction Team improved the readiness level of several innovative, energy-efficient technologies that integrate with algae production processes and captured valuable lessons learned about scale-up challenges.

  13. A Fundamental Study for Efficient Implementaion of Online Collaborative Activities in Large-Scale Classes

    ERIC Educational Resources Information Center

    Matsuba, Ryuichi; Suzuki, Yusei; Kubota, Shin-Ichiro; Miyazaki, Makoto

    2015-01-01

    We study tactics for developing writing skills through cross-disciplinary learning in large-scale online classes, and are particularly interested in the implementation of online collaborative activities such as peer review of writing. The goal of our study is to carry out collaborative work online efficiently and effectively in large-scale…

  14. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
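Global efficiency, one of the graph metrics this study relies on, is the average inverse shortest-path length over all node pairs; below is a minimal pure-Python sketch on a toy unweighted network (not fMRI data):

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src on an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs."""
    n = len(adj)
    total = 0.0
    for u in adj:
        dist = shortest_paths(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

# Toy 4-node ring network: each node has two neighbors at distance 1, one at distance 2.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(round(global_efficiency(ring), 3))  # 0.833
```

Local efficiency applies the same measure to each node's neighborhood subgraph; higher global efficiency corresponds to shorter effective paths across the whole network, the property the successful learners' networks exhibited.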

  15. Cost Scaling of a Real-World Exhaust Waste Heat Recovery Thermoelectric Generator: A Deeper Dive

    NASA Astrophysics Data System (ADS)

    Hendricks, Terry J.; Yee, Shannon; LeBlanc, Saniya

    2016-03-01

    Cost is as important as power density and efficiency for the adoption of waste heat recovery thermoelectric generators (TEG) in many transportation and industrial energy recovery applications. In many cases, the system design that minimizes cost (e.g., the $/W value) can be very different from the design that maximizes the system's efficiency or power density, and it is important to understand the relationship between those designs to optimize TEG performance-cost compromises. Expanding on recent cost analysis work and using more detailed system modeling, an enhanced cost scaling analysis of a waste heat recovery TEG with more detailed, coupled treatment of the heat exchangers has been performed. In this analysis, the effect of the heat lost to the environment and updated relationships between the hot-side and cold-side conductances that maximize power output are considered. This coupled thermal and thermoelectric (TE) treatment of the exhaust waste heat recovery TEG yields modified cost scaling and design optimization equations, which are now strongly dependent on the heat leakage fraction, exhaust mass flow rate, and heat exchanger effectiveness. This work shows that heat exchanger costs most often dominate the overall TE system costs, that it is extremely difficult to escape this regime, and that in order to achieve TE system costs of $1/W it is necessary to achieve heat exchanger costs of $1/(W/K). Minimum TE system costs per watt generally coincide with maximum power points, but preferred TE design regimes are identified where there is little cost penalty for moving into regions of higher efficiency and slightly lower power outputs. These regimes are closely tied to previously identified low-cost design regimes. This work shows that the optimum fill factor F_opt minimizing system costs decreases as heat losses increase, and increases as exhaust mass flow rate and heat exchanger effectiveness increase.
These findings have profound implications on the design and operation of various TE waste heat recovery systems. This work highlights the importance of heat exchanger costs on the overall TEG system costs, quantifies the possible TEG performance-cost domain space based on heat exchanger effects, and provides a focus for future system research and development efforts.
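The headline relation between TE system cost and heat exchanger cost can be illustrated with a simple additive decomposition; this model, its parameters, and the numbers below are assumptions for illustration only, not the paper's equations:

```python
def system_cost_per_watt(c_te_per_w, c_hx_per_w_per_k, ua_per_w):
    """Rough $/W decomposition: TE modules plus heat exchangers, where exchanger
    cost scales with the thermal conductance UA needed per watt of output."""
    return c_te_per_w + c_hx_per_w_per_k * ua_per_w

# Illustrative numbers only: a $1/(W/K) exchanger cost and 2 W/K of conductance
# needed per electrical watt put the exchanger share at $2/W, dominating the
# assumed $0.5/W thermoelectric module cost.
print(system_cost_per_watt(c_te_per_w=0.5, c_hx_per_w_per_k=1.0, ua_per_w=2.0))  # 2.5
```

Because TEG conversion efficiency is low, many W/K of exchanger conductance are needed per electrical watt, which is why exchanger cost per W/K leverages so strongly into system cost per watt.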

  16. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    PubMed

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  17. Use of spent mushroom substrate for production of Bacillus thuringiensis by solid-state fermentation.

    PubMed

    Wu, Songqing; Lan, Yanjiao; Huang, Dongmei; Peng, Yan; Huang, Zhipeng; Xu, Lei; Gelbic, Ivan; Carballar-Lejarazu, Rebeca; Guan, Xiong; Zhang, Lingling; Zou, Shuangquan

    2014-02-01

    The aim of this study was to explore a cost-effective method for the mass production of Bacillus thuringiensis (Bt) by solid-state fermentation. As a locally available agroindustrial byproduct, spent mushroom substrate (SMS) was used as raw material for Bt cultivation, and four combinations of SMS-based media were designed. Fermentation conditions were optimized on the best medium and the optimal conditions were determined as follows: temperature 32 degrees C, initial pH value 6, moisture content 50%, the ratio of sieved material to initial material 1:3, and inoculum volume 0.5 ml. Large scale production of B. thuringiensis subsp. israelensis (Bti) LLP29 was conducted on the optimal medium at optimal conditions. High toxicity (1,487 international toxic units/milligram) and long larvicidal persistence of the product were observed in the study, which illustrated that SMS-based solid-state fermentation medium was efficient and economical for large scale industrial production of Bt-based biopesticides. The cost of production of 1 kg of Bt was approximately US$0.075.

  18. Scaling properties of European research units

    PubMed Central

    Jamtveit, Bjørn; Jettestuen, Espen; Mathiesen, Joachim

    2009-01-01

    A quantitative characterization of the scale-dependent features of research units may provide important insight into how such units are organized and how they grow. The relative importance of top-down versus bottom-up controls on their growth may be revealed by their scaling properties. Here we show that the number of support staff in Scandinavian research units, ranging in size from 20 to 7,800 staff members, is related to the number of academic staff by a power law. The scaling exponent of ≈1.30 is broadly consistent with a simple hierarchical model of the university organization. Similar scaling behavior between small and large research units with a wide range of ambitions and strategies argues against top-down control of the growth. Top-down effects, and externally imposed effects from changing political environments, can be observed as fluctuations around the main trend. The observed scaling law implies that cost-benefit arguments for merging research institutions into larger and larger units may have limited validity unless the productivity per academic staff and/or the quality of the products are considerably higher in larger institutions. Despite the hierarchical structure of most large-scale research units in Europe, the network structures represented by the academic component of such units are strongly antihierarchical and suboptimal for efficient communication within individual units. PMID:19625626
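A power-law scaling exponent like the reported ≈1.30 is typically obtained by linear regression in log-log space; here is a sketch on synthetic data (the exponent, prefactor, and staff counts are invented):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression in log-log space."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic research units generated with exponent 1.30 (illustrative only):
academic = [20, 50, 200, 1000, 5000]
support = [0.5 * s ** 1.30 for s in academic]
a, b = fit_power_law(academic, support)
print(round(b, 2))  # 1.3
```

An exponent above 1 means support staff grow faster than academic staff, the superlinear scaling the abstract interprets through a hierarchical organization model.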

  19. Networked remote area dental services: a viable, sustainable approach to oral health care in challenging environments.

    PubMed

    Dyson, Kate; Kruger, Estie; Tennant, Marc

    2012-12-01

    This study examines the cost-effectiveness of a model of remote area oral health service. Retrospective financial analysis. Rural and remote primary health services. Clinical activity data and associated cost data relating to the provision of a networked visiting oral health service by the Centre for Rural and Remote Oral Health formed the study dataset. The cost-effectiveness of the Centre's model of service provision at five rural and remote sites in Western Australia during the calendar years 2006, 2008 and 2010 was examined. Service provision costs and the value of care provided were calculated using data records and the Fee Schedule of Dental Services for Dentists. The ratio of service provision costs to the value of care provided was determined for each site and was benchmarked against the equivalent ratios for large-scale government sector models of service provision. The use of networked models has been effective in other disciplines, but this study is the first to show that a networked hub-and-spoke approach of five spokes to one hub is cost-efficient in remote oral health care. By excluding special cost-saving initiatives introduced by the Centre, the study compares easily translatable direct service provision costs against direct clinical care outcomes in some of Australia's most challenging locations. This study finds that networked hub-and-spoke models of care can be financially efficient arrangements in remote oral health care. © 2012 The Authors. Australian Journal of Rural Health © National Rural Health Alliance Inc.

  20. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    PubMed

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

    Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues that prevent causal inference and thus make the estimation complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners during the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply a system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that reductions in average cost per person reached are achievable when scaling up HIV prevention in low- and middle-income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    NASA Astrophysics Data System (ADS)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign "partial charges" to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions - Wannier, Boys, and Edmiston-Ruedenberg - was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.

  2. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. 
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled, within the subgrid (small-scales) using a methodology that simulates the mixing and the molecular transport and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computations must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as: Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines is reported along with some characteristic results.

  3. Kinetic Super-Resolution Long-Wave Infrared (KSR LWIR) Thermography Diagnostic for Building Envelopes: Camp Lejeune, NC

    DTIC Science & Technology

    2015-08-18

    of Defense (DoD) to achieve cost-effective energy efficiency at much greater scale than other commercially available techniques of measuring energy loss due to envelope inefficiencies. The report (ERDC/CERL TR-15-18) recommends specific energy conservation measures (ECMs) and quantifies significant potential return on investment.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, C.E.; Kuhn, I.F. Jr.

    The fuel cell electric vehicle (FCEV) is undoubtedly the only option that can meet both the California zero emission vehicle (ZEV) standard and the President's goal of tripling automobile efficiency without sacrificing performance in a standard 5-passenger vehicle. The three major automobile companies are designing and developing FCEVs powered directly by hydrogen under cost-shared contracts with the Department of Energy. Once developed, these vehicles will need a reliable and inexpensive source of hydrogen. Steam reforming of natural gas would produce the least expensive hydrogen, but funding may not be sufficient initially to build both large steam reforming plants and the transportation infrastructure necessary to deliver that hydrogen to geographically scattered FCEV fleets or individual drivers. This analysis evaluates the economic feasibility of using small-scale water electrolysis to provide widely dispersed but cost-effective hydrogen for early FCEV demonstrations. We estimate the cost of manufacturing a complete electrolysis system in large quantities, including compression and storage, and show that electrolytic hydrogen could be cost-competitive with fully taxed gasoline, using existing residential off-peak electricity rates.
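The economics argument can be illustrated with back-of-the-envelope arithmetic; the electrolyzer energy use and electricity price below are generic assumptions for illustration, not the study's estimates:

```python
# All numbers are illustrative assumptions, not figures from the study:
kwh_per_kg_h2 = 55.0   # typical small-electrolyzer energy use per kg H2 (assumed)
offpeak_price = 0.05   # $/kWh residential off-peak electricity rate (assumed)

electricity_cost_per_kg = kwh_per_kg_h2 * offpeak_price
print(f"${electricity_cost_per_kg:.2f}/kg H2 in electricity alone")  # $2.75/kg

# A kilogram of hydrogen carries roughly the energy of a gallon of gasoline,
# and an FCEV uses that energy more efficiently than a combustion engine, so
# the comparison against fully taxed gasoline is closer than the raw $/kg suggests.
```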

  5. Efficient high-dimensional characterization of conductivity in a sand box using massive MRI-imaged concentration data

    NASA Astrophysics Data System (ADS)

    Lee, J. H.; Yoon, H.; Kitanidis, P. K.; Werth, C. J.; Valocchi, A. J.

    2015-12-01

    Characterizing subsurface properties, particularly hydraulic conductivity, is crucial for reliable and cost-effective groundwater supply management, contaminant remediation, and emerging deep subsurface activities such as geologic carbon storage and unconventional resources recovery. With recent advances in sensor technology, a large volume of hydro-geophysical and chemical data can be obtained to achieve high-resolution images of subsurface properties, which can be used for accurate subsurface flow and reactive transport predictions. However, subsurface characterization with a plethora of information requires high, often prohibitive, computational costs associated with "big data" processing and large-scale numerical simulations. As a result, traditional inversion techniques are not well suited for problems that require coupled multi-physics simulation models with massive data. In this work, we apply a scalable inversion method called the Principal Component Geostatistical Approach (PCGA) to characterize the heterogeneous hydraulic conductivity (K) distribution in a 3-D sand box. The PCGA is a Jacobian-free geostatistical inversion approach that uses the leading principal components of the prior information to reduce computational costs, sometimes dramatically, and can be easily linked with any simulation software. Sequential images of transient tracer concentrations in the sand box were obtained using the magnetic resonance imaging (MRI) technique, resulting in 6 million tracer-concentration data points [Yoon et al., 2008]. Since each individual tracer observation carries little information on the K distribution, the dimension of the data was reduced using temporal moments and the discrete cosine transform (DCT). Consequently, 100,000 unknown K values consistent with the scale of the MRI data (at a scale of 0.25^3 cm^3) were estimated by matching temporal moments and DCT coefficients of the original tracer data. 
The estimated K fields are close to the true K field, and even the small-scale variability of the sand box was captured, highlighting high-K connectivity and the contrasts between low- and high-K zones. A total of 1,000 MODFLOW and MT3DMS simulations was required to obtain the final estimates and the corresponding estimation uncertainty, demonstrating the efficiency and effectiveness of the method.
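The moment-and-DCT data reduction described in this record can be sketched in a few lines of Python. This is a toy illustration (random data, a small grid, and an arbitrary truncation size k are assumptions), not the actual MRI pipeline:

```python
import numpy as np
from scipy.fft import dctn

# Toy stand-in for the data reduction: collapse each voxel's tracer
# breakthrough curve into temporal moments, then keep only a block of
# low-frequency DCT coefficients of the moment field.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)                  # observation times
dt = t[1] - t[0]
conc = rng.random((50, 8, 8, 8))                # (time, x, y, z) concentrations

m0 = conc.sum(axis=0) * dt                      # zeroth temporal moment (mass)
m1 = (conc * t[:, None, None, None]).sum(axis=0) * dt / m0  # mean arrival time

coeffs = dctn(m1, norm="ortho")                 # 3-D discrete cosine transform
k = 4
reduced = coeffs[:k, :k, :k].ravel()            # keep a k^3 low-frequency block
print(reduced.size)                             # 64 numbers summarize 50*512 samples
```

The same idea scales to the paper's setting: millions of concentration samples are compressed into a few thousand moment/DCT coefficients before inversion.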

  6. Continuous Flow Polymer Synthesis toward Reproducible Large-Scale Production for Efficient Bulk Heterojunction Organic Solar Cells.

    PubMed

    Pirotte, Geert; Kesters, Jurgen; Verstappen, Pieter; Govaerts, Sanne; Manca, Jean; Lutsen, Laurence; Vanderzande, Dirk; Maes, Wouter

    2015-10-12

    Organic photovoltaics (OPV) have attracted great interest as a solar cell technology with appealing mechanical, aesthetical, and economies-of-scale features. To drive OPV toward economic viability, low-cost, large-scale module production has to be realized in combination with increased top-quality material availability and minimal batch-to-batch variation. To this extent, continuous flow chemistry can serve as a powerful tool. In this contribution, a flow protocol is optimized for the high performance benzodithiophene-thienopyrroledione copolymer PBDTTPD and the material quality is probed through systematic solar-cell evaluation. A stepwise approach is adopted to turn the batch process into a reproducible and scalable continuous flow procedure. Solar cell devices fabricated using the obtained polymer batches deliver an average power conversion efficiency of 7.2 %. Upon incorporation of an ionic polythiophene-based cathodic interlayer, the photovoltaic performance could be enhanced to a maximum efficiency of 9.1 %. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Progress in amorphous silicon based large-area multijunction modules

    NASA Astrophysics Data System (ADS)

    Carlson, D. E.; Arya, R. R.; Bennett, M.; Chen, L.-F.; Jansen, K.; Li, Y.-M.; Maley, N.; Morris, J.; Newton, J.; Oswald, R. S.; Rajan, K.; Vezzetti, D.; Willing, F.; Yang, L.

    1996-01-01

    Solarex, a business unit of Amoco/Enron Solar, is scaling up its a-Si:H/a-SiGe:H tandem device technology for the production of 8 ft² modules. The current R&D effort is focused on improving the performance, reliability, and cost-effectiveness of the tandem-junction technology by systematically optimizing the materials and interfaces in small-area single- and tandem-junction cells. Average initial conversion efficiencies of 8.8% at 85% yield have been obtained in pilot production runs with 4 ft² tandem modules.

  8. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. 
A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered, while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early: in linear chain-type compounds, at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.
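The "electron pair prescreening" idea, skipping pairs whose cheap surrogate energy estimate falls below a threshold, can be illustrated with a toy distance-based screen. The 1/R^6 surrogate, the 1-D chain geometry, and the threshold are stand-ins for illustration, not the actual DLPNO-NEVPT2 estimates:

```python
import numpy as np

# Toy pair prescreening on a 1-D "chain" of localized-orbital centers:
# estimate a dipole-dipole style pair energy ~ 1/R^6 and skip pairs
# below a threshold, so only spatially close pairs survive.
centers = np.arange(200, dtype=float)   # orbital centers, arbitrary units
thresh = 1e-6

kept = 0
for i in range(len(centers)):
    for j in range(i):
        r = centers[i] - centers[j]
        if 1.0 / r**6 >= thresh:        # surrogate pair-energy estimate
            kept += 1

total = len(centers) * (len(centers) - 1) // 2
print(kept, total)                      # 1945 of 19900 pairs survive
```

For a chain, the surviving pair count grows linearly with system size while the total pair count grows quadratically, which is the essence of the linear-scaling behavior described above.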

  9. Application and research of block caving in Pulang copper mine

    NASA Astrophysics Data System (ADS)

    Ge, Qifa; Fan, Wenlu; Zhu, Weigen; Chen, Xiaowei

    2018-01-01

    The application of block caving shows significant advantages in production scale, cost, and efficiency, so block caving is worth promoting in mines that meet the requirements for natural caving. Given the large production scale and low ore grade of the Pulang copper mine in China, comprehensive analysis and research were conducted on rock mechanics, mining sequence, undercutting, and the stability of the bottom structure, with the aims of raising mine benefit and maximizing the recovery of mineral resources. The study concludes that block caving is well suited to the Pulang copper mine.

  10. 1.55 μm room-temperature lasing from subwavelength quantum-dot microdisks directly grown on (001) Si

    NASA Astrophysics Data System (ADS)

    Shi, Bei; Zhu, Si; Li, Qiang; Tang, Chak Wah; Wan, Yating; Hu, Evelyn L.; Lau, Kei May

    2017-03-01

    Miniaturized laser sources can benefit a wide variety of applications, ranging from on-chip optical communications and data processing to biological sensing. There is tremendous interest in integrating these lasers with rapidly advancing silicon photonics, aiming to combine the strengths of optoelectronic integrated circuits and existing large-volume, low-cost silicon-based manufacturing foundries. Using III-V quantum dots as the active medium has been proven to lower power consumption and improve device temperature stability. Here, we demonstrate room-temperature InAs/InAlGaAs quantum-dot subwavelength microdisk lasers epitaxially grown on (001) Si, with a lasing wavelength of 1563 nm, an ultralow threshold of 2.73 μW, and lasing up to 60 °C under pulsed optical pumping. This result unambiguously offers a promising path towards large-scale integration of cost-effective and energy-efficient silicon-based long-wavelength lasers.

  11. Internationalization Measures in Large Scale Research Projects

    NASA Astrophysics Data System (ADS)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As global competition among universities for recruiting the brightest minds has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience in conducting these measures and making internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools for improving the capacity of the project and the research location. There is a variety of measures suited to support universities in international recruitment, including institutional partnerships, research marketing, a welcome culture, support for science mobility, and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful if interfaced effectively. On this poster we display a number of internationalization measures for various target groups and identify interfaces between project management, university administration, researchers, and international partners to work together, exchange information, and improve processes, in order to recruit, support, and retain the brightest minds for a project.

  12. The scale dependence of optical diversity in a prairie ecosystem

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Wang, R.; Stilwell, A.; Zygielbaum, A. I.; Cavender-Bares, J.; Townsend, P. A.

    2015-12-01

    Biodiversity loss, one of the most crucial challenges of our time, endangers the ecosystem services that maintain human wellbeing. Traditional methods of measuring biodiversity require extensive and costly field sampling by biologists with extensive experience in species identification. Remote sensing can be used for such assessment based upon patterns of optical variation, providing an efficient and cost-effective means to determine ecosystem diversity at different scales and over large areas. Sampling scale has been described as a "fundamental conceptual problem" in ecology, and is an important practical consideration in both remote sensing and traditional biodiversity studies. On the one hand, with decreasing spatial and spectral resolution, the differences among optical types may become weak or even disappear. On the other hand, high spatial and/or spectral resolution may introduce redundant or contradictory information; for example, at high resolution, the variation within optical types (e.g., between leaves in a single plant canopy) may add complexity unrelated to species richness. We studied the scale dependence of optical diversity in a prairie ecosystem at the Cedar Creek Ecosystem Science Reserve, Minnesota, USA, using a variety of spectrometers from several platforms on the ground and in the air. Using the coefficient of variation (CV) of spectra as an indicator of optical diversity, we found that high-richness plots generally have a higher coefficient of variation. High-resolution imaging spectrometer data (1 mm pixels) showed the highest sensitivity to richness level. With decreasing spatial resolution, the difference in CV between richness levels decreased, but remained significant. These findings can be used to guide airborne studies of biodiversity and to develop more effective large-scale biodiversity sampling methods.
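The CV-based optical diversity index lends itself to a compact sketch. The reflectance levels, band count, and plot sizes below are invented for illustration and are not the Cedar Creek data:

```python
import numpy as np

# Coefficient of variation (CV) across pixel spectra as a simple optical
# diversity index: a plot mixing two spectral "types" should score higher
# than a spectrally uniform plot of the same size.
rng = np.random.default_rng(1)

def optical_cv(spectra):
    """Mean over bands of the across-pixel CV (std/mean of reflectance)."""
    return float(np.mean(spectra.std(axis=0) / spectra.mean(axis=0)))

n_pix, n_bands = 100, 50
type_a = 0.3 + 0.05 * rng.random((n_pix, n_bands))   # one "optical type"
type_b = 0.6 + 0.05 * rng.random((n_pix, n_bands))   # a spectrally distinct type
diverse_plot = np.vstack([type_a, type_b])           # two-type (richer) plot
uniform_plot = 0.3 + 0.05 * rng.random((2 * n_pix, n_bands))

print(optical_cv(diverse_plot) > optical_cv(uniform_plot))   # True
```

Coarsening the pixels (averaging neighboring spectra before computing the CV) shrinks the gap between the two plots, which mirrors the scale dependence reported above.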

  13. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional means of procuring hardware on-premise are already limited by facility capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud science data system (HySDS) to support large-scale processing of Earth science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise in large-scale computing. Novel approaches were utilized to process data on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.

  14. A comparative assessment of the financial costs and carbon benefits of REDD+ strategies in Southeast Asia

    NASA Astrophysics Data System (ADS)

    Graham, Victoria; Laurance, Susan G.; Grech, Alana; McGregor, Andrew; Venter, Oscar

    2016-11-01

    REDD+ holds potential for mitigating emissions from tropical forest loss by providing financial incentives for carbon stored in forests, but its economic viability is under scrutiny. The primary narrative raised in the literature is that REDD+ will be of limited utility for reducing forest carbon loss in Southeast Asia, as the level of finance committed falls short of profits from alternative land-use activities in the region, including large-scale timber and oil palm operations. Here we assess the financial costs and carbon benefits of various REDD+ strategies deployed in the region. We find the cost of reducing emissions ranges from 9 to 75 per tonne of avoided carbon emissions. The strategies focused on reducing forest degradation and promoting forest regrowth are the most cost-effective ways of reducing emissions and are used in over 60% of REDD+ projects. By comparing the financial costs and carbon benefits of a broader range of strategies than previously assessed, we highlight the variation between different strategies and draw attention to opportunities where REDD+ can achieve maximum carbon benefits cost-effectively. These findings have broad policy implications for Southeast Asia. Until carbon finance escalates, emissions reductions can be maximized from reforestation, reduced-impact logging, and investing in improved management of protected areas. Targeting cost-efficient opportunities for REDD+ is important to improve the efficiency of national REDD+ policy, which in turn fosters greater financial and political support for the scheme.

  15. Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.

    PubMed

    Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco

    2018-06-07

    Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation for enhancing the recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published on their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review paper critically analyzes ATPS scale-up strategies to enhance their potential industrial adoption. In particular, large-scale operation considerations, different phase-separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) for maximizing recovery and purity, and economic modeling to predict large-scale costs are discussed. ATPS intensification to increase the amount of sample processed per system, the development of recycling strategies, and the creation of highly efficient predictive models are still areas of great significance that can be further exploited with the use of high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity, increasing the possibilities for future industrial adoption of ATPS. This review attempts to present the areas of opportunity for increasing the attractiveness of ATPS at industrial levels. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantages of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust in both single and consecutive impact force reconstruction.
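The underlying l1-regularized deconvolution model can be sketched compactly. The paper solves it with PDIPM; here plain ISTA iterations stand in as a solver, since both minimize ||H f - y||_2^2 + lam ||f||_1. The kernel, sizes, and lam below are illustrative assumptions:

```python
import numpy as np

# Sparse deconvolution of impact forces: y = H f + noise, with H a
# Toeplitz convolution matrix built from a toy damped-oscillation
# impulse response, and f a sparse force history.
rng = np.random.default_rng(2)
n = 200
s = np.linspace(0.0, 5.0, 40)
h = np.exp(-s) * np.sin(2.4 * s)              # toy impulse response
H = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 39), i + 1):
        H[i, j] = h[i - j]

f_true = np.zeros(n)
f_true[[30, 120]] = [5.0, 3.0]                # two sparse impact events
y = H @ f_true + 0.01 * rng.standard_normal(n)

lam = 0.05
L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
f = np.zeros(n)
for _ in range(500):                          # ISTA: gradient step + soft threshold
    f = f - H.T @ (H @ f - y) / L
    f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)

top_two = np.argsort(np.abs(f))[-2:]          # locations of the largest forces
print(sorted(top_two.tolist()))
```

An l2 (Tikhonov) penalty in the same model would smear the recovered force over many samples; the soft-threshold step is what drives most entries of f to exactly zero.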

  17. Scale effects between body size and limb design in quadrupedal mammals.

    PubMed

    Kilbourne, Brandon M; Hoffman, Louwrens C

    2013-01-01

    Recently the metabolic cost of swinging the limbs has been found to be much greater than previously thought, raising the possibility that limb rotational inertia influences the energetics of locomotion. Larger mammals have a lower mass-specific cost of transport than smaller mammals. The scaling of the mass-specific cost of transport is partly explained by decreasing stride frequency with increasing body size; however, it is unknown if limb rotational inertia also influences the mass-specific cost of transport. Limb length and inertial properties--limb mass, center of mass (COM) position, moment of inertia, radius of gyration, and natural frequency--were measured in 44 species of terrestrial mammals, spanning eight taxonomic orders. Limb length increases disproportionately with body mass via positive allometry (length ∝ body mass(0.40)); the positive allometry of limb length may help explain the scaling of the metabolic cost of transport. When scaled against body mass, forelimb inertial properties, apart from mass, scale with positive allometry. Fore- and hindlimb mass scale according to geometric similarity (limb mass ∝ body mass(1.0)), as do the remaining hindlimb inertial properties. The positive allometry of limb length is largely the result of absolute differences in limb inertial properties between mammalian subgroups. Though likely detrimental to locomotor costs in large mammals, scale effects in limb inertial properties appear to be concomitant with scale effects in sensorimotor control and locomotor ability in terrestrial mammals. Across mammals, the forelimb's potential for angular acceleration scales according to geometric similarity, whereas the hindlimb's potential for angular acceleration scales with positive allometry.
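The reported scaling relations come from log-log regressions of limb properties against body mass. A minimal sketch with synthetic data, where the 44 "species" and the noise level are assumptions:

```python
import numpy as np

# Recover an allometric exponent from a log-log fit, mirroring
# "limb length ∝ body mass^0.40". Data are synthetic, not the
# measured mammal sample.
rng = np.random.default_rng(3)
mass = np.logspace(-1, 3, 44)                            # body mass, kg
length = 0.3 * mass**0.40 * np.exp(0.05 * rng.standard_normal(44))

slope, intercept = np.polyfit(np.log10(mass), np.log10(length), 1)
print(round(slope, 2))                                   # ~0.4
```

A fitted exponent above the geometric-similarity prediction (1/3 for a length against mass) is what the abstract calls positive allometry.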

  18. Scale Effects between Body Size and Limb Design in Quadrupedal Mammals

    PubMed Central

    Kilbourne, Brandon M.; Hoffman, Louwrens C.

    2013-01-01

    Recently the metabolic cost of swinging the limbs has been found to be much greater than previously thought, raising the possibility that limb rotational inertia influences the energetics of locomotion. Larger mammals have a lower mass-specific cost of transport than smaller mammals. The scaling of the mass-specific cost of transport is partly explained by decreasing stride frequency with increasing body size; however, it is unknown if limb rotational inertia also influences the mass-specific cost of transport. Limb length and inertial properties – limb mass, center of mass (COM) position, moment of inertia, radius of gyration, and natural frequency – were measured in 44 species of terrestrial mammals, spanning eight taxonomic orders. Limb length increases disproportionately with body mass via positive allometry (length ∝ body mass0.40); the positive allometry of limb length may help explain the scaling of the metabolic cost of transport. When scaled against body mass, forelimb inertial properties, apart from mass, scale with positive allometry. Fore- and hindlimb mass scale according to geometric similarity (limb mass ∝ body mass1.0), as do the remaining hindlimb inertial properties. The positive allometry of limb length is largely the result of absolute differences in limb inertial properties between mammalian subgroups. Though likely detrimental to locomotor costs in large mammals, scale effects in limb inertial properties appear to be concomitant with scale effects in sensorimotor control and locomotor ability in terrestrial mammals. Across mammals, the forelimb's potential for angular acceleration scales according to geometric similarity, whereas the hindlimb's potential for angular acceleration scales with positive allometry. PMID:24260117

  19. Risk of large-scale evacuation based on the effectiveness of rescue strategies under different crowd densities.

    PubMed

    Wang, Jinghong; Lo, Siuming; Wang, Qingsong; Sun, Jinhua; Mu, Honglin

    2013-08-01

    Crowd density is a key factor that influences the moving characteristics of a large group of people during a large-scale evacuation. In this article, the macro features of crowd flow and subsequent rescue strategies were considered, and a series of characteristic crowd densities that affect large-scale people movement, as well as the maximum bearing density when the crowd is extremely congested, were analyzed. On the basis of these characteristic crowd densities, queuing theory was applied to simulate crowd movement. Accordingly, the moving characteristics of the crowd, and the effects on rescue strategies of the typical crowd density (viewed as representing the crowd's arrival intensity in front of the evacuation passageways), were studied. Furthermore, a "risk axle of crowd density" is proposed to determine the efficiency of rescue strategies in a large-scale evacuation, i.e., whether the rescue strategies are able to effectively maintain or improve evacuation efficiency. Finally, through some rational hypotheses for the value of evacuation risk, a three-dimensional distribution of the evacuation risk is established to illustrate the risk axle of crowd density. This work offers a macro-level, but original, analysis of the risk of large-scale crowd evacuation from the perspective of the efficiency of rescue strategies. © 2012 Society for Risk Analysis.
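As a minimal illustration of the queuing-theory ingredient, a steady-state M/M/1 model relates the crowd's arrival intensity at a passageway to the resulting congestion. The rates below are invented, and the paper's model is richer than this single-server sketch:

```python
# Steady-state M/M/1 queue at an evacuation passageway:
# lam = arrival rate (people/s), mu = passage service rate (people/s).
def mm1_metrics(lam, mu):
    """Return utilization, mean number in system, and mean time in system."""
    if lam >= mu:
        raise ValueError("unstable queue: arrivals meet or exceed capacity")
    rho = lam / mu                 # utilization of the passageway
    n_mean = rho / (1.0 - rho)     # mean number of people queued/in service
    w_mean = 1.0 / (mu - lam)      # mean time in system (Little's law)
    return rho, n_mean, w_mean

print(mm1_metrics(1.5, 2.0))       # (0.75, 3.0, 2.0)
```

The blow-up of the wait time as lam approaches mu is the queueing analogue of the critical crowd densities the abstract analyzes.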

  20. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    PubMed

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
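The two unit costs benchmarked above are simple ratios. A sketch with invented single-service figures (not NMHBP data), chosen so the results land near the reported averages:

```python
# Hypothetical annual figures for one community CAMHS service.
total_cost = 1_115_000.0     # annual cost of community care, $
treatment_hours = 5_000.0    # clinician contact hours delivered
episodes = 333               # completed episodes of care

cost_per_hour = total_cost / treatment_hours
cost_per_episode = total_cost / episodes
print(round(cost_per_hour), round(cost_per_episode))   # 223 3348
```

Comparing these ratios across services, as the study does, only becomes meaningful once casemix (e.g., HoNOSCA intake severity) is taken into account.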

  1. Towards fully spray coated organic light emitting devices

    NASA Astrophysics Data System (ADS)

    Gilissen, Koen; Stryckers, Jeroen; Manca, Jean; Deferme, Wim

    2014-10-01

    Pi-conjugated polymer light emitting devices have the potential to be the next generation of solid state lighting. In order to achieve this goal, a low-cost, efficient, large-area production process is essential. Polymer-based light emitting devices are generally deposited using techniques based on solution processing, e.g., spin coating and inkjet printing. These techniques are not well suited for cost-effective, high-throughput, large-area mass production of these organic devices. Ultrasonic spray deposition, however, is a deposition technique that is fast, efficient, and roll-to-roll compatible, and can easily be scaled up for the production of large-area polymer light emitting devices (PLEDs). This deposition technique has already been employed successfully to produce organic photovoltaic devices (OPV) [1]. Recently, the electron blocking layer PEDOT:PSS [2] and the metal top contact [3] have been successfully spray coated as part of the organic photovoltaic device stack. In this study, the effects of ultrasonic spray deposition on polymer light emitting devices are investigated. For the first time, to our knowledge, spray coating of the active layer in a PLED is demonstrated. Different solvents are tested to achieve the best possible sprayable dispersion. The active layer morphology is characterized and optimized to produce uniform films of optimal thickness. Furthermore, these ultrasonic spray coated films are incorporated into the polymer light emitting device stack to investigate the device characteristics and efficiency. Our results show that, after careful optimization of the active layer, ultrasonic spray coating is a prime candidate as a deposition technique for mass production of PLEDs.

  2. Energy Storage for the Power Grid

    ScienceCinema

    Imhoff, Carl; Vaishnav, Dave; Wang, Wei

    2018-05-30

    The iron vanadium redox flow battery was developed by researchers at Pacific Northwest National Laboratory as a solution to large-scale energy storage for the power grid. This technology provides the energy industry and the nation with a reliable, stable, safe, and low-cost storage alternative for a cleaner, efficient energy future.

  3. A Short History of Performance Assessment: Lessons Learned.

    ERIC Educational Resources Information Center

    Madaus, George F.; O'Dwyer, Laura M.

    1999-01-01

    Places performance assessment in the context of high-stakes uses, describes underlying technologies, and outlines the history of performance testing from 210 B.C.E. to the present. Historical issues of fairness, efficiency, cost, and infrastructure influence contemporary efforts to use performance assessments in large-scale, high-stakes testing…

  4. A feasibility study of large-scale photobiological hydrogen production utilizing mariculture-raised cyanobacteria.

    PubMed

    Sakurai, Hidehiro; Masukawa, Hajime; Kitashima, Masaharu; Inoue, Kazuhito

    2010-01-01

    In order to decrease CO2 emissions from the burning of fossil fuels, the development of new renewable energy sources of sufficiently large quantity is essential. To meet this need, we propose large-scale H2 production on the sea surface utilizing cyanobacteria. Although many of the relevant technologies are at an early stage of development, this chapter briefly examines the feasibility of such H2 production, in order to illustrate that under certain conditions large-scale photobiological H2 production can be viable. Assuming that solar energy is converted to H2 at 1.2% efficiency, the future cost of H2 can be estimated at about 11 cents per kWh with pipeline delivery and 26.4 cents per kWh with compression and marine transportation.
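The assumed 1.2% efficiency fixes the areal energy yield, which can be checked with back-of-the-envelope arithmetic. The mean-insolation value below is an assumption, not a number from the chapter:

```python
# Annual H2 energy yield per square metre of sea surface at the assumed
# 1.2% solar-to-H2 conversion efficiency.
mean_insolation_w_m2 = 200.0      # assumed annual-mean surface solar flux, W/m^2
efficiency = 0.012                # solar-to-H2 conversion efficiency
hours_per_year = 8760.0

h2_kwh_per_m2_year = mean_insolation_w_m2 * efficiency * hours_per_year / 1000.0
print(round(h2_kwh_per_m2_year, 3))    # ~21 kWh per m^2 per year
```

Yields of this order are why the proposal requires very large sea-surface areas, and why the per-kWh cost hinges on cheap collection and transport.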

  5. A stepped strategy that aims at the nationwide implementation of the Enhanced Recovery After Surgery programme in major gynaecological surgery: study protocol of a cluster randomised controlled trial.

    PubMed

    de Groot, Jeanny Ja; Maessen, José Mc; Slangen, Brigitte Fm; Winkens, Bjorn; Dirksen, Carmen D; van der Weijden, Trudy

    2015-07-30

    Enhanced Recovery After Surgery (ERAS) programmes aim at an early recovery after surgical trauma and consequently at a reduced length of hospitalisation. This paper presents the protocol for a study that focuses on large-scale implementation of the ERAS programme in major gynaecological surgery in the Netherlands. The trial will evaluate effectiveness and costs of a stepped implementation approach that is characterised by tailoring the intensity of implementation activities to the needs of organisations and local barriers for change, in comparison with the generic breakthrough strategy that is usually applied in large-scale improvement projects in the Netherlands. All Dutch hospitals authorised to perform major abdominal surgery in gynaecological oncology patients are eligible for inclusion in this cluster randomised controlled trial. The hospitals that already fully implemented the ERAS programme in their local perioperative management or those who predominantly admit gynaecological surgery patients to an external hospital replacement care facility will be excluded. Cluster randomisation will be applied at the hospital level and will be stratified based on tertiary status. Hospitals will be randomly assigned to the stepped implementation strategy or the breakthrough strategy. The control group will receive the traditional breakthrough strategy with three educational sessions and the use of plan-do-study-act cycles for planning and executing local improvement activities. The intervention group will receive an innovative stepped strategy comprising four levels of intensity of support. Implementation starts with generic low-cost activities and may build up to the highest level of tailored and labour-intensive activities. The decision for a stepwise increase in intensive support will be based on the success of implementation so far. Both implementation strategies will be completed within 1 year and evaluated on effect, process, and cost-effectiveness. 
The primary outcome is length of postoperative hospital stay. Additional outcome measures are length of recovery, guideline adherence, and mean implementation costs per patient. This study takes up the challenge to evaluate an efficient strategy for large-scale implementation. Comparing effectiveness and costs of two different approaches, this study will help to define a preferred strategy for nationwide dissemination of best practices. Dutch Trial Register NTR4058.

  6. Improved uniformity in high-performance organic photovoltaics enabled by (3-aminopropyl)triethoxysilane cathode functionalization.

    PubMed

    Luck, Kyle A; Shastry, Tejas A; Loser, Stephen; Ogien, Gabriel; Marks, Tobin J; Hersam, Mark C

    2013-12-28

Organic photovoltaics have the potential to serve as lightweight, low-cost, mechanically flexible solar cells. However, losses in efficiency as laboratory cells are scaled up to the module level have to date impeded large-scale deployment. Here, we report that a 3-aminopropyltriethoxysilane (APTES) cathode interfacial treatment significantly enhances performance reproducibility in inverted high-efficiency PTB7:PC71BM organic photovoltaic cells, as demonstrated by the fabrication of 100 APTES-treated devices versus 100 untreated controls. The APTES-treated devices achieve a power conversion efficiency of 8.08 ± 0.12% with histogram skewness of -0.291, whereas the untreated controls achieve 7.80 ± 0.26% with histogram skewness of -1.86. By substantially suppressing the interfacial origins of underperforming cells, the APTES treatment offers a pathway for fabricating large-area modules with high spatial performance uniformity.
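
    The comparison above rests on three summary statistics: mean, standard deviation, and histogram skewness, where a strongly negative skewness signals a tail of underperforming cells. A minimal sketch of that comparison, using made-up efficiency values rather than the study's raw data:

```python
# Comparing two device-efficiency distributions by mean and skewness.
# The numbers below are illustrative, not data from the APTES study.
import statistics

def skewness(xs):
    """Fisher-Pearson population skewness: E[(x - mean)^3] / sd^3."""
    m = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * sd ** 3)

# A tight, near-symmetric batch vs. one with a tail of underperforming cells.
treated = [8.0, 8.1, 8.0, 8.2, 8.1, 8.0, 8.1, 8.2, 8.0, 8.1]
control = [7.9, 8.0, 7.9, 8.0, 7.8, 8.0, 7.9, 6.5, 7.9, 8.0]

print(skewness(treated))  # close to zero: symmetric yield
print(skewness(control))  # strongly negative: low-efficiency tail
```

    A distribution with one bad outlier per ten devices already shows the strongly negative skewness the abstract associates with untreated controls, even when the means differ only slightly.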

  7. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
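
    The Hotelling inversion the abstract refers to is the classic iteration X_{k+1} = X_k(2I − A X_k), which converges quadratically whenever ||I − A X_0|| < 1. A generic pure-Python sketch on a tiny dense matrix (the paper's version additionally filters small elements to keep the products sparse):

```python
# Sketch of Hotelling's iterative matrix inversion on a 2x2 example.
# Illustrative only; the paper applies a filtered sparse variant.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def hotelling_inverse(A, X0, iters=20):
    """X_{k+1} = X_k (2I - A X_k); converges when ||I - A X0|| < 1."""
    n = len(A)
    X = [row[:] for row in X0]
    for _ in range(iters):
        AX = matmul(A, X)
        # R = 2I - A X
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = matmul(X, R)
    return X

A = [[4.0, 1.0], [1.0, 3.0]]
X0 = [[0.25, 0.0], [0.0, 1.0 / 3.0]]  # rough guess: inverse of the diagonal
Ainv = hotelling_inverse(A, X0)
I = matmul(A, Ainv)  # should be close to the identity
```

    During MD the inverse from the previous step serves as X0, so only a few iterations (each just two matrix multiplications, no diagonalisation) are needed per step.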

  8. An Evaluation of the Cost Effectiveness of Alternative Compensatory Reading Programs, Volume IV: Cost Analysis of Summer Programs. Final Report.

    ERIC Educational Resources Information Center

    Al-Salam, Nabeel; Flynn, Donald L.

    This report describes the results of a study of the cost and cost effectiveness of 27 summer reading programs, carried through as part of a large-scale evaluation of compensatory reading programs. Three other reports describe cost and cost-effectiveness studies of programs during the regular school year. On an instructional-hour basis, the total…

  9. Improved Magnetron Stability and Reduced Noise in Efficient Transmitters for Superconducting Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazakevich, G.; Johnson, R.; Lebedev, V.

State-of-the-art high-current superconducting accelerators require efficient RF sources with fast dynamic phase and power control. This allows for compensation of the phase and amplitude deviations of the accelerating voltage in the Superconducting RF (SRF) cavities caused by microphonics, etc. Efficient magnetron transmitters with fast phase and power control are attractive RF sources for this application. They are more cost effective than traditional RF sources such as klystrons, IOTs, and solid-state amplifiers used with large-scale accelerator projects. However, unlike traditional RF sources, controlled magnetrons operate as forced oscillators. Study of the impact of the controlling signal on magnetron stability, noise, and efficiency is therefore important. This paper discusses experiments with 2.45 GHz, 1 kW tubes and verifies our analytical model which is based on the charge drift approximation.

  10. Enhancing Solar Cell Efficiencies through 1-D Nanostructures

    PubMed Central

    2009-01-01

    The current global energy problem can be attributed to insufficient fossil fuel supplies and excessive greenhouse gas emissions resulting from increasing fossil fuel consumption. The huge demand for clean energy potentially can be met by solar-to-electricity conversions. The large-scale use of solar energy is not occurring due to the high cost and inadequate efficiencies of existing solar cells. Nanostructured materials have offered new opportunities to design more efficient solar cells, particularly one-dimensional (1-D) nanomaterials for enhancing solar cell efficiencies. These 1-D nanostructures, including nanotubes, nanowires, and nanorods, offer significant opportunities to improve efficiencies of solar cells by facilitating photon absorption, electron transport, and electron collection; however, tremendous challenges must be conquered before the large-scale commercialization of such cells. This review specifically focuses on the use of 1-D nanostructures for enhancing solar cell efficiencies. Other nanostructured solar cells or solar cells based on bulk materials are not covered in this review. Major topics addressed include dye-sensitized solar cells, quantum-dot-sensitized solar cells, and p-n junction solar cells.

  11. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

Recent advances in cloud computing made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable, and delivers similar performance.
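
    The stability screen described above reduces to evaluating formation energies against elemental references: a compound is a candidate when its energy lies below the composition-weighted sum of the elemental energies. A minimal sketch of that bookkeeping, with placeholder energies rather than results from the study:

```python
# Formation energy per atom relative to elemental references.
# All energies below are made-up placeholders, not DFT results.

def formation_energy_per_atom(e_total, counts, mu):
    """E_f = (E_total - sum_i n_i * mu_i) / N_atoms.
    Negative => stable against decomposition into the pure elements."""
    n_atoms = sum(counts.values())
    return (e_total - sum(n * mu[el] for el, n in counts.items())) / n_atoms

# Hypothetical Li-Mg-Al cell: elemental chemical potentials in eV/atom.
mu = {"Li": -1.90, "Mg": -1.50, "Al": -3.75}
counts = {"Li": 2, "Mg": 1, "Al": 1}
e_total = -9.40  # placeholder total energy of the 4-atom cell (eV)

ef = formation_energy_per_atom(e_total, counts, mu)
print(ef)  # negative here, i.e. nominally stable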

  12. Investigation of optimum conditions and costs estimation for degradation of phenol by solar photo-Fenton process

    NASA Astrophysics Data System (ADS)

    Gar Alalm, Mohamed; Tawfik, Ahmed; Ookawara, Shinichi

    2017-03-01

In this study, a solar photo-Fenton reaction using a compound parabolic collector reactor was assessed for removal of phenol from aqueous solution. The effects of irradiation time, initial concentration, initial pH, and Fenton reagent dosage were investigated. H2O2 and aromatic intermediates (catechol, benzoquinone, and hydroquinone) were quantified during the reaction to study the pathways of the oxidation process. Complete degradation of phenol was achieved after 45 min of irradiation when the initial concentration was 100 mg/L. However, increasing the initial concentration up to 500 mg/L inhibited the degradation efficiency. The dosages of H2O2 and Fe2+ significantly affected the degradation efficiency of phenol. The observed optimum pH for the reaction was 3.1. Phenol degradation at different concentrations was fitted to pseudo-first-order kinetics according to the Langmuir-Hinshelwood model. Cost estimation for a large-scale reactor was performed. The total cost under the most economical condition achieving maximum degradation of phenol is 2.54 €/m3.
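
    Under the Langmuir-Hinshelwood model at low concentration, the kinetics reduce to pseudo-first-order form, ln(C0/C) = k_app·t, so the apparent rate constant is the slope of ln(C0/C) against time. A minimal fitting sketch with illustrative concentration data (not measurements from the study):

```python
# Pseudo-first-order fit: ln(C0/C) = k_app * t.
# Data points are illustrative, not the study's measurements.
import math

def fit_k_app(times, concs):
    """Least-squares slope of ln(C0/C) vs t through the origin."""
    c0 = concs[0]
    ys = [math.log(c0 / c) for c in concs]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

times = [0.0, 10.0, 20.0, 30.0, 45.0]   # irradiation time, min
concs = [100.0, 36.8, 13.5, 5.0, 1.1]   # phenol, mg/L (roughly exponential)
k = fit_k_app(times, concs)
print(k)  # apparent rate constant, 1/min
```

    Fitting separate k_app values at several initial concentrations, as the study does, reveals the inhibition at high loading: k_app falls as C0 rises.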

  13. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
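
    For readers unfamiliar with damping identification, the simplest single-degree-of-freedom instance is the logarithmic-decrement method; this is a deliberately simplified illustration, not the multi-DOF iterative or least-squares algorithms the paper benchmarks:

```python
# Logarithmic-decrement damping identification for one degree of freedom.
# A simplified stand-in for the paper's large-scale matrix methods.
import math

def log_decrement_zeta(peaks):
    """Damping ratio from successive free-decay peak amplitudes:
    delta = ln(x_i / x_{i+1}),  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    deltas = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    d = sum(deltas) / len(deltas)
    return d / math.sqrt(4 * math.pi ** 2 + d ** 2)

# Synthetic free-decay peaks for a true damping ratio of 5%.
zeta_true = 0.05
d = 2 * math.pi * zeta_true / math.sqrt(1 - zeta_true ** 2)
peaks = [math.exp(-d * i) for i in range(6)]
print(log_decrement_zeta(peaks))  # recovers the 5% damping ratio
```

    The large-scale problem the paper addresses is the matrix generalization of this: identifying a full damping matrix from many such response histories, which is where memory growth and communication cost become the limiting factors.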

  14. Relay discovery and selection for large-scale P2P streaming

    PubMed Central

    Zhang, Chengwei; Wang, Angela Yunxian

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384
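
    The two-phase approach can be sketched directly: cheap indirect estimates shortlist a few candidates, and direct probes decide among them, sidestepping the "best-out-of-K" error amplification of ranking on estimates alone. The relay names, estimates, and probe RTTs below are made-up illustrations:

```python
# Two-phase relay selection: coarse estimates narrow the field,
# direct probes pick the winner. All values are illustrative.

def select_relay(candidates, coarse_rtt, probe_rtt, shortlist_size=3):
    """Phase 1: keep the shortlist_size relays with the lowest estimated
    RTT (cheap, indirect measurement). Phase 2: directly probe only those
    and return the true minimum, avoiding probes to the whole overlay."""
    shortlist = sorted(candidates, key=lambda c: coarse_rtt[c])[:shortlist_size]
    return min(shortlist, key=probe_rtt)

# Indirect (coordinate-system-style) estimates in ms -- coarse and noisy.
coarse = {"r1": 40, "r2": 55, "r3": 40, "r4": 120, "r5": 48}

# Direct probes (simulated): the estimates misrank r1 and r3.
true_rtt = {"r1": 62, "r2": 70, "r3": 35, "r4": 118, "r5": 51}
best = select_relay(list(coarse), coarse, lambda c: true_rtt[c])
print(best)
```

    Note how a pure phase-1 choice would pick r1 on the tied coarse estimate, while the phase-2 probe corrects this to r3; in the paper the shortlist itself comes from a DHT lookup keyed on location-aware node identifiers.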

  15. Relay discovery and selection for large-scale P2P streaming.

    PubMed

    Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.

  16. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis.

    PubMed

    Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju

    2010-07-01

Constrained by insufficient land resources, many countries such as Japan and Germany consider incinerators the major technology in a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA)--a production economics tool--to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the probability distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromised assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
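
    The core DEA idea is to score each decision-making unit against the best observed performer. In the simplest single-input, single-output case (a heavy simplification of the multi-input CCR model the paper uses), the score reduces to each plant's output/input ratio normalized by the best ratio. A sketch with invented plant data:

```python
# Single-input, single-output DEA-style efficiency scores.
# A simplification of the CCR model; plant data are invented.

def dea_scores(units):
    """units: {name: (input, output)} -> {name: efficiency in (0, 1]}."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical plants: (operating cost, tonnes of waste treated).
plants = {"A": (100.0, 900.0), "B": (80.0, 640.0), "C": (120.0, 1200.0)}
scores = dea_scores(plants)
print(scores)  # the frontier plant scores 1.0; the rest score below it
```

    The paper's Monte Carlo layer repeats such scoring over sampled operating conditions to obtain a distribution of efficiencies per incinerator rather than a single point value.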

  17. Tuning Chemical Potential Difference across Alternately Doped Graphene p-n Junctions for High-Efficiency Photodetection.

    PubMed

    Lin, Li; Xu, Xiang; Yin, Jianbo; Sun, Jingyu; Tan, Zhenjun; Koh, Ai Leen; Wang, Huan; Peng, Hailin; Chen, Yulin; Liu, Zhongfan

    2016-07-13

    Being atomically thin, graphene-based p-n junctions hold great promise for applications in ultrasmall high-efficiency photodetectors. It is well-known that the efficiency of such photodetectors can be improved by optimizing the chemical potential difference of the graphene p-n junction. However, to date, such tuning has been limited to a few hundred millielectronvolts. To improve this critical parameter, here we report that using a temperature-controlled chemical vapor deposition process, we successfully achieved modulation-doped growth of an alternately nitrogen- and boron-doped graphene p-n junction with a tunable chemical potential difference up to 1 eV. Furthermore, such p-n junction structure can be prepared on a large scale with stable, uniform, and substitutional doping and exhibits a single-crystalline nature. This work provides a feasible method for synthesizing low-cost, large-scale, high efficiency graphene p-n junctions, thus facilitating their applications in optoelectronic and energy conversion devices.

  18. Falling Particles: Concept Definition and Capital Cost Estimate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoddard, Larry; Galluzzo, Geoff; Adams, Shannon

    2016-06-30

The Department of Energy’s (DOE) Office of Renewable Power (ORP) has been tasked to provide effective program management and strategic direction for all of the DOE’s Energy Efficiency & Renewable Energy’s (EERE’s) renewable power programs. The ORP’s efforts to accomplish this mission are aligned with national energy policies, DOE strategic planning, EERE’s strategic planning, Congressional appropriation, and stakeholder advice. ORP is supported by three renewable energy offices, of which one is the Solar Energy Technology Office (SETO) whose SunShot Initiative has a mission to accelerate research, development and large-scale deployment of solar technologies in the United States. SETO has a goal of reducing the cost of Concentrating Solar Power (CSP) by 75 percent of 2010 costs by 2020 to reach parity with base-load energy rates, and to reduce costs 30 percent further by 2030. The SunShot Initiative is promoting the implementation of high temperature CSP with thermal energy storage allowing generation during high demand hours. The SunShot Initiative has funded significant research and development work on component testing, with attention to high temperature molten salts, heliostats, receiver designs, and high efficiency high temperature supercritical CO2 (sCO2) cycles.

  19. Analysis of efficiency of waste reverse logistics for recycling.

    PubMed

    Veiga, Marcelo M

    2013-10-01

Brazil is an agricultural country with the highest pesticide consumption in the world. Historically, pesticide packaging has not been disposed of properly. A federal law requires the chemical industry to provide proper waste management for pesticide-related products. A reverse logistics program was implemented, which has been hailed a great success. This program was designed to target large rural communities, where economies of scale can take place. Over the last 10 years, the recovery rate has been very poor in most small rural communities. The objective of this study was to analyze the case of this compulsory reverse logistics program for pesticide packaging under the recent Brazilian Waste Management Policy, which enforces recycling as the main waste management solution. The results of this exploratory research indicate that despite its aggregate success, the reverse logistics program is not efficient for small rural communities. It is not possible to use the same logistic strategy for small and large communities. The results also indicate that recycling might not be the optimal solution, especially in developing countries with unsatisfactory recycling infrastructure and large transportation costs. Postponement and speculation strategies could be applied to improve reverse logistics performance. In most compulsory reverse logistics programs, there is no economical solution. Companies should comply with the law by ranking cost-effective alternatives.

  20. Accessing Secondary Markets as a Capital Source for Energy Efficiency Finance Programs: Program Design Considerations for Policymakers and Administrators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, C.; Martin, E. Fadrhonc; Thompson, P.

Estimates of the total opportunity for investment in cost-effective energy efficiency in the United States are typically in the range of several hundred billion dollars (Choi Granade, et al., 2009 and Fulton & Brandenburg, 2012). To access this potential, many state policymakers and utility regulators have established aggressive energy efficiency savings targets. Current levels of taxpayer and utility bill-payer funding for energy efficiency are only a small fraction of the total investment needed to meet these targets (SEE Action Financing Solutions Working Group, 2013). Given this challenge, some energy efficiency program administrators are working to access private capital sources with the aim of amplifying the funds available for investment. In this context, efficient access to secondary market capital has been advanced as one important enabler of the energy efficiency industry “at scale.” The question of what role secondary markets can play in bringing energy efficiency to scale is largely untested despite extensive attention from media, technical publications, advocates, and others. Only a handful of transactions of energy efficiency loan products have been executed to date, and it is too soon to draw robust conclusions from these deals. At the same time, energy efficiency program administrators and policymakers face very real decisions regarding whether and how to access secondary markets as part of their energy efficiency deployment strategy.

  1. Strategy for large-scale isolation of enantiomers in drug discovery.

    PubMed

    Leek, Hanna; Thunberg, Linda; Jonson, Anna C; Öhlén, Kristina; Klarqvist, Magnus

    2017-01-01

A strategy for large-scale chiral resolution is illustrated by the isolation of pure enantiomer from a 5 kg batch. Results from supercritical fluid chromatography are presented and compared with normal-phase liquid chromatography. Solubility of the compound in the supercritical mobile phase was shown to be the limiting factor. To circumvent this, extraction injection was used but was shown not to be efficient for this compound. Finally, a method for chiral resolution by crystallization was developed and applied to give a diastereomeric salt with an enantiomeric excess of 99% at a 91% yield. Direct access to a diverse separation tool box is shown to be essential for solving separation problems in the most cost- and time-efficient way. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    PubMed Central

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules of high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have immediate impact on the thin-film photovoltaic industry. PMID:24603964

  3. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost performance of photovoltaic devices, primarily owing to their improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules of high-performance nanostructured solar cells, but also demonstrate a highly practical process for fabricating efficient solar panels with 3-D nanostructures, and thus may have immediate impact on the thin-film photovoltaic industry.

  4. Reforming the wastewater treatment sector in Italy: Implications of plant size, structure, and scale economies

    NASA Astrophysics Data System (ADS)

    Fraquelli, Giovanni; Giandrone, Roberto

    2003-10-01

In the context of the restructuring of the water industry, this work examines the treatment processes of urban wastewaters in Italy, with reference to costs, size, and technology. The operating cost function of 103 plants confirms scope economies from vertical integration and strong economies of scale for the smaller structures, confirming the benefits of aggregating the existing small firms. A minimum efficient size of about 100,000 inhabitants, however, inhibits the creation of large monopolies at a local level and enables the maintenance of indirect competition. Among the explanatory variables of running costs, the pollution load of the input wastewater takes on high statistical significance and suggests environmental prevention, while the strong impact of sludge concentration means it should be considered in the new tariff systems. The recent introduction of advanced treatments is expensive, but the costs are balanced by a notable improvement in the pureness of the effluent waters. As for general environmental policies, it is necessary to find a good compromise between the need to improve the effectiveness of the existing plants and the investments in areas where the water treatment service is still nonexistent.

  5. Cost Scaling of a Real-World Exhaust Waste Heat Recovery Thermoelectric Generator: A Deeper Dive

    NASA Technical Reports Server (NTRS)

    Hendricks, Terry J.; Yee, Shannon; LeBlanc, Saniya

    2015-01-01

    Cost is equally important to power density or efficiency for the adoption of waste heat recovery thermoelectric generators (TEG) in many transportation and industrial energy recovery applications. In many cases the system design that minimizes cost (e.g., the $/W value) can be very different than the design that maximizes the system's efficiency or power density, and it is important to understand the relationship between those designs to optimize TEG performance-cost compromises. Expanding on recent cost analysis work and using more detailed system modeling, an enhanced cost scaling analysis of a waste heat recovery thermoelectric generator with more detailed, coupled treatment of the heat exchangers has been performed. In this analysis, the effect of the heat lost to the environment and updated relationships between the hot-side and cold-side conductances that maximize power output are considered. This coupled thermal and thermoelectric treatment of the exhaust waste heat recovery thermoelectric generator yields modified cost scaling and design optimization equations, which are now strongly dependent on the heat leakage fraction, exhaust mass flow rate, and heat exchanger effectiveness. This work shows that heat exchanger costs most often dominate the overall TE system costs, that it is extremely difficult to escape this regime, and in order to achieve TE system costs of $1/W it is necessary to achieve heat exchanger costs of $1/(W/K). Minimum TE system costs per watt generally coincide with maximum power points, but Preferred TE Design Regimes are identified where there is little cost penalty for moving into regions of higher efficiency and slightly lower power outputs. These regimes are closely tied to previously-identified low cost design regimes. This work shows that the optimum fill factor Fopt minimizing system costs decreases as heat losses increase, and increases as exhaust mass flow rate and heat exchanger effectiveness increase. 
These findings have profound implications for the design and operation of various thermoelectric (TE) waste heat recovery systems. This work highlights the importance of heat exchanger costs on the overall TEG system costs, quantifies the possible TEG performance-cost domain space based on heat exchanger effects, and provides a focus for future system research and development efforts.

  6. Low-Cost High-Pressure Hydrogen Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cropley, Cecelia C.; Norman, Timothy J.

Electrolysis of water, particularly in conjunction with renewable energy sources, is potentially a cost-effective and environmentally friendly method of producing hydrogen at dispersed forecourt sites, such as automotive fueling stations. The primary feedstock for an electrolyzer is electricity, which could be produced by renewable sources such as wind or solar that do not produce carbon dioxide or other greenhouse gas emissions. However, state-of-the-art electrolyzer systems are not economically competitive for forecourt hydrogen production due to their high capital and operating costs, particularly the cost of the electricity used by the electrolyzer stack. In this project, Giner Electrochemical Systems, LLC (GES) developed a low cost, high efficiency proton-exchange membrane (PEM) electrolysis system for hydrogen production at moderate pressure (300 to 400 psig). The electrolyzer stack operates at differential pressure, with hydrogen produced at moderate pressure while oxygen is evolved at near-atmospheric pressure, reducing the cost of the water feed and oxygen handling subsystems. The project included basic research on catalysts and membranes to improve the efficiency of the electrolysis reaction as well as development of advanced materials and component fabrication methods to reduce the capital cost of the electrolyzer stack and system. The project culminated in delivery of a prototype electrolyzer module to the National Renewable Energy Laboratory for testing at the National Wind Technology Center. Electrolysis cell efficiency of 72% (based on the lower heating value of hydrogen) was demonstrated using an advanced high-strength membrane developed in this project. This membrane would enable the electrolyzer system to exceed the DOE 2012 efficiency target of 69%. GES significantly reduced the capital cost of a PEM electrolyzer stack through development of low cost components and fabrication methods, including a 60% reduction in stack parts count.
Economic analysis indicates that hydrogen could be produced for $3.79 per gge at an electricity cost of $0.05/kWh by the lower-cost PEM electrolyzer developed in this project, assuming high-volume production of large-scale electrolyzer systems.
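As a sanity check on the quoted figures, the electricity share of the $3.79/gge cost can be reproduced from the demonstrated 72% LHV efficiency. This is a back-of-envelope sketch, not from the report itself; the 33.33 kWh/kg LHV of hydrogen and the 1 gge ≈ 1 kg H2 convention are standard assumptions:

```python
# Back-of-envelope check (assumptions: 1 gge ~ 1 kg H2, LHV of H2 = 33.33 kWh/kg)
LHV_KWH_PER_KG = 33.33        # lower heating value of hydrogen
efficiency_lhv = 0.72          # demonstrated cell efficiency (LHV basis)
electricity_price = 0.05       # $/kWh, from the economic analysis
total_cost_per_gge = 3.79      # $/gge, projected production cost

kwh_per_kg = LHV_KWH_PER_KG / efficiency_lhv          # ~46.3 kWh per kg H2
electricity_cost = kwh_per_kg * electricity_price     # ~$2.31 per kg
other_costs = total_cost_per_gge - electricity_cost   # capital, O&M, etc.

print(f"{kwh_per_kg:.1f} kWh/kg -> electricity ${electricity_cost:.2f}/gge, "
      f"other ${other_costs:.2f}/gge")
```

Under these assumptions, electricity accounts for roughly $2.31 of the $3.79/gge, which is consistent with the report's emphasis on electricity as the dominant operating cost.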

  7. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem, to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement over basic LQG control, whose computational cost scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.
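The linear-versus-quadratic scaling argument hinges on never forming the full n × n covariance. A minimal sketch of that idea (illustrative only, not the authors' RFC algorithm) is a measurement update with a rank-r factored covariance P ≈ A Aᵀ, which stores O(nr) numbers instead of O(n²):

```python
import numpy as np

# Sketch (not the paper's RFC algorithm): a measurement update with a
# rank-r factored covariance P ~ A @ A.T, so storage is O(n*r) not O(n^2).
rng = np.random.default_rng(0)
n, r, m = 1000, 20, 5            # unknowns, covariance rank, observations

A = rng.standard_normal((n, r))  # covariance factor: P = A @ A.T
H = rng.standard_normal((m, n))  # observation operator
R = 0.1 * np.eye(m)              # observation noise covariance
x = np.zeros(n)                  # prior state estimate
y = rng.standard_normal(m)       # observations

HA = H @ A                                   # m x r, never forms P
S = HA @ HA.T + R                            # innovation covariance, m x m
K = A @ np.linalg.solve(S, HA).T             # Kalman gain via the factor, n x m
x_post = x + K @ (y - H @ x)                 # updated state estimate

print(x_post.shape)  # (1000,)
```

All operations above cost O(nr) or O(nrm) rather than O(n²), which is the flavor of saving that makes large-n feedback control tractable.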

  8. Parallel multi-join query optimization algorithm for distributed sensor network in the internet of things

    NASA Astrophysics Data System (ADS)

    Zheng, Yan

    2015-03-01

The internet of things (IoT), which focuses on providing users with information exchange and intelligent control, has attracted a great deal of attention from researchers around the world since the beginning of this century. The IoT consists of a large number of sensor nodes and data-processing units, and its most important characteristics are energy constraints, efficient communication, and high redundancy. As the number of sensor nodes increases, communication efficiency and the available communication bandwidth become bottlenecks. Much existing research assumes that the number of joins is small; this assumption does not hold for the growing multi-join queries across the internet of things. To improve the communication efficiency between parallel units in a distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The algorithm accounts for storage information relations and network communication cost, and establishes an optimized information-exchange rule. Experimental results show that the algorithm performs well and effectively uses the resources of each node in the distributed sensor network, improving the execution efficiency of multi-join queries between nodes.
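The cost-graph idea can be illustrated with a toy greedy join-ordering pass. The relation sizes, selectivities, and the "ship the smaller relation" cost model below are hypothetical, not the paper's actual algorithm:

```python
# Illustrative sketch (not the paper's algorithm): greedy multi-join
# ordering that repeatedly picks the cheapest join edge from a cost
# graph, where edge cost models the data shipped between nodes.
sizes = {"R": 1000, "S": 200, "T": 5000, "U": 50}   # hypothetical tuple counts
# (relation pair) -> join selectivity, hypothetical values
selectivity = {("R", "S"): 0.01, ("S", "T"): 0.002, ("T", "U"): 0.05}

def join_cost(a, b):
    """Communication cost: ship the smaller relation to the larger's node."""
    return min(sizes[a], sizes[b])

order = []
remaining = dict(selectivity)
while remaining:
    # pick the joinable pair with the lowest shipping cost
    (a, b) = min(remaining, key=lambda e: join_cost(*e))
    order.append((a, b))
    merged = a + b                     # result relation replaces its inputs
    sizes[merged] = max(1, int(sizes[a] * sizes[b] * remaining.pop((a, b))))
    # rewire surviving cost-graph edges to the merged relation
    remaining = {tuple(merged if x in (a, b) else x for x in e): s
                 for e, s in remaining.items()}

print(order)
```

Here the smallest relations are joined first, so large intermediate results are never shipped across the network, which is the intuition behind minimizing communication cost on the graph.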

  9. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
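The cost argument generalizes beyond mesh movement: for any objective J(x(D)) where x solves a linear system K x = f(D), the adjoint approach replaces one linear solve per design variable with a single transposed solve plus cheap matrix-vector products. A sketch on a small linear model problem (illustrative only, not the NTRS flow solver):

```python
import numpy as np

# Sketch of the adjoint idea (linear model problem, not the NTRS solver):
# x solves K x = f(D); objective J = g . x. The tangent approach needs one
# solve per design variable; the adjoint needs a single solve with K^T.
rng = np.random.default_rng(1)
n, n_design = 50, 8

K = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned system
B = rng.standard_normal((n, n_design))            # df/dD (linear forcing)
g = rng.standard_normal(n)                        # dJ/dx

# Adjoint: one solve, then one matrix-vector product per design variable
lam = np.linalg.solve(K.T, g)
grad_adjoint = B.T @ lam                          # dJ/dD, shape (n_design,)

# Tangent approach: one linear solve per design variable
grad_direct = np.array([g @ np.linalg.solve(K, B[:, i])
                        for i in range(n_design)])

print(np.allclose(grad_adjoint, grad_direct))  # True
```

Both routes give identical gradients; the adjoint simply trades n_design solves for one, which is the saving the abstract describes for mesh sensitivities.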

  10. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  11. Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load

    NASA Astrophysics Data System (ADS)

    Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.

    2008-09-01

A superconducting magnetic energy storage system (SMES) has advantages such as rapid large-power response and high storage efficiency, which are superior to other energy storage systems. A flywheel motor generator (FWMG) has large-scale capacity and high reliability, and hence is widely used for large pulsed loads, although it has comparatively low storage efficiency due to its high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) imposes a large, long pulsed load that causes a frequency deviation in the utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load that combines the SMES and the FWMG with the utility power system. We evaluate the installation cost and frequency control performance of three power systems combined with energy storage devices: (i) SMES with utility power, (ii) FWMG with utility power, and (iii) both SMES and FWMG with utility power. The first system has excellent frequency control performance, but its installation cost is high. The second has inferior frequency control performance, but its installation cost is the lowest. The third has good frequency control performance, and its installation cost can be kept lower than that of the first system by adjusting the ratio between SMES and FWMG.

  12. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO 2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  13. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO 2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  14. Fabrication & characterization of thin film Perovskite solar cells under ambient conditions

    NASA Astrophysics Data System (ADS)

    Shah, Vivek T.

High-efficiency solar cells based on inorganic materials such as silicon have been commercialized and are used to harness energy from the sun and convert it into electrical energy. However, they are energy-intensive to produce and rigid. Thin-film solar cells based on inorganic-organic hybrid lead halide perovskite compounds have the potential to be a disruptive technology in the renewable energy sector. Perovskite solar cell (PSC) technology is a viable candidate for low-cost, large-scale production as it is solution processable at low temperature on flexible substrates. However, for commercialization, PSCs need to compete with the cost and efficiency of crystalline silicon solar cells. High-efficiency PSCs have been fabricated under highly controlled conditions in a glove-box, which adds to the cost of fabricating PSCs. This additional cost can be significantly reduced by eliminating the use of a glove-box during fabrication. Therefore, in this work, thin-film PSCs were fabricated under ambient conditions on glass substrates. A power conversion efficiency of 5.6% was achieved with optimized fabrication control and minimal exposure to moisture.

  15. Carbon Nanotube Integration with a CMOS Process

    PubMed Central

    Perez, Maximiliano S.; Lerner, Betiana; Resasco, Daniel E.; Pareja Obregon, Pablo D.; Julian, Pedro M.; Mandolesi, Pablo S.; Buffa, Fabian A.; Boselli, Alfredo; Lamagna, Alberto

    2010-01-01

This work shows the integration of a sensor based on carbon nanotubes using CMOS technology. A chip sensor (CS) was designed and manufactured using a 0.30 μm CMOS process, leaving a free window on the passivation layer that allowed the deposition of SWCNTs over the electrodes. Using the CS, we successfully investigated the effect of humidity and temperature on the electrical transport properties of the SWCNTs. The possibility of large-scale integration of SWCNTs with a CMOS process opens a new route to the design of more efficient, low-cost sensors with highly reproducible manufacture. PMID:22319330

  16. Toward large-scale solar energy systems with peak concentrations of 20,000 suns

    NASA Astrophysics Data System (ADS)

    Kribus, Abraham

    1997-10-01

    The heliostat field plays a crucial role in defining the achievable limits for central receiver system efficiency and cost. Increasing system efficiency, thus reducing the reflective area and system cost, can be achieved by increasing the concentration and the receiver temperature. The concentration achievable in central receiver plants, however, is constrained by current heliostat technology and design practices. The factors affecting field performance are surface and tracking errors, astigmatism, shadowing, blocking and dilution. These are geometric factors that can be systematically treated and reduced. We present improvements in collection optics and technology that may boost concentration (up to 20,000 peak), achievable temperature (2,000 K), and efficiency in solar central receiver plants. The increased performance may significantly reduce the cost of solar energy in existing applications, and enable solar access to new ultra-high-temperature applications, such as: future gas turbines approaching 60% combined cycle efficiency; high-temperature thermo-chemical processes; and gas-dynamic processes.
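For context, the quoted 20,000-sun peak can be compared against the thermodynamic limit for 3D concentration, C_max = 1/sin²θ_s, using the standard solar half-angle of about 4.65 mrad. This is a back-of-envelope check, not a calculation from the paper:

```python
import math

# Context check: thermodynamic limit for 3D solar concentration,
# C_max = 1 / sin^2(theta_s), with solar half-angle theta_s ~ 4.65 mrad.
theta_s = 4.65e-3                        # solar half-angle in radians
c_max = 1.0 / math.sin(theta_s) ** 2     # ~46,000 suns

print(f"Ideal limit: {c_max:,.0f} suns; "
      f"20,000 suns is {20_000 / c_max:.0%} of the limit")
```

The proposed peak of 20,000 suns is thus a little over 40% of the ideal limit, which illustrates how aggressive the collection-optics improvements described here are relative to conventional heliostat fields.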

  17. Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    PubMed Central

    Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.

    2010-01-01

    Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. 
Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
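The general scheduling idea, ordering jobs by a runtime estimate so that long jobs do not straggle on otherwise-idle paid instances, can be sketched as follows. The job runtimes and worker pool below are hypothetical and stand in for the Roundup runtime model:

```python
import heapq

# Sketch of the general idea (not the Roundup cost model): schedule jobs
# longest-first onto a fixed pool of cloud workers so long jobs do not
# straggle at the end, reducing paid-but-idle instance hours.
def makespan(jobs, n_workers, order):
    """Total wall time when jobs are assigned greedily to free workers."""
    workers = [0.0] * n_workers           # next-free time per worker
    heapq.heapify(workers)
    for runtime in order(jobs):
        start = heapq.heappop(workers)    # earliest-free worker
        heapq.heappush(workers, start + runtime)
    return max(workers)

jobs = [2.0, 9.0, 3.0, 1.0, 8.0, 2.0, 4.0, 6.0, 5.0]  # estimated runtimes (h)

as_submitted = makespan(jobs, 3, lambda j: j)                 # arbitrary order
lpt = makespan(jobs, 3, lambda j: sorted(j, reverse=True))    # longest first

print(as_submitted, lpt)
```

Even on this tiny example the longest-processing-time-first order finishes sooner, and on a paid cloud a shorter makespan with the same cluster translates directly into lower cost.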

  18. PEM Electrolyzer Incorporating an Advanced Low-Cost Membrane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamdan, Monjid

The Department of Energy (DOE) has identified hydrogen production by electrolysis of water at forecourt stations as a critical technology for the transition to the hydrogen economy; however, the cost of hydrogen produced by present commercially available electrolysis systems is considerably higher than the DOE 2015 and 2020 cost targets. Analyses of proton-exchange membrane (PEM) electrolyzer systems indicate that reductions in electricity consumption and electrolyzer stack and system capital cost are required to meet the DOE cost targets. The primary objective is to develop and demonstrate a cost-effective energy-based system for electrolytic generation of hydrogen. The goal is to increase PEM electrolyzer efficiency and to reduce electrolyzer stack and system capital cost to meet the DOE cost targets for distributed electrolysis. To accomplish this objective, work was conducted by a team consisting of Giner, Inc. (Giner), Virginia Polytechnic Institute and State University (VT), and the domnick hunter group, a subsidiary of Parker Hannifin (Parker). The project focused on four (4) key areas: (1) development of a high-efficiency, high-strength membrane; (2) development of a long-life cell separator; (3) scale-up of cell active area to 290 cm² (from 160 cm²); and (4) development of a prototype commercial electrolyzer system. In each of the key stack development areas, Giner and our team members conducted focused development in laboratory-scale hardware, with analytical support as necessary, followed by life-testing of the most promising candidate materials. Selected components were then scaled up and incorporated into low-cost scaled-up stack hardware. The project culminated in the fabrication and testing of a highly efficient electrolyzer system for production of 0.5 kg/hr hydrogen and validation of the stack and system in testing at the National Renewable Energy Laboratory (NREL).

  19. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
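The geometric core of mirror-aided calibration is mapping virtual (mirrored) observations back to real coordinates by reflecting across the mirror plane. A minimal sketch with a hypothetical plane follows; the paper's full method additionally handles the out-of-focus camera model:

```python
import numpy as np

# Geometric core of mirror-aided calibration (illustrative, not the
# paper's full method): reflect a point across a mirror plane n.x = d
# to map the virtual (mirrored) view back to real coordinates.
def reflect(p, n, d):
    """Reflect point p across the plane with unit normal n and offset d."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(p, n) - d) * n

# Hypothetical mirror: the plane z = 1
p_virtual = np.array([0.2, -0.5, 3.0])
p_real = reflect(p_virtual, n=[0.0, 0.0, 1.0], d=1.0)
print(p_real)  # [ 0.2 -0.5 -1. ]
```

Because reflection is an involution, applying it twice recovers the original point, which gives a quick self-check when debugging mirror-plane estimates.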

  20. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de

    2016-03-07

Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism.
A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered, while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early: in linear-chain compounds, at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  1. Development of a millimetrically scaled biodiesel transesterification device that relies on droplet-based co-axial fluidics

    NASA Astrophysics Data System (ADS)

    Yeh, S. I.; Huang, Y. C.; Cheng, C. H.; Cheng, C. M.; Yang, J. T.

    2016-07-01

In this study, we investigated a fluidic system that adheres to new concepts of energy production. To improve efficiency, cost, and ease of manufacture, a millimetrically scaled device that employs a droplet-based co-axial fluidic system was devised to carry out alkali-catalyzed transesterification for biodiesel production. The large surface-to-volume ratio of the droplet-based system, and the internal circulation induced inside the moving droplets, significantly enhanced the reaction rate of the immiscible liquids used here, soybean oil and methanol. This device also decreased the molar ratio between methanol and oil to near the stoichiometric coefficients of the balanced chemical equation, which increased the total volume of biodiesel produced and decreased the costs of purification and recovery of excess methanol. In this work, the droplet-based co-axial fluidic system performed better than other methods of continuous-flow production, achieving an efficiency much greater than that of previously reported systems. This study demonstrates the high potential of droplet-based fluidic chips for energy production. The low energy consumption and low cost of the described system, which produces highly purified biodiesel, conform to the requirements of distributed energy (inexpensive production at moderate scale).

  2. Large Scale Triboelectric Nanogenerator and Self-Powered Pressure Sensor Array Using Low Cost Roll-to-Roll UV Embossing

    PubMed Central

    Dhakar, Lokesh; Gudla, Sudeep; Shan, Xuechuan; Wang, Zhiping; Tay, Francis Eng Hock; Heng, Chun-Huat; Lee, Chengkuo

    2016-01-01

Triboelectric nanogenerators (TENGs) have emerged as a potential solution for mechanical energy harvesting over conventional piezoelectric and electromagnetic mechanisms, owing to their easy fabrication, high efficiency and wider choice of materials. Traditional fabrication techniques used to realize TENGs involve plasma etching, soft lithography and nanoparticle deposition for higher performance. However, the lack of truly scalable fabrication processes remains a critical bottleneck on the path to bringing TENGs to commercial production. In this paper, we demonstrate fabrication of a large-scale triboelectric nanogenerator (LS-TENG) using roll-to-roll ultraviolet embossing to pattern polyethylene terephthalate sheets. These LS-TENGs can be used to harvest energy from human motion and vehicle motion via devices embedded in floors and roads, respectively. The LS-TENG generated a power density of 62.5 mW m−2. Using the roll-to-roll processing technique, we also demonstrate a large-scale triboelectric pressure sensor array with a pressure detection sensitivity of 1.33 V kPa−1. The large-scale pressure sensor array has applications in self-powered motion tracking, posture monitoring and electronic skin. This work demonstrates scalable fabrication of TENGs and self-powered pressure sensor arrays at extremely low cost, bringing them closer to commercial production. PMID:26905285

  3. Rational design of competitive electrocatalysts for the oxygen reduction reaction in hydrogen fuel cells

    NASA Astrophysics Data System (ADS)

    Stolbov, Sergey; Alcántara Ortigoza, Marisol

    2012-02-01

The large-scale application of one of the most promising clean and renewable sources of energy, hydrogen fuel cells, still awaits efficient and cost-effective electrocatalysts for the oxygen reduction reaction (ORR) occurring at the cathode. We demonstrate that truly rational design yields electrocatalysts possessing both qualities. By unifying knowledge of surface morphology, composition, electronic structure and reactivity, we find that sandwich-like structures are an excellent choice for optimization. Their constituent species couple synergistically, yielding reaction-environment stability, cost-effectiveness and tunable reactivity. This cooperative-action concept enabled us to predict two advantageous ORR electrocatalysts. Density functional theory calculations of the reaction free-energy diagrams confirm that these materials are more active toward the ORR than the best Pt-based catalysts to date. Our design concept also advances a general approach for engineering materials in heterogeneous catalysis.

  4. Improved Efficient Routing Strategy on Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Liang, Man-Gui

Since the betweenness of nodes in complex networks can theoretically represent the traffic load of nodes under the routing strategy in use, we propose an improved efficient (IE) routing strategy to enhance the network traffic capacity based on betweenness centrality. Any node with the highest betweenness is susceptible to traffic congestion. An efficient way to improve network traffic capacity is to redistribute the heavy traffic load from these central nodes to non-central nodes. In this paper, we first define a path cost function as the sum of node betweenness, with a tunable parameter β, along the actual path. Then, by minimizing the path cost, our IE routing strategy achieves a clear improvement in network transport efficiency. Simulations on scale-free Barabási-Albert (BA) networks confirm the effectiveness of our strategy compared with the efficient routing (ER) and shortest path (SP) routing strategies.
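The IE cost function and its minimization can be sketched directly: route along the path minimizing the sum of node betweenness raised to β over the traversed nodes, via Dijkstra with node weights. The tiny graph and betweenness values below are hypothetical:

```python
import heapq

# Sketch of the IE idea: route along paths minimizing the sum of node
# betweenness raised to a tunable beta, steering traffic off hubs.
# The graph and betweenness values below are hypothetical.
adj = {"a": ["h", "x"], "h": ["a", "b"], "x": ["a", "y"],
       "y": ["x", "b"], "b": ["h", "y"]}
betweenness = {"a": 1.0, "h": 9.0, "x": 2.0, "y": 2.0, "b": 1.0}

def ie_path(src, dst, beta=1.0):
    """Dijkstra where entering node v costs betweenness[v] ** beta."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, v, path = heapq.heappop(pq)
        if v == dst:
            return path
        if v in seen:
            continue
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                heapq.heappush(pq, (cost + betweenness[w] ** beta, w, path + [w]))
    return None

print(ie_path("a", "b", beta=1.0))  # avoids the hub: ['a', 'x', 'y', 'b']
print(ie_path("a", "b", beta=0.0))  # beta=0 reduces to hop count: via 'h'
```

With β > 0 the route detours around the high-betweenness hub `h`; with β = 0 every node costs the same and the strategy degenerates to shortest-path routing, matching the role of the tunable parameter in the abstract.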

  5. Scaling up malaria intervention "packages" in Senegal: using cost effectiveness data for improving allocative efficiency and programmatic decision-making.

    PubMed

    Faye, Sophie; Cico, Altea; Gueye, Alioune Badara; Baruwa, Elaine; Johns, Benjamin; Ndiop, Médoune; Alilio, Martin

    2018-04-10

    Senegal's National Malaria Control Programme (NMCP) implements control interventions in the form of targeted packages: (1) scale-up for impact (SUFI), which includes bed nets, intermittent preventive treatment in pregnancy, rapid diagnostic tests, and artemisinin combination therapy; (2) SUFI + reactive case investigation (focal test and treat); (3) SUFI + indoor residual spraying (IRS); (4) SUFI + seasonal malaria chemoprophylaxis (SMC); and, (5) SUFI + SMC + IRS. This study estimates the cost effectiveness of each of these packages to provide the NMCP with data for improving allocative efficiency and programmatic decision-making. This study is a retrospective analysis for the period 2013-2014 covering all 76 Senegal districts. The yearly implementation cost for each intervention was estimated and the information was aggregated into a package cost for all covered districts. The change in the burden of malaria associated with each package was estimated using the number of disability adjusted life-years (DALYs) averted. The cost effectiveness (cost per DALY averted) was then calculated for each package. The cost per DALY averted ranged from $76 to $1591 across packages. Using World Health Organization standards, 4 of the 5 packages were "very cost effective" (less than Senegal's GDP per capita). Relative to the 2 other packages implemented in malaria control districts, the SUFI + SMC package was the most cost-effective package at $76 per DALY averted. SMC seems to make IRS more cost effective: $582 per DALY averted for SUFI + IRS compared with $272 for the SUFI + IRS + SMC package. The SUFI + focal test and treat, implemented in malaria elimination districts, had a cost per DALY averted of $1591 and was only "cost-effective" (less than three times Senegal's per capita GDP). Senegal's choice of deploying malaria interventions by packages seems to be effectively targeting high burden areas with a wide range of interventions. 
However, not all districts showed the same level of performance, indicating that efficiency gains are still possible.
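The WHO-style classification used in the study is straightforward to reproduce from the quoted figures. The Senegal GDP-per-capita value below is an approximate placeholder (an assumption, not from the abstract), and only the four packages with stated costs are included:

```python
# WHO-CHOICE style classification as applied in the study. Package costs
# per DALY averted are from the abstract; the GDP-per-capita figure is an
# approximate placeholder for Senegal in 2013-2014 (assumption).
GDP_PER_CAPITA = 1360  # USD, approximate

packages = {"SUFI + SMC": 76, "SUFI + IRS + SMC": 272,
            "SUFI + IRS": 582, "SUFI + focal test and treat": 1591}

def classify(cost_per_daly, gdp=GDP_PER_CAPITA):
    """WHO thresholds: < 1x GDP 'very cost effective', < 3x 'cost effective'."""
    if cost_per_daly < gdp:
        return "very cost effective"
    if cost_per_daly < 3 * gdp:
        return "cost effective"
    return "not cost effective"

for name, cost in packages.items():
    print(f"{name}: ${cost}/DALY -> {classify(cost)}")
```

Under these thresholds the three packages below the GDP-per-capita line come out "very cost effective", while the focal test and treat package at $1591/DALY lands in the "cost effective" band, matching the abstract's conclusions.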

  6. Nonepitaxial Thin-Film InP for Scalable and Efficient Photocathodes.

    PubMed

    Hettick, Mark; Zheng, Maxwell; Lin, Yongjing; Sutter-Fella, Carolin M; Ager, Joel W; Javey, Ali

    2015-06-18

    To date, some of the highest performance photocathodes of a photoelectrochemical (PEC) cell have been shown with single-crystalline p-type InP wafers, exhibiting half-cell solar-to-hydrogen conversion efficiencies of over 14%. However, the high cost of single-crystalline InP wafers may present a challenge for future large-scale industrial deployment. Analogous to solar cells, a thin-film approach could address the cost challenges by utilizing the benefits of the InP material while decreasing the use of expensive materials and processes. Here, we demonstrate this approach, using the newly developed thin-film vapor-liquid-solid (TF-VLS) nonepitaxial growth method combined with an atomic-layer deposition protection process to create thin-film InP photocathodes with large grain size and high performance, in the first reported solar device configuration generated by materials grown with this technique. Current-voltage measurements show a photocurrent (29.4 mA/cm(2)) and onset potential (630 mV) approaching single-crystalline wafers and an overall power conversion efficiency of 11.6%, making TF-VLS InP a promising photocathode for scalable and efficient solar hydrogen generation.

  7. Co-Cultivation of Fungal and Microalgal Cells as an Efficient System for Harvesting Microalgal Cells, Lipid Production and Wastewater Treatment

    PubMed Central

    Wrede, Digby; Taha, Mohamed; Miranda, Ana F.; Kadali, Krishna; Stevenson, Trevor; Ball, Andrew S.; Mouradov, Aidyn

    2014-01-01

The challenges facing the large-scale microalgal industry are associated with the high cost of key operations such as harvesting, nutrient supply and oil extraction. The high energy input for harvesting makes current commercial microalgal biodiesel production economically unfeasible and can account for up to 50% of the total cost of biofuel production. Co-cultivation of fungal and microalgal cells is attracting increasing attention because of the high efficiency of bio-flocculation of microalgal cells, which requires no added chemicals and only low energy inputs. Moreover, some fungal and microalgal strains are well known for their exceptional ability to purify wastewater, generating biomass that represents a renewable and sustainable feedstock for biofuel production. We have screened the flocculation efficiency of the filamentous fungus A. fumigatus against 11 microalgae representing freshwater, marine, small (5 µm), large (over 300 µm), heterotrophic, photoautotrophic, motile and non-motile strains. Some of the strains are commercially used for biofuel production. Lipid production and composition were analysed in fungal-algal pellets grown on media containing alternative carbon, nitrogen and phosphorus sources contained in wheat straw and swine wastewater, respectively. Co-cultivation of algae and A. fumigatus cells showed additive and synergistic effects on biomass production, lipid yield and wastewater bioremediation efficiency. Analysis of the fungal-algal pellets' fatty acid composition suggested that it can be tailored and optimised through co-cultivating different algae and fungi without the need for genetic modification. PMID:25419574

  8. Co-cultivation of fungal and microalgal cells as an efficient system for harvesting microalgal cells, lipid production and wastewater treatment.

    PubMed

    Wrede, Digby; Taha, Mohamed; Miranda, Ana F; Kadali, Krishna; Stevenson, Trevor; Ball, Andrew S; Mouradov, Aidyn

    2014-01-01

    The challenges facing the large-scale microalgal industry are associated with the high cost of key operations such as harvesting, nutrient supply and oil extraction. The high energy input for harvesting makes current commercial microalgal biodiesel production economically unfeasible and can account for up to 50% of the total cost of biofuel production. Co-cultivation of fungal and microalgal cells is attracting increasing attention because of the high efficiency of bio-flocculation of microalgal cells, which requires no added chemicals and little energy input. Moreover, some fungal and microalgal strains are well known for their exceptional ability to purify wastewater, generating biomass that represents a renewable and sustainable feedstock for biofuel production. We screened the flocculation efficiency of the filamentous fungus A. fumigatus against 11 microalgae representing freshwater, marine, small (5 µm), large (over 300 µm), heterotrophic, photoautotrophic, motile and non-motile strains, some of which are used commercially for biofuel production. Lipid production and composition were analysed in fungal-algal pellets grown on media containing alternative carbon, nitrogen and phosphorus sources derived from wheat straw and swine wastewater, respectively. Co-cultivation of algae and A. fumigatus cells showed additive and synergistic effects on biomass production, lipid yield and wastewater bioremediation efficiency. Analysis of the fungal-algal pellets' fatty acid composition suggested that it can be tailored and optimised by co-cultivating different algae and fungi without the need for genetic modification.

  9. Ultrafast and large scale preparation of superior catalyst for oxygen evolution reaction

    NASA Astrophysics Data System (ADS)

    Tian, Xianqing; Liu, Yunhua; Xiao, Dan; Sun, Jie

    2017-10-01

    The development of efficient and earth-abundant catalysts for the oxygen evolution reaction (OER) is a key challenge for the renewable energy research community. Here, we report a facile and ultrafast route to immobilize nickel-iron layered double hydroxide (NiFe-LDH) nanoparticles on nickel foam (NF) by soaking a directly electroless-deposited Prussian blue analogue (PBA) on NF in 1 M KOH. This NiFe-LDH/NF electrode can be prepared in a few seconds without further treatment. It has a three-dimensional interpenetrating network, originating from its PBA precursor, which facilitates the diffusion and adsorption/desorption of reactants and products during the OER. Characterization of the Faradaic efficiency and forced-convection tests provide direct evidence for the formation of free intermediate(s) in the OER process. The electrode (typically NiFe-LDH-20s/NF) exhibits outstanding electrocatalytic activity, with a low overpotential of ∼0.240 V at 10 mA cm⁻², a low Tafel slope of 38 mV dec⁻¹, and excellent stability. This facile approach affords a new strategy for the large-scale manufacture of low-cost, effective and robust OER electrodes.

  10. High Efficiency Solar Thermochemical Reactor for Hydrogen Production.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDaniel, Anthony H.

    2017-09-30

    This research and development project is focused on advancing a technology that produces hydrogen at a cost competitive with fossil-based transportation fuels. A two-step, solar-driven water-splitting (WS) thermochemical cycle is theoretically capable of achieving a solar-to-hydrogen (STH) conversion ratio exceeding the DOE target of 26% at a scale large enough to support an industrialized economy [1]. The challenge is to transition this technology from the laboratory to the marketplace and produce hydrogen at a cost that meets or exceeds DOE targets.

  11. Cost and cost-effectiveness of nationwide school-based helminth control in Uganda

    PubMed Central

    BROOKER, SIMON; KABATEREINE, NARCIS B; FLEMING, FIONA; DEVLIN, NANCY

    2009-01-01

    Estimates of cost and cost-effectiveness are typically based on a limited number of small-scale studies, with no investigation of the existence of economies of scale or of intra-country variation in cost and cost-effectiveness. This information gap hinders the efficient allocation of health care resources and the ability to generalize estimates to other settings. The current study investigates the intra-country variation in the cost and cost-effectiveness of nationwide school-based treatment of helminth (worm) infection in Uganda. Programme cost data were collected through semi-structured interviews with district officials and from accounting records in six of the 23 intervention districts. Both financial and economic costs were assessed. Costs were estimated as the cost in US$ per schoolchild treated, and an incremental cost-effectiveness ratio (cost in US$ per case of anaemia averted) was used to evaluate programme cost-effectiveness. Sensitivity analysis was performed to assess the effect of the discount rate and drug price. The overall economic cost per child treated in the six districts was US$ 0.54 and the cost-effectiveness was US$ 3.19 per case of anaemia averted. Analysis indicated that estimates of both cost and cost-effectiveness differ markedly with the total number of children treated, indicating economies of scale. There was also substantial variation between districts in the cost per individual treated (US$ 0.41-0.91) and the cost per anaemia case averted (US$ 1.70-9.51). Independent variables were shown to be statistically associated with both sets of estimates. This study highlights the potential bias in transferring data across settings without understanding the nature of the observed variations. PMID:18024966
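    The incremental cost-effectiveness ratio (ICER) used in this record divides the extra cost of the programme by the extra health effect it produces. A minimal sketch with hypothetical intermediate numbers (the per-district inputs are not reported in this abstract) that reproduces the order of magnitude of the US$ 3.19 figure:

```python
def icer(cost_intervention, cost_comparator, effect_intervention, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    delta_cost = cost_intervention - cost_comparator
    delta_effect = effect_intervention - effect_comparator
    if delta_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical illustration: a programme costing $5,400 more than doing nothing
# that averts 1,693 cases of anaemia yields ~$3.19 per case averted.
print(round(icer(5400.0, 0.0, 1693, 0), 2))
```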

  12. Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Lang, Michael; Pakin, Scott

    2011-09-30

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to use this communication hierarchy efficiently and hence optimize performance. We consider here the class of applications that contains wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies arise between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.

  13. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J; Lang, Michael; Pakin, Scott

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to use this communication hierarchy efficiently and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies arise between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model, and an implementation on the petascale Roadrunner system demonstrates a 27% performance improvement at full system scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
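    The dependency structure both records describe can be made concrete: cell (i, j) of a grid may be processed only after its upstream neighbors (i-1, j) and (i, j-1), so all cells on the same anti-diagonal are mutually independent and form one parallel "wave". A minimal sketch of this basic ordering (not the authors' hierarchical variant, which additionally groups waves to favor on-chip communication):

```python
def wavefront_order(rows, cols):
    """Group grid cells into anti-diagonal wavefronts. Every cell in wave d
    depends only on cells in waves < d, so each wave can run in parallel."""
    waves = []
    for d in range(rows + cols - 1):
        wave = [(i, d - i) for i in range(rows) if 0 <= d - i < cols]
        waves.append(wave)
    return waves

# A 3x3 grid sweeps in 5 waves: 1, 2, 3, 2, 1 cells wide.
for wave in wavefront_order(3, 3):
    print(wave)
```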

  14. Large-scale Eucalyptus energy farms and power cogeneration

    Treesearch

    Robert C. Noroña

    1983-01-01

    A thorough evaluation of all factors possibly affecting a large-scale planting of eucalyptus is foremost in determining the cost-effectiveness of the planned operation. Seven basic areas of concern must be analyzed: 1. Species Selection, 2. Site Preparation, 3. Planting, 4. Weed Control, 5. ...

  15. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    PubMed Central

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on the large-scale cultivation of microalgae using photo-bioreactors and pond systems, research on establishing high-performance downstream dewatering operations for large-scale processing at optimal economy is limited. The enormous energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to developing the biomass quantities needed for industrial-scale microalgal biofuel production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells in suspension create a significant processing cost during dewatering, raising major concerns about the economic viability of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework for assessing the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg of CO2 emissions and a cost of $0.0043 for producing 1 kg of microalgal biomass. A streamlined process for the operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  16. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecification of the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecification of the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
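    For intuition, the classical single-outcome special case of this budget-constrained optimization has a closed form: the variance-minimizing number of persons per cluster depends only on the cluster-to-person cost ratio and the ICC. A sketch under that textbook assumption (this is a simplification, not the paper's bivariate cost-effectiveness formulas):

```python
import math

def optimal_cluster_size(cluster_cost, person_cost, icc):
    """Classical optimal number of persons per cluster for a fixed budget:
    n* = sqrt((c_cluster / c_person) * (1 - rho) / rho).
    Cheap persons or a low ICC push toward larger clusters."""
    return math.sqrt((cluster_cost / person_cost) * (1.0 - icc) / icc)

# Hypothetical costs: $200 to recruit a cluster, $20 per person, ICC = 0.05
print(round(optimal_cluster_size(200.0, 20.0, 0.05), 1))
```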

  17. Influence of injection temperatures and fiberglass compositions on mechanical properties of polypropylene

    NASA Astrophysics Data System (ADS)

    Keey, Tony Tiew Chun; Azuddin, M.

    2017-06-01

    Injection molding is currently one of the most suitable manufacturing processes for the mass production of polymeric parts, owing to its cost efficiency in large-scale production. When products and components are scaled down, however, the limits of conventional injection molding are reached. These constraints initiated the development of conventional injection molding into a new era of micro injection molding technology. In this study, polypropylene (PP) reinforced with various percentages of fiberglass was used. The study started with the fabrication of micro tensile specimens at three injection temperatures, 260 °C, 270 °C and 280 °C, for different weight percentages of fiberglass-reinforced PP. The effects of injection temperature on the tensile properties of the micro tensile specimens were then evaluated. Different weight percentages of fiberglass-reinforced PP were also tested, and 20% fiberglass-reinforced PP showed the greatest percentage increase in tensile strength with increasing temperature.

  18. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE PAGES

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.; ...

    2017-08-18

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters and high-volume, continuous reservoirs, such as deep saline formations, to function as geologic sinks for carbon storage. Though these models have provided unique insight into the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is ever to achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of SimCCUS is the incorporation of stacked geological reservoir systems, with explicit consideration of the processes and costs associated with operating multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  19. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters and high-volume, continuous reservoirs, such as deep saline formations, to function as geologic sinks for carbon storage. Though these models have provided unique insight into the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is ever to achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of SimCCUS is the incorporation of stacked geological reservoir systems, with explicit consideration of the processes and costs associated with operating multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  20. An innovative integrated oxidation ditch with vertical circle (IODVC) for wastewater treatment.

    PubMed

    Xia, Shi-bin; Liu, Jun-xin

    2004-01-01

    The oxidation ditch process is economical and efficient for wastewater treatment, but its application is limited where land is costly, owing to the large land area it requires. An innovative integrated oxidation ditch with vertical circle (IODVC) system was developed to treat domestic and industrial wastewater while saving land area. The system consists of a single channel divided by a plate into two ditches (a top ditch and a bottom ditch), a brush, and an innovative integral clarifier. Unlike the horizontal circulation of a conventional oxidation ditch, the flow in the IODVC system recycles from the top zone to the bottom zone in a vertical circle as the brush runs; the IODVC thus reduces the required land area by about 50% compared with a conventional oxidation ditch with an intrachannel clarifier. The innovative integral clarifier is effective for liquid-solids separation and is preferably positioned at the end of the ditch opposite the brush. It does not affect the hydrodynamic characteristics of the mixed liquor in the ditch, and the sludge can return automatically to the bottom ditch without any pump. In this study, experiments on domestic and dye wastewater treatment were carried out at bench scale and at full scale, respectively. Results clearly showed that the IODVC efficiently removed pollutants from the wastewaters: average COD removals for domestic and dye wastewater treatment were 95% and 90%, respectively. The IODVC process may thus provide a cost-effective way to treat dye wastewater at full scale.

  1. Highly parallel single-molecule amplification approach based on agarose droplet polymerase chain reaction for efficient and cost-effective aptamer selection.

    PubMed

    Zhang, Wei Yun; Zhang, Wenhua; Liu, Zhiyuan; Li, Cong; Zhu, Zhi; Yang, Chaoyong James

    2012-01-03

    We have developed a novel method for efficiently screening affinity ligands (aptamers) from a complex single-stranded DNA (ssDNA) library by employing single-molecule emulsion polymerase chain reaction (PCR) based on agarose droplet microfluidic technology. In a typical systematic evolution of ligands by exponential enrichment (SELEX) process, the enriched library is sequenced first, and tens to hundreds of aptamer candidates are analyzed via a bioinformatic approach. Possible candidates are then chemically synthesized, and their binding affinities are measured individually. Such a process is time-consuming, labor-intensive, inefficient, and expensive. To address these problems, we have developed a highly efficient single-molecule approach for aptamer screening using our agarose droplet microfluidic technology. Statistically diluted ssDNA from a pre-enriched library, evolved through conventional SELEX against the cancer biomarker Shp2 protein, was encapsulated into individual uniform agarose droplets for droplet PCR to generate clonal agarose beads. The binding capacity of the amplified ssDNA from each clonal bead was then screened via high-throughput fluorescence cytometry. DNA clones with high binding capacity and low Kd were chosen as aptamers and can be used directly for downstream biomedical applications. We have identified an ssDNA aptamer that selectively recognizes Shp2 with a Kd of 24.9 nM. Compared to a conventional sequencing-chemical synthesis-screening workflow, our approach avoids large-scale DNA sequencing and the expensive, time-consuming DNA synthesis of large populations of DNA candidates. The agarose droplet microfluidic approach is thus highly efficient and cost-effective for molecular evolution approaches and will find wide application in molecular evolution technologies, including mRNA display, phage display, and so on. © 2011 American Chemical Society
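    The statistical dilution step relies on Poisson loading: template molecules distribute randomly across droplets, so the mean occupancy is chosen low enough that a droplet rarely contains more than one template. A hedged illustration of the occupancy arithmetic (the actual dilution used in the study is not specified in this record):

```python
import math

def poisson_pmf(k, lam):
    """P(a droplet contains exactly k template molecules) under Poisson loading
    with mean occupancy lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# At a mean occupancy of 0.1 templates per droplet, ~9% of droplets carry
# exactly one template while fewer than 0.5% carry more than one, which is
# why libraries are statistically diluted before encapsulation.
lam = 0.1
p_one = poisson_pmf(1, lam)
p_multi = 1.0 - poisson_pmf(0, lam) - p_one
print(round(p_one, 4), round(p_multi, 4))
```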

  2. Active Self-Testing Noise Measurement Sensors for Large-Scale Environmental Sensor Networks

    PubMed Central

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-01-01

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology for steering environmental policy. However, the high cost of certified environmental microphone sensors renders large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce costs, but these sensors have higher failure rates and produce lower-quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its own condition and, indirectly, the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our middle-range-value sensor (around €50) effectively detected all experienced malfunctions, in laboratory tests and outdoor deployments, with few false positives. Future improvements could lower the cost of the sensor below €10. PMID:24351634

  3. SOx/NOx Removal from Flue Gas Streams by Solid Adsorbents: A Review of Current Challenges and Future Directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezaei, Fateme; Rownaghi, Ali A.; Monjezi, Saman

    One of the main challenges in the power and chemical industries is to remove generated toxic or environmentally harmful gases before atmospheric emission. To comply with stringent environmental and pollutant emissions control regulations, coal-fired power plants must be equipped with new technologies that are efficient and less energy-intensive than status quo technologies for flue gas cleanup. While conventional sulfur oxide (SOx) and nitrogen oxide (NOx) removal technologies benefit from their large-scale implementation and maturity, they are quite energy-intensive. In view of this, the development of lower-cost, less energy-intensive technologies could offer an advantage. Significant energy and cost savings can potentially be realized by using advanced adsorbent materials. One of the major barriers to the development of such technologies remains the development of materials that are efficient and productive in removing flue gas contaminants. In this review, adsorption-based removal of SOx/NOx impurities from flue gas is discussed, with a focus on important attributes of the solid adsorbent materials as well as implementation of the materials in conventional and emerging acid gas removal technologies. The requirements for effective adsorbents are noted with respect to their performance, key limitations, and suggested future research directions. The final section includes some key areas for future research and provides a possible roadmap for the development of technologies for the removal of flue gas impurities that are more efficient and cost-effective than status quo approaches.

  4. What Determines HIV Prevention Costs at Scale? Evidence from the Avahan Programme in India.

    PubMed

    Lépine, Aurélia; Chandrashekar, Sudhashree; Shetty, Govindraj; Vickerman, Peter; Bradley, Janet; Alary, Michel; Moses, Stephen; Vassall, Anna

    2016-02-01

    Expanding essential health services through non-government organisations (NGOs) is a central strategy for achieving universal health coverage in many low-income and middle-income countries. Human immunodeficiency virus (HIV) prevention services for key populations are commonly delivered through NGOs and have been demonstrated to be cost-effective and of substantial global public health importance. However, funding for HIV prevention remains scarce, and there are growing calls internationally to improve the efficiency of HIV prevention programmes as a key strategy for reaching global HIV targets. To date, there is limited evidence on the determinants of the costs of HIV prevention delivered through NGOs; thus, policymakers have little guidance on how best to design programmes that are both effective and efficient. We collected economic cost data from the Indian Avahan initiative, the largest HIV prevention project conducted globally, during the first 4 years of its implementation. We use a fixed-effect panel estimator and a random-intercept model to investigate the determinants of average cost. We find that programme design choices such as NGO scale, the extent of community involvement, the way in which support is offered to NGOs and how clinical services are organised substantially impact average cost in a grant-based payment setting. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  5. Sharing the cost of river basin adaptation portfolios to climate change: Insights from social justice and cooperative game theory

    NASA Astrophysics Data System (ADS)

    Girard, Corentin; Rinaudo, Jean-Daniel; Pulido-Velazquez, Manuel

    2016-10-01

    The adaptation of water resource systems to the potential impacts of climate change requires mixed portfolios of supply- and demand-side adaptation measures. The issue is not only to select efficient, robust, and flexible adaptation portfolios but also to find equitable strategies for allocating costs among the stakeholders. Our work addresses this cost allocation problem by applying two different theoretical approaches, social justice and cooperative game theory, to a real case study. First, a cost-effective portfolio of adaptation measures at the basin scale is selected using a least-cost optimization model. Cost allocation solutions are then defined based on economic rationality concepts from cooperative game theory (the Core). Second, interviews are conducted to characterize stakeholders' perceptions of the social justice principles associated with alternative cost allocation rules. The comparison of the cost allocation scenarios yields contrasting insights that inform the decision-making process at the river basin scale and can help reap the efficiency gains from cooperation in the design of river basin adaptation portfolios.
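    A cost allocation lies in the Core when no coalition of stakeholders pays more than its stand-alone cost, so no group has an incentive to defect from the basin-wide portfolio. A minimal sketch with a hypothetical three-stakeholder cost game (names and numbers are invented for illustration):

```python
from itertools import combinations

def in_core(players, coalition_cost, allocation):
    """Check Core membership for a cost game: the allocation must be efficient
    (it sums to the grand-coalition cost) and no coalition may be charged more
    than its stand-alone cost."""
    grand = tuple(sorted(players))
    if abs(sum(allocation.values()) - coalition_cost[grand]) > 1e-9:
        return False
    for r in range(1, len(players)):
        for coalition in combinations(grand, r):
            if sum(allocation[p] for p in coalition) > coalition_cost[coalition] + 1e-9:
                return False
    return True

# Hypothetical basin: agriculture (A), a city (C) and industry (I) share the
# cost of a joint adaptation portfolio that is cheaper than acting alone.
costs = {('A',): 60, ('C',): 50, ('I',): 40,
         ('A', 'C'): 90, ('A', 'I'): 80, ('C', 'I'): 70,
         ('A', 'C', 'I'): 105}
print(in_core(['A', 'C', 'I'], costs, {'A': 45, 'C': 35, 'I': 25}))  # True
print(in_core(['A', 'C', 'I'], costs, {'A': 70, 'C': 20, 'I': 15}))  # False: A pays > 60
```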

  6. Local unitary transformation method for large-scale two-component relativistic calculations: case for a one-electron Dirac Hamiltonian.

    PubMed

    Seino, Junji; Nakai, Hiromi

    2012-06-28

    An accurate and efficient scheme for two-component relativistic calculations at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level is presented. The present scheme, termed local unitary transformation (LUT), is based on the locality of the relativistic effect. Numerical assessments of the LUT scheme were performed on diatomic molecules such as HX and X2 (X = F, Cl, Br, I, and At) and on hydrogen halide clusters (HX)n (X = F, Cl, Br, and I). Total energies obtained by the LUT method agree well with conventional IODKH results. The computational cost of the LUT method is drastically lower than that of conventional methods, since the former scales linearly with system size and has a small prefactor.

  7. Generating multi-photon W-like states for perfect quantum teleportation and superdense coding

    NASA Astrophysics Data System (ADS)

    Li, Ke; Kong, Fan-Zhen; Yang, Ming; Ozaydin, Fatih; Yang, Qing; Cao, Zhuo-Liang

    2016-08-01

    An interesting aspect of multipartite entanglement is that, for perfect teleportation and superdense coding, it is not the maximally entangled W states but a special class of non-maximally entangled W-like states that are required. Efficient preparation of such W-like states is therefore of great importance in quantum communications, yet it has not been studied as extensively as the preparation of W states. In this paper, we propose a simple optical scheme for the efficient preparation of large-scale polarization-based entangled W-like states by fusing two W-like states or expanding a W-like state with an ancilla photon. Our scheme can also generate large-scale W states by fusing or expanding W or even W-like states. The cost analysis shows that, in generating large-scale W states, the fusion mechanism achieves higher efficiency with non-maximally entangled W-like states than with maximally entangled W states. Our scheme can also start fusion or expansion from Bell states, and it is composed of a polarization-dependent beam splitter, two polarizing beam splitters and photon detectors. Requiring no ancilla photons or controlled gates to operate, the scheme can be realized with current photonics technology, and we believe it enables advances in quantum teleportation and superdense coding in multipartite settings.

  8. Genome Partitioner: A web tool for multi-level partitioning of large-scale DNA constructs for synthetic biology applications.

    PubMed

    Christen, Matthias; Del Medico, Luca; Christen, Heinz; Christen, Beat

    2017-01-01

    Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, the efficient design and higher-order assembly of genome-scale DNA constructs remain labor-intensive. Given this complexity, computer-assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software, implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult-to-synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner in reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable for translating DNA designs into ready-to-order sequences that can be assembled with standardized protocols, offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.
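    The multi-level partitioning idea can be illustrated by recursively splitting a design until every block fits a synthesis size limit. The sketch below is a toy midpoint splitter, not the Genome Partitioner's actual algorithm (which also optimizes junction sequences and assembly routes):

```python
def partition(seq, max_block):
    """Recursively halve a DNA sequence until every block is <= max_block bases,
    yielding blocks in order so they can be reassembled hierarchically."""
    if len(seq) <= max_block:
        return [seq]
    mid = len(seq) // 2
    return partition(seq[:mid], max_block) + partition(seq[mid:], max_block)

# A hypothetical 2 kb design with a 300 bp synthesis limit splits into
# eight 250 bp blocks (2000 -> 1000 -> 500 -> 250).
blocks = partition("ATGC" * 500, 300)
print(len(blocks), max(len(b) for b in blocks))
```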

  9. A Short Progress Report on High-Efficiency Perovskite Solar Cells.

    PubMed

    Tang, He; He, Shengsheng; Peng, Chuangwei

    2017-12-01

    Faced with the increasingly serious energy and environmental crises in the world today, the development of renewable energy has attracted growing attention from all countries. Solar energy, as an abundant and cheap resource, is one of the most promising renewable energy sources. While high-performance solar cells have been well developed over the last couple of decades, their high module cost largely hinders the wide deployment of photovoltaic devices. In the last 10 years, this urgent demand for cost-effective solar cells has greatly stimulated solar cell research. This paper reviews recent developments in cost-effective, high-efficiency solar cell technologies, focusing on low-cost, high-efficiency perovskite solar cells. The development and state-of-the-art results of perovskite solar cell technologies are also introduced.

  10. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique for simulating large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, a time-driven hard-sphere (TDHS) algorithm with a larger time step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude of speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
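
    The two reported speedups act on independent cost factors (the number of time steps and the number of tracked entities), so they compound multiplicatively. A back-of-the-envelope estimate using only the ranges quoted in the abstract:

```python
# Speedup ranges quoted in the abstract.
tdhs_speedup = (20, 60)       # larger TDHS time step: 20-60x
cg_speedup = (100, 1000)      # coarse-graining: 2-3 orders of magnitude

# Independent cost factors multiply, so the combined speedup is the
# product of the two ranges: roughly 3.3 to 4.8 orders of magnitude.
combined = (tdhs_speedup[0] * cg_speedup[0],
            tdhs_speedup[1] * cg_speedup[1])
# combined == (2000, 60000)
```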

  11. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique for simulating large-scale fluidized bed reactors, because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, a time-driven hard-sphere (TDHS) algorithm with a larger time step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude of speedup. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  12. A high-performance dual-scale porous electrode for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    Zhou, X. L.; Zeng, Y. K.; Zhu, X. B.; Wei, L.; Zhao, T. S.

    2016-09-01

    In this work, we present a simple and cost-effective method to form a dual-scale porous electrode by KOH activation of the fibers of carbon papers. The large pores (∼10 μm), formed between carbon fibers, serve as macroscopic pathways for high electrolyte flow rates, while the small pores (∼5 nm), formed on the carbon fiber surfaces, act as active sites for rapid electrochemical reactions. It is shown that the Brunauer-Emmett-Teller specific surface area of the carbon paper is increased by a factor of 16 while maintaining the same hydraulic permeability as the original carbon paper electrode. We then apply the dual-scale electrode to a vanadium redox flow battery (VRFB) and demonstrate an energy efficiency ranging from 82% to 88% at current densities of 200-400 mA cm-2, the highest performance of a VRFB reported in the open literature.

  13. 13.2% efficiency Si nanowire/PEDOT:PSS hybrid solar cell using a transfer-imprinted Au mesh electrode

    PubMed Central

    Park, Kwang-Tae; Kim, Han-Jung; Park, Min-Joon; Jeong, Jun-Ho; Lee, Jihye; Choi, Dae-Geun; Lee, Jung-Ho; Choi, Jun-Hyuk

    2015-01-01

    In recent years, the inorganic/organic hybrid solar cell concept has received growing attention as an alternative energy solution because of its potential for facile, low-cost fabrication and high efficiency. Here, we report highly efficient hybrid solar cells based on silicon nanowires (SiNWs) and poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) using transfer-imprinted metal mesh front electrodes. Such a structure increases optical absorption and shortens the carrier transport distance, greatly increasing the charge carrier collection efficiency. Compared with hybrid cells formed using indium tin oxide (ITO) electrodes, we find an increase in power conversion efficiency from 5.95% to 13.2%, which is attributed to improvements in both the electrical and optical properties of the Au mesh electrode. Our fabrication strategy for the metal mesh electrode is suitable for the large-scale fabrication of flexible transparent electrodes, paving the way towards low-cost, high-efficiency, flexible solar cells. PMID:26174964

  14. Barriers to Building Energy Efficiency (BEE) promotion: A transaction costs perspective

    NASA Astrophysics Data System (ADS)

    Qian Kun, Queena

    Worldwide, buildings account for a surprisingly high 40% of global energy consumption, and the resulting carbon footprint significantly exceeds that of all forms of transportation combined. Large and attractive opportunities exist to reduce buildings' energy use at lower costs and higher returns than in other sectors. This thesis analyzes the concerns of the market stakeholders, mainly real estate developers and end-users, in terms of transaction costs as they make decisions about investing in Building Energy Efficiency (BEE). It provides a detailed analysis of the current situation and future prospects for BEE adoption by the market's stakeholders, delineates the market, and lays out the economic and institutional barriers to the large-scale deployment of energy-efficient building techniques. The aim of this research is to investigate the barriers raised by transaction costs that hinder market stakeholders from investing in BEE. It explains interactions among stakeholders in general and in the specific case of Hong Kong as they consider transaction costs, focusing on the influence of transaction costs on stakeholders' decision-making during the entire process of real estate development. The objectives are: 1) to establish an analytical framework for understanding the barriers to BEE investment with consideration of transaction costs; 2) to build a theoretical game model of decision making among the BEE market stakeholders; 3) to study the empirical data from questionnaire surveys of building designers and from focused interviews with real estate developers in Hong Kong; 4) to triangulate the study's empirical findings with those of the theoretical model and analytical framework. The study shows that a coherent institutional framework needs to be established to ensure that the design and implementation of BEE policies acknowledge the concerns of market stakeholders by taking transaction costs into consideration.
Regulatory and incentive options should be integrated into BEE policies to minimize efficiency gaps and to realize a sizeable increase in the number of energy-efficient buildings in the next decades. Specifically, the analysis shows that a thorough understanding of the transaction costs borne by particular stakeholders could improve the energy efficiency of buildings, even without improvements in currently available technology.

  15. Solving lot-sizing problem with quantity discount and transportation cost

    NASA Astrophysics Data System (ADS)

    Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei

    2013-04-01

    Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve, and the incorporation of heuristic methods to tackle such complex problems has become a trend over the past decade. This article considers a lot-sizing problem whose objective is to minimise total costs, including ordering, holding, purchase, and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases from a touch panel manufacturer demonstrates the practicality of these models, and a sensitivity analysis is applied to understand the impact of parameter changes on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining multi-period replenishment for touch panel manufacturing with quantity discounts and batch transportation. The contributions of this article are to construct an MIP model that obtains an optimal solution when the problem is not too complicated and to present a GA model that finds a near-optimal solution efficiently when it is.
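
    The cost structure of such a lot-sizing problem (fixed ordering cost, holding cost, and a quantity-discounted purchase price) can be illustrated with a minimal dynamic-programming sketch. This is not the article's MIP or GA model; the demand series, cost figures, and discount tier below are hypothetical.

```python
from functools import lru_cache

demand = [20, 50, 10, 40]   # hypothetical per-period demand
K = 100                     # fixed ordering cost per order
h = 1                       # holding cost per unit per period held

def unit_price(q):
    """Hypothetical quantity discount: cheaper unit price above 50 units."""
    return 8 if q >= 50 else 10

def order_cost(t, s):
    """Cost of one order placed in period t that covers periods t..s-1:
    fixed cost + discounted purchase cost + holding cost for early units."""
    q = sum(demand[t:s])
    holding = sum(h * demand[j] * (j - t) for j in range(t, s))
    return K + q * unit_price(q) + holding

@lru_cache(maxsize=None)
def best(t):
    """Minimum total cost to satisfy all demand from period t onward."""
    if t == len(demand):
        return 0
    return min(order_cost(t, s) + best(s)
               for s in range(t + 1, len(demand) + 1))
```

    With these numbers, `best(0)` returns 1250: the quantity discount makes one large order as cheap as two smaller ones despite the extra holding cost, which is exactly the trade-off the article's models resolve at scale.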

  16. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    PubMed

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network, allowing inspection of the overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in patients with complete spinal cord injury (SCI). To characterize modular reorganization of the whole brain into constituent nodes and compare network metrics between SCI and control subjects, 15 subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed, followed by parcellation of the brain into 116 regions of interest (ROIs). Correlation analysis was performed between every ROI pair to construct connectivity matrices, and ROIs were categorized into distinct modules. Subsequently, the local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional networks of the SCI and control subjects into nine and seven modules, respectively. The individual modules differed across groups in the number and composition of constituent nodes. LE showed a statistically significant decrease at multiple cost levels in SCI subjects, whereas GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis to studying complex brain networks. Comparing modules across groups revealed differences in the number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
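
    Global efficiency, one of the two metrics compared above, is the mean inverse shortest-path length over all node pairs (disconnected pairs contribute zero). A minimal self-contained sketch on a toy unweighted graph; the study computed the same quantity on thresholded 116-ROI connectivity matrices.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src in an unweighted graph (dict node -> set)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs i != j."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for i in nodes:
        dist = bfs_distances(adj, i)
        total += sum(1.0 / dist[j] for j in nodes if j != i and j in dist)
    return total / (n * (n - 1))

# Toy 4-node path graph 0-1-2-3: GE = 13/18, approximately 0.722.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

    Local efficiency is computed the same way, but on the subgraph induced by each node's neighbours, which is why it is the more sensitive of the two to the loss of short-range connections.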

  17. Cost/CYP: a bottom line that helps keep CSM projects cost-efficient.

    PubMed

    1985-01-01

    In contraceptive social marketing (CSM), the objective is social good, but project managers also need to run a tight ship, trimming costs, allocating scarce funds, and monitoring their program's progress. One way CSM managers remain cost-conscious is through the concept of couple-years of protection (CYP). Devised two decades ago as an administrative tool to compare the effects of different contraceptive methods, CYP's uses have multiplied to include assessing program output and cost-effectiveness. Factors affecting cost/CYP include a project's age, sales volume, management efficiency, and product prices and line; these factors are interconnected. The cost/CYP figures given here do not include outlays for commodities. While the Agency for International Development's commodity costs alter slightly with each new purchase contract, the agency reports that a condom costs about 4 cents (US), an oral contraceptive (OC) cycle about 12 cents, and a spermicidal tablet about 7 cents. CSM projects have relatively high start-up costs: within a project's first 2 years, expenses must cover such marketing activities as research, packaging, warehousing, and heavy promotion. As a project ages, sales should grow, producing revenues that gradually amortize these initial costs. The Nepal CSM project provides an example of how cost/CYP can improve as a program ages. In 1978, the year sales began, the project's cost/CYP was about $84. For some time the project struggled to get its products to its target market and gradually overcame several major hurdles. The acquisition of jeeps eased distribution and, with the addition of another condom brand, sales increased further, bringing the cost/CYP down to $8.30 in 1981. With further sales increases and resulting revenues, the cost/CYP dropped to just over $7 in 1983. When the sales volume becomes large enough, CSM projects can achieve economies of scale, which greatly improves cost-efficiency.
    Fixed costs shrink as a proportion of total expenditures, and good project management goes hand-in-hand with increasing sales. Cost/CYP is a powerful tool, but some project strategies alter its meaning. Some projects have lowered net costs by selling products at high prices, which dilutes the social marketing credo of getting low-cost products to those in need. When this occurs, cost/CYP undergoes an identity crisis, for it no longer measures a purely social objective.
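
    The cost/CYP bottom line is simply program outlays divided by the couple-years of protection implied by sales. A sketch with hypothetical figures; CYP conversion factors (units of each method per couple-year) vary by program, and the ones below are illustrative only.

```python
# Hypothetical annual figures for a CSM project (illustrative only).
program_cost = 250_000.0    # outlays excluding commodities, US$
sales = {"condom": 1_200_000, "oc_cycle": 390_000}   # units sold

# Units needed to protect one couple for one year (program-specific).
units_per_cyp = {"condom": 120, "oc_cycle": 13}

cyp = sum(sales[p] / units_per_cyp[p] for p in sales)   # 40,000 CYP
cost_per_cyp = program_cost / cyp                       # $6.25/CYP
```

    Note the caveat from the text: if products are sold at high prices to offset costs, cost/CYP improves arithmetically while the social objective it is meant to measure erodes.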

  18. Production technology for high efficiency ion implanted solar cells

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, A. R.; Minnucci, J. A.; Greenwald, A. C.; Josephs, R. H.

    1978-01-01

    Ion implantation is being developed for high volume automated production of silicon solar cells. An implanter designed for solar cell processing and able to properly implant up to 300 4-inch wafers per hour is now operational. A machine to implant 180 sq m/hr of solar cell material has been designed. Implanted silicon solar cells with efficiencies exceeding 16% AM1 are now being produced and higher efficiencies are expected. Ion implantation and transient processing by pulsed electron beams are being integrated with electrostatic bonding to accomplish a simple method for large scale, low cost production of high efficiency solar cell arrays.

  19. The opportunities and challenges of large-scale molecular approaches to songbird neurobiology

    PubMed Central

    Mello, C.V.; Clayton, D.F.

    2014-01-01

    High-throughput methods for analyzing genome structure and function are having a large impact on songbird neurobiology. These methods include genome sequencing and annotation, comparative genomics, DNA microarrays and transcriptomics, and the development of a brain atlas of gene expression. Key emerging findings include the identification of complex transcriptional programs active during singing, robust brain expression of non-coding RNAs, evidence of profound variations in gene expression across brain regions, and the identification of molecular specializations within song production and learning circuits. Current challenges include the statistical analysis of large datasets, effective genome curation, the efficient localization of gene expression changes to specific neuronal circuits and cells, and the dissection of behavioral and environmental factors that influence brain gene expression. The field requires efficient methods for comparisons with organisms like chicken, which offer important anatomical, functional, and behavioral contrasts. As sequencing costs plummet, opportunities emerge for comparative approaches that may help reveal the evolutionary transitions contributing to vocal learning, social behavior, and other properties that make songbirds such compelling research subjects. PMID:25280907

  20. Solving large scale unit dilemma in electricity system by applying commutative law

    NASA Astrophysics Data System (ADS)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

    The conventional system pools resources, with large centralized power plants interconnected as a network; it provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants require huge capital, and further problems hinder the construction of big power plants and their associated transmission lines. By applying the commutative law of multiplication, ab = ba for all a, b ∈ R, the problems associated with the conventional system described above can be reduced. The idea of having many small units rather than a few large plants, namely "Listrik Kerakyatan" (LK), provides both social and environmental benefits that can be capitalized under proper assumptions. This study compares the costs and benefits of LK to those of the conventional system, using simulation to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate for less than 11 hours per day as peaker or load-follower plants to improve the load-curve balance of the power system, and they indicate that the investment cost of LK plants should be optimized to minimize plant investment cost. This study indicates that the benefit of the economies-of-scale principle does not always apply, particularly if the portion of intangible costs and benefits is relatively high.
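
    The commutative-law argument (1 plant of 1000 MW supplies the same capacity as 1000 plants of 1 MW) can be sketched numerically. The capacity and capital-cost figures below are hypothetical, chosen only to show the trade-off the study examines.

```python
# Hypothetical figures (illustrative only): one large plant vs. many
# small "Listrik Kerakyatan" (LK) units of the same total capacity.
large_plant = {"units": 1,    "mw_per_unit": 1000.0, "capex_per_mw": 1.0e6}
lk_system   = {"units": 1000, "mw_per_unit": 1.0,    "capex_per_mw": 1.3e6}

def capacity_mw(plant):
    return plant["units"] * plant["mw_per_unit"]

def total_capex(plant):
    return capacity_mw(plant) * plant["capex_per_mw"]

# Commutative law: 1 x 1000 MW == 1000 x 1 MW.
assert capacity_mw(large_plant) == capacity_mw(lk_system)

# Small units typically cost more per MW; the study argues this gap can
# be offset by intangible benefits and peaker/load-follower operation.
capex_gap = total_capex(lk_system) - total_capex(large_plant)
```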

  1. Low-cost electrodes for stable perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Bastos, João P.; Manghooli, Sara; Jaysankar, Manoj; Tait, Jeffrey G.; Qiu, Weiming; Gehlhaar, Robert; De Volder, Michael; Uytterhoeven, Griet; Poortmans, Jef; Paetzold, Ulrich W.

    2017-06-01

    Cost-effective production of perovskite solar cells on an industrial scale requires the use of exclusively inexpensive materials. However, to date, highly efficient and stable perovskite solar cells rely on expensive gold electrodes, since other metal electrodes are known to cause degradation of the devices. Finding a low-cost electrode that can replace gold and ensure both efficiency and long-term stability is essential for the success of perovskite-based solar cell technology. In this work, we systematically compare three types of electrode materials: multi-walled carbon nanotubes (MWCNTs), alternative metals (silver, aluminum, and copper), and transparent oxides [indium tin oxide (ITO)] in terms of efficiency, stability, and cost. We show that multi-walled carbon nanotubes are the only electrode that is both more cost-effective and more stable than gold. Devices with multi-walled carbon nanotube electrodes present remarkable shelf-life stability, with no decrease in efficiency even after 180 h of storage at 77% relative humidity (RH). Furthermore, we demonstrate the potential of devices with multi-walled carbon nanotube electrodes to achieve high efficiencies. These developments are an important step toward mass-producing perovskite photovoltaics in a commercially viable way.

  2. Eco-friendly Energy Storage System: Seawater and Ionic Liquid Electrolyte.

    PubMed

    Kim, Jae-Kwang; Mueller, Franziska; Kim, Hyojin; Jeong, Sangsik; Park, Jeong-Sun; Passerini, Stefano; Kim, Youngsik

    2016-01-08

    As existing battery technologies struggle to meet the requirements for widespread use in large-scale energy storage, novel concepts are urgently needed for batteries that have high energy densities, low costs, and high levels of safety. Here, a novel eco-friendly energy storage system (ESS) using seawater and an ionic liquid is proposed for the first time; this represents an intermediate system between a battery and a fuel cell, and is accordingly referred to as a hybrid rechargeable cell. Compared to conventional organic electrolytes, the ionic liquid electrolyte significantly enhances the cycle performance of the seawater hybrid rechargeable system, acting as a very stable interface layer between the Sn-C (Na storage) anode and the NASICON (Na3Zr2Si2PO12) ceramic solid electrolyte, making this system extremely promising for cost-efficient and environmentally friendly large-scale energy storage.

  3. Highly improved voltage efficiency of seawater battery by use of chloride ion capturing electrode

    NASA Astrophysics Data System (ADS)

    Kim, Kyoungho; Hwang, Soo Min; Park, Jeong-Sun; Han, Jinhyup; Kim, Junsoo; Kim, Youngsik

    2016-05-01

    A cost-effective and eco-friendly battery system with high energy density is highly desirable. Herein, we report a seawater battery with high voltage efficiency, in which a chloride ion-capturing electrode (CICE) consisting of Ag foil is utilized as the cathode. The use of Ag as the cathode leads to a sharp decrease in the voltage gap between the charge and discharge curves, based on the reversible redox reaction of Ag/AgCl (at ∼2.9 V vs. Na+/Na) in a seawater catholyte during cycling. The Ag/AgCl reaction proves to be highly reversible during battery cycling. The battery employing the Ag electrode shows excellent cycling performance, with a high Coulombic efficiency (98.6-98.7%) and a highly improved voltage efficiency (90.3%, compared to 73% for a carbonaceous cathode) over 20 cycles (500 h in total). These findings demonstrate that seawater batteries using a CICE could serve as next-generation batteries for large-scale stationary energy storage plants.

  4. Universal Health Insurance in India: Ensuring Equity, Efficiency, and Quality

    PubMed Central

    Prinja, Shankar; Kaur, Manmeet; Kumar, Rajesh

    2012-01-01

    The Indian health system is characterized by a vast public health infrastructure that lies underutilized and a largely unregulated private market that caters to the greater need for curative treatment. High out-of-pocket (OOP) health expenditures pose a barrier to accessing healthcare. Among those who are hospitalized, nearly 25% are pushed below the poverty line by the catastrophic impact of OOP healthcare expenditure. Moreover, healthcare costs are spiraling due to epidemiologic, demographic, and social transition. Hence, the need for risk pooling is imperative. The present article applies economic theories to various possibilities for providing a risk pooling mechanism with the objective of ensuring equity, efficiency, and quality care. Asymmetry of information leads to failure of actuarially administered private health insurance (PHI). The large proportion of informal-sector labor in India's workforce prevents major upscaling of social health insurance (SHI). Community health insurance schemes are difficult to replicate on a large scale. We strongly recommend institutionalization of a tax-funded Universal Health Insurance Scheme (UHIS), with a complementary role for PHI. The contextual factors for development of the UHIS are favorable. SHI schemes should be merged with the UHIS. The benefit package of this scheme should include preventive and in-patient curative care to begin with, and gradually include out-patient care. State-specific priorities should be incorporated in the benefit package. Application of such an insurance system, besides being essential to the goals of an effective health system, provides an opportunity to regulate the private market, negotiate costs, and plan health services efficiently. A purchaser-provider split provides an opportunity to strengthen the public sector by allowing providers to compete. PMID:23112438

  5. Improving Healthcare Quality in the United States: A New Approach.

    PubMed

    Nix, Kathryn A; O'Shea, John S

    2015-06-01

    Improving the quality of health care has been a focus of health reformers during the last 2 decades, yet meaningful and sustainable quality improvement has remained elusive in many ways. Although a number of individual institutions have made great strides toward more effective and efficient care, progress has not gone far enough on a national scale. Barriers to quality of care lie in fundamental, systemwide factors that impede large-scale change. Notable among these is the third-party financing arrangement that dominates the healthcare system. Long-term goals for healthcare reform should address this barrier to higher quality of care. A new model for healthcare financing that includes patient awareness of the cost of care will encourage better quality and reduced spending by engaging patients in the pursuit of value, aligning incentives for insurers to reduce costs with patients' desire to receive excellent care, and holding providers accountable for the quality and cost of the care they provide. Several new programs implemented under the Patient Protection and Affordable Care Act aim to catalyze improvement in the quality of care, but the law takes the wrong approach, directing incentives at providers only and maintaining a system that excludes patients from the search for high-value care.

  6. Scaling-Up Successfully: Pathways to Replication for Educational NGOs

    ERIC Educational Resources Information Center

    Jowett, Alice; Dyer, Caroline

    2012-01-01

    Non-government organisations (NGOs) are big players in international development, critical to the achievement of the Millennium Development Goals (MDGs) and constantly under pressure to "achieve more". Scaling-up their initiatives successfully and sustainably can be an efficient and cost effective way for NGOs to increase their impact across a…

  7. Vegetated treatment area (VTA) efficiencies for E. coli and nutrient removal on small-scale swine operations

    USDA-ARS?s Scientific Manuscript database

    As small-scale animal feeding operations work to manage their byproducts and avoid regulation, they need practical, cost-effective methods to reduce environmental impact. One such option is using vegetative treatment areas (VTAs) with perennial grasses to treat runoff; however, research is limited ...

  8. Aeroelastic Stability Investigations for Large-scale Vertical Axis Wind Turbines

    NASA Astrophysics Data System (ADS)

    Owens, B. C.; Griffith, D. T.

    2014-06-01

    The availability of offshore wind resources in coastal regions, along with a high concentration of load centers in these areas, makes offshore wind energy an attractive opportunity for clean renewable electricity production. High infrastructure costs such as the offshore support structure and operation and maintenance costs for offshore wind technology, however, are significant obstacles that need to be overcome to make offshore wind a more cost-effective option. A vertical-axis wind turbine (VAWT) rotor configuration offers a potential transformative technology solution that significantly lowers cost of energy for offshore wind due to its inherent advantages for the offshore market. However, several potential challenges exist for VAWTs and this paper addresses one of them with an initial investigation of dynamic aeroelastic stability for large-scale, multi-megawatt VAWTs. The aeroelastic formulation and solution method from the BLade Aeroelastic STability Tool (BLAST) for HAWT blades was employed to extend the analysis capability of a newly developed structural dynamics design tool for VAWTs. This investigation considers the effect of configuration geometry, material system choice, and number of blades on the aeroelastic stability of a VAWT, and provides an initial scoping for potential aeroelastic instabilities in large-scale VAWT designs.

  9. Large-scale modular biofiltration system for effective odor removal in a composting facility.

    PubMed

    Lin, Yueh-Hsien; Chen, Yu-Pei; Ho, Kuo-Ling; Lee, Tsung-Yih; Tseng, Ching-Ping

    2013-01-01

    Several foul odors, such as nitrogen-containing compounds, sulfur-containing compounds, and short-chain fatty acids, are commonly emitted from composting facilities. In this study, an experimental laboratory-scale bioreactor was scaled up to build a large-scale modular biofiltration system that can process 34 m3 min-1 of waste gases. This modular reactor system proved effective in eliminating odors, with 97% removal efficiency for 96 ppm ammonia, 98% removal efficiency for 220 ppm amines, and 100% removal efficiency for other odorous substances. The operational parameters indicate that this modular biofiltration system offers long-term operational stability. Specifically, a low pressure drop (<45 mmH2O m-1) was observed, indicating that the packing carrier in the bioreactor units does not require frequent replacement. Thus, this modular biofiltration system can be used in field applications to eliminate various odors within a compact working volume.
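
    The removal efficiencies quoted above are defined by the inlet and outlet concentrations across the biofilter. A minimal sketch, with the outlet value back-calculated from the reported ammonia figures (the abstract gives only inlet concentration and percent removal):

```python
def removal_efficiency(c_in, c_out):
    """Percent of an odorant removed between biofilter inlet and outlet."""
    return 100.0 * (c_in - c_out) / c_in

# 96 ppm ammonia at 97% removal implies an outlet of about 2.9 ppm.
nh3_out = 96 * (1 - 0.97)
```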

  10. How to Establish and Follow up a Large Prospective Cohort Study in the 21st Century--Lessons from UK COSMOS.

    PubMed

    Toledano, Mireille B; Smith, Rachel B; Brook, James P; Douglass, Margaret; Elliott, Paul

    2015-01-01

    Large-scale prospective cohort studies are invaluable in epidemiology, but they are increasingly difficult and costly to establish and follow-up. More efficient methods for recruitment, data collection and follow-up are essential if such studies are to remain feasible with limited public and research funds. Here, we discuss how these challenges were addressed in the UK COSMOS cohort study where fixed budget and limited time frame necessitated new approaches to consent and recruitment between 2009-2012. Web-based e-consent and data collection should be considered in large scale observational studies, as they offer a streamlined experience which benefits both participants and researchers and save costs. Commercial providers of register and marketing data, smartphones, apps, email, social media, and the internet offer innovative possibilities for identifying, recruiting and following up cohorts. Using examples from UK COSMOS, this article sets out the dos and don'ts for today's cohort studies and provides a guide on how best to take advantage of new technologies and innovative methods to simplify logistics and minimise costs. Thus a more streamlined experience to the benefit of both research participants and researchers becomes achievable.

  11. How to Establish and Follow up a Large Prospective Cohort Study in the 21st Century - Lessons from UK COSMOS

    PubMed Central

    Toledano, Mireille B.; Smith, Rachel B.; Brook, James P.; Douglass, Margaret; Elliott, Paul

    2015-01-01

    Large-scale prospective cohort studies are invaluable in epidemiology, but they are increasingly difficult and costly to establish and follow-up. More efficient methods for recruitment, data collection and follow-up are essential if such studies are to remain feasible with limited public and research funds. Here, we discuss how these challenges were addressed in the UK COSMOS cohort study where fixed budget and limited time frame necessitated new approaches to consent and recruitment between 2009-2012. Web-based e-consent and data collection should be considered in large scale observational studies, as they offer a streamlined experience which benefits both participants and researchers and save costs. Commercial providers of register and marketing data, smartphones, apps, email, social media, and the internet offer innovative possibilities for identifying, recruiting and following up cohorts. Using examples from UK COSMOS, this article sets out the dos and don’ts for today's cohort studies and provides a guide on how best to take advantage of new technologies and innovative methods to simplify logistics and minimise costs. Thus a more streamlined experience to the benefit of both research participants and researchers becomes achievable. PMID:26147611

  12. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  13. Iron-Air Rechargeable Battery: A Robust and Inexpensive Iron-Air Rechargeable Battery for Grid-Scale Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-10-01

    GRIDS Project: USC is developing an iron-air rechargeable battery for large-scale energy storage that could help integrate renewable energy sources into the electric grid. Iron-air batteries have the potential to store large amounts of energy at low cost: iron is inexpensive and abundant, while oxygen is freely obtained from the air we breathe. However, current iron-air battery technologies have suffered from low efficiency and short life spans. USC is working to dramatically increase the efficiency of the battery by placing chemical additives on the battery's iron-based electrode and restructuring the catalysts at the molecular level on the battery's air-based electrode. This can help the battery resist degradation and increase life span. The goal of the project is to develop a prototype iron-air battery at a significantly lower cost than today's best commercial batteries.

  14. Socioeconomic aspects of neglected tropical diseases.

    PubMed

    Conteh, Lesong; Engels, Thomas; Molyneux, David H

    2010-01-16

    Although many examples of highly cost-effective interventions to control neglected tropical diseases exist, our understanding of the full economic effect that these diseases have on individuals, households, and nations needs to be improved to target interventions more effectively and equitably. We review data for the effect of neglected tropical diseases on a population's health and economy. We also present evidence on the costs, cost-effectiveness, and financing of strategies to monitor, control, or reduce morbidity and mortality associated with these diseases. We explore the potential for economies of scale and scope in terms of the costs and benefits of successfully delivering large-scale and integrated interventions. The low cost of neglected tropical disease control is driven by four factors: the commitment of pharmaceutical companies to provide free drugs; the scale of programmes; the opportunities for synergising delivery modes; and the often non-remunerated volunteer contribution of communities and teachers in drug distribution. Finally, we make suggestions for future economic research. Copyright 2010 Elsevier Ltd. All rights reserved.

  15. Efficient and equitable spatial allocation of renewable power plants at the country scale

    NASA Astrophysics Data System (ADS)

    Drechsler, Martin; Egerer, Jonas; Lange, Martin; Masurowski, Frank; Meyerhoff, Jürgen; Oehlmann, Malte

    2017-09-01

    Globally, the production of renewable energy is undergoing rapid growth. One of the most pressing issues is the appropriate allocation of renewable power plants, as the question of where to produce renewable electricity is highly controversial. Here we explore this issue through analysis of the efficient and equitable spatial allocation of wind turbines and photovoltaic power plants in Germany. We combine multiple methods, including legal analysis, economic and energy modelling, monetary valuation and numerical optimization. We find that minimum distances between renewable power plants and human settlements should be as small as is legally possible. Even small reductions in efficiency lead to large increases in equity. By considering electricity grid expansion costs, we find a more even allocation of power plants across the country than is the case when grid expansion costs are neglected.

  16. Large-scale multi-stage constructed wetlands for secondary effluents treatment in northern China: Carbon dynamics.

    PubMed

    Wu, Haiming; Fan, Jinlin; Zhang, Jian; Ngo, Huu Hao; Guo, Wenshan

    2018-02-01

    Multi-stage constructed wetlands (CWs) have proved to be a cost-effective alternative to conventional single-stage CWs for treating various wastewaters, offering improved treatment performance. However, few long-term, full-scale multi-stage CWs have been operated and evaluated for polishing effluents from domestic wastewater treatment plants (WWTPs). This study investigated the seasonal and spatial dynamics of carbon, and the effects of the key factors (input loading and temperature), in the large-scale seven-stage Wu River CW polishing domestic WWTP effluents in northern China. The results indicated a significant improvement in water quality. Significant seasonal and spatial variations of organics removal were observed in the Wu River CW, with a higher COD removal efficiency of 64-66% in summer and fall. Obvious seasonal and spatial variations of CH₄ and CO₂ emissions were also found, with average CH₄ and CO₂ emission rates of 3.78-35.54 mg m⁻² d⁻¹ and 610.78-8992.71 mg m⁻² d⁻¹, respectively; the higher CH₄ and CO₂ emission fluxes were obtained in spring and summer. Seasonal air temperatures and inflow COD loading rates significantly affected organics removal and CH₄ emission, but appeared to have a weak influence on CO₂ emission. Overall, this study suggests that the large-scale Wu River CW might be a potential source of greenhouse gases (GHGs); however, considering the sustainability of the multi-stage CW, an inflow COD loading rate of 1.8-2.0 g m⁻² d⁻¹ and a temperature of 15-20 °C may be the suitable conditions for achieving higher organics removal efficiency and lower GHG emissions when polishing domestic WWTP effluent. The knowledge obtained of carbon dynamics in the large-scale Wu River CW is not only helpful for understanding carbon cycles, but also provides useful field experience for the design, operation and management of multi-stage CW treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.
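    The CH₄ and CO₂ fluxes reported above can be folded into a single CO₂-equivalent flux when comparing GHG footprints. A minimal sketch, assuming a 100-year global warming potential of 25 for CH₄ (an IPCC AR4 value, not a figure from the paper):

```python
def co2_equivalent(ch4_flux, co2_flux, gwp_ch4=25.0):
    """Combine CH4 and CO2 fluxes (same units, e.g. mg m^-2 d^-1)
    into one CO2-equivalent flux using a 100-year GWP for CH4."""
    return co2_flux + gwp_ch4 * ch4_flux
```

    At the upper end of the reported ranges (35.54 mg CH₄ and 8992.71 mg CO₂ m⁻² d⁻¹), CH₄ would contribute roughly 9% of the CO₂-equivalent flux under this assumption.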

  17. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
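    The shifted subspace iteration described above can be sketched for small dense symmetric pencils as follows. This is an illustrative assumption-laden sketch: it uses dense NumPy factorizations in place of the paper's parallel banded solvers, and the function name and defaults are invented for the example.

```python
import numpy as np

def subspace_iteration(A, B, k=2, sigma=0.0, iters=100, seed=0):
    """Shift-accelerated subspace iteration for the symmetric positive
    definite generalized eigenproblem A x = lambda B x.
    Returns the k eigenvalues nearest the shift sigma (here sigma=0)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    shifted = A - sigma * B                  # factored once in a real solver
    for _ in range(iters):
        V = np.linalg.solve(shifted, B @ V)  # inverse-iteration step
        V, _ = np.linalg.qr(V)               # re-orthonormalize the basis
    # Rayleigh-Ritz projection onto the converged subspace
    Ap, Bp = V.T @ A @ V, V.T @ B @ V
    L = np.linalg.cholesky(Bp)
    Linv = np.linalg.inv(L)
    lam, W = np.linalg.eigh(Linv @ Ap @ Linv.T)
    return lam, V @ (Linv.T @ W)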

  18. Edge-SIFT: discriminative binary descriptor for scalable partial-duplicate mobile search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Lu, Ke; Huang, Qingming; Gao, Wen

    2013-07-01

    As the basis of large-scale partial-duplicate visual search on mobile devices, an image local descriptor is expected to be discriminative, efficient, and compact. Our study shows that the popularly used histogram-based descriptors, such as the scale-invariant feature transform (SIFT), are not optimal for this task. This is mainly because histogram representation is relatively expensive to compute on mobile platforms and loses significant spatial clues, which are important for improving discriminative power and matching near-duplicate image patches. To address these issues, we propose to extract a novel binary local descriptor named Edge-SIFT from the binary edge maps of scale- and orientation-normalized image patches. By preserving both the locations and orientations of edges and compressing the sparse binary edge maps with a boosting strategy, the final Edge-SIFT shows strong discriminative power with a compact representation. Furthermore, we propose a fast similarity measurement and an indexing framework with flexible online verification. Hence, Edge-SIFT allows accurate and efficient image search and is ideal for computation-sensitive scenarios such as mobile image search. Experiments on a large-scale dataset demonstrate that Edge-SIFT achieves retrieval accuracy superior to Oriented BRIEF (ORB) and surpasses SIFT in retrieval precision, efficiency, compactness, and transmission cost.
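    The fast matching that motivates binary descriptors like Edge-SIFT comes down to XOR-plus-popcount on packed bit strings. The minimal sketch below is generic, not Edge-SIFT's actual similarity measure or indexing framework:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as integers."""
    return bin(a ^ b).count("1")

def best_match(query: int, database: list) -> int:
    """Index of the database descriptor nearest to the query
    (first index wins on ties)."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))
```

    XOR and popcount map to single machine instructions on most CPUs, which is why binary codes match far faster than floating-point descriptors compared with Euclidean distance.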

  19. A novel wild-type Saccharomyces cerevisiae strain TSH1 in scaling-up of solid-state fermentation of ethanol from sweet sorghum stalks.

    PubMed

    Du, Ran; Yan, Jianbin; Feng, Quanzhou; Li, Peipei; Zhang, Lei; Chang, Sandra; Li, Shizhong

    2014-01-01

    The rising demand for bioethanol, the most common alternative to petroleum-derived fuel used worldwide, has encouraged a feedstock shift to non-food crops to reduce the competition for resources between food and energy production. Sweet sorghum has become one of the most promising non-food energy crops because of its high output and strong adaptive ability. However, the means by which sweet sorghum stalks can be cost-effectively utilized for ethanol fermentation in large-scale industrial production and commercialization remains unclear. In this study, we identified a novel Saccharomyces cerevisiae strain, TSH1, from the soil in which sweet sorghum stalks were stored. This strain exhibited excellent ethanol fermentative capacity and ability to withstand stressful solid-state fermentation conditions. Furthermore, we gradually scaled up from a 500-mL flask to a 127-m3 rotary-drum fermenter and eventually constructed a 550-m3 rotary-drum fermentation system to establish an efficient industrial fermentation platform based on TSH1. The batch fermentations were completed in less than 20 hours, with up to 96 tons of crushed sweet sorghum stalks in the 550-m3 fermenter reaching 88% of relative theoretical ethanol yield (RTEY). These results collectively demonstrate that ethanol solid-state fermentation technology can be a highly efficient and low-cost solution for utilizing sweet sorghum, providing a feasible and economical means of developing non-food bioethanol.
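    The 88% relative theoretical ethanol yield quoted above is measured against the stoichiometric maximum of 0.511 g ethanol per g of hexose sugar. A minimal sketch; the 0.511 factor is standard fermentation stoichiometry and the function name is invented, neither being stated in the abstract:

```python
def relative_theoretical_yield(ethanol_g, fermentable_sugar_g, stoich=0.511):
    """Relative theoretical ethanol yield (RTEY, %): ethanol actually
    produced over the stoichiometric maximum for the sugar consumed."""
    return 100.0 * ethanol_g / (fermentable_sugar_g * stoich)
```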

  20. A Novel Wild-Type Saccharomyces cerevisiae Strain TSH1 in Scaling-Up of Solid-State Fermentation of Ethanol from Sweet Sorghum Stalks

    PubMed Central

    Feng, Quanzhou; Li, Peipei; Zhang, Lei; Chang, Sandra; Li, Shizhong

    2014-01-01

    The rising demand for bioethanol, the most common alternative to petroleum-derived fuel used worldwide, has encouraged a feedstock shift to non-food crops to reduce the competition for resources between food and energy production. Sweet sorghum has become one of the most promising non-food energy crops because of its high output and strong adaptive ability. However, the means by which sweet sorghum stalks can be cost-effectively utilized for ethanol fermentation in large-scale industrial production and commercialization remains unclear. In this study, we identified a novel Saccharomyces cerevisiae strain, TSH1, from the soil in which sweet sorghum stalks were stored. This strain exhibited excellent ethanol fermentative capacity and ability to withstand stressful solid-state fermentation conditions. Furthermore, we gradually scaled up from a 500-mL flask to a 127-m3 rotary-drum fermenter and eventually constructed a 550-m3 rotary-drum fermentation system to establish an efficient industrial fermentation platform based on TSH1. The batch fermentations were completed in less than 20 hours, with up to 96 tons of crushed sweet sorghum stalks in the 550-m3 fermenter reaching 88% of relative theoretical ethanol yield (RTEY). These results collectively demonstrate that ethanol solid-state fermentation technology can be a highly efficient and low-cost solution for utilizing sweet sorghum, providing a feasible and economical means of developing non-food bioethanol. PMID:24736641

  1. Scale-up of hydrophobin-assisted recombinant protein production in tobacco BY-2 suspension cells.

    PubMed

    Reuter, Lauri J; Bailey, Michael J; Joensuu, Jussi J; Ritala, Anneli

    2014-05-01

    Plant suspension cell cultures are emerging as an alternative to mammalian cells for the production of complex recombinant proteins. Plant cell cultures provide low production cost, intrinsic safety and adherence to current regulations, but low yields and costly purification technology hinder their commercialization. Fungal hydrophobins have been utilized as fusion tags to improve yields and facilitate efficient low-cost purification by surfactant-based aqueous two-phase separation (ATPS) in plant, fungal and insect cells. In this work, we report the utilization of hydrophobin fusion technology in the tobacco bright yellow 2 (BY-2) suspension cell platform and the establishment of pilot-scale propagation and downstream processing, including first-step purification by ATPS. Green fluorescent protein-hydrophobin fusion (GFP-HFBI) induced the formation of protein bodies in tobacco suspension cells, thus encapsulating the fusion protein into discrete compartments. Cultivation of the BY-2 suspension cells was scaled up in standard stirred tank bioreactors up to 600 L production volume, with no apparent change in growth kinetics. Subsequently, ATPS was applied to selectively capture the GFP-HFBI product from crude cell lysate, resulting in threefold concentration, good purity and up to 60% recovery. The ATPS was scaled up to 20 L volume without loss of efficiency. This study provides the first proof of concept for large-scale hydrophobin-assisted production of recombinant proteins in tobacco BY-2 cell suspensions. © 2013 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.
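    Figures like "threefold concentration and up to 60% recovery" follow from a simple protein mass balance over the separation step. A sketch with hypothetical feed and product streams (the function and its arguments are illustrative, not from the paper):

```python
def atps_step(c_feed, v_feed, c_product, v_product):
    """Recovery (%) and concentration factor for one aqueous two-phase
    separation step, from concentrations and volumes of feed/product."""
    recovery = 100.0 * (c_product * v_product) / (c_feed * v_feed)
    return recovery, c_product / c_feed
```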

  2. Efficient harvesting of marine Chlorella vulgaris microalgae utilizing cationic starch nanoparticles by response surface methodology.

    PubMed

    Bayat Tork, Mahya; Khalilzadeh, Rasoul; Kouchakzadeh, Hasan

    2017-11-01

    Harvesting accounts for nearly thirty percent of the total production cost of microalgae and needs to be done efficiently. Utilizing inexpensive and highly available biopolymer-based flocculants can be a solution for reducing harvest costs. Herein, the flocculation of Chlorella vulgaris microalgae using cationic starch nanoparticles (CSNPs) was evaluated and optimized through response surface methodology (RSM). pH, microalgae concentration and CSNPs concentration were considered as the main independent variables. Under the optimum conditions of microalgae concentration 0.75 g dry weight/L, CSNPs concentration 7.1 mg dry weight/L and pH 11.8, a maximum flocculation efficiency of 90% was achieved. A twenty percent increase in flocculation efficiency was observed with the use of CSNPs instead of non-particulate starch, which can be attributed to stronger electrostatic interactions between the cationic nanoparticles and the microalgae. Therefore, the synthesized CSNPs can be employed as a convenient and economical flocculant for efficient harvest of Chlorella vulgaris microalgae at large scale. Copyright © 2017 Elsevier Ltd. All rights reserved.
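    Flocculation efficiency in such studies is conventionally computed from optical-density readings of the suspension before and after flocculant addition and settling. A minimal sketch; the OD-based definition is the common convention, assumed here rather than taken from the paper:

```python
def flocculation_efficiency(od_initial, od_final):
    """Flocculation efficiency (%) from supernatant optical density
    before and after flocculation and settling."""
    return 100.0 * (od_initial - od_final) / od_initial
```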

  3. Development of a millimetrically scaled biodiesel transesterification device that relies on droplet-based co-axial fluidics

    PubMed Central

    Yeh, S. I.; Huang, Y. C.; Cheng, C. H.; Cheng, C. M.; Yang, J. T.

    2016-01-01

    In this study, we investigated a fluidic system that adheres to new concepts of energy production. To improve efficiency, cost, and ease of manufacture, a millimetrically scaled device that employs a droplet-based co-axial fluidic system was devised to complete alkali-catalyzed transesterification for biodiesel production. The large surface-to-volume ratio of the droplet-based system, and the internal circulation induced inside the moving droplets, significantly enhanced the reaction rate of the immiscible liquids used here – soybean oil and methanol. This device also decreased the molar ratio between methanol and oil to near the stoichiometric coefficients of a balanced chemical equation, which enhanced the total biodiesel volume produced and decreased the costs of purification and recovery of excess methanol. In this work, the droplet-based co-axial fluidic system performed better than other methods of continuous-flow production. We achieved an efficiency much greater than that of reported systems. This study demonstrated the high potential of droplet-based fluidic chips for energy production. The small energy consumption and low cost of the highly purified biodiesel transesterification system described conform to the requirements of distributed energy (inexpensive production on a moderate scale) worldwide. PMID:27426677

  4. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies

    PubMed Central

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets. PMID:25937948

  5. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies.

    PubMed

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets.
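    Because the per-sample figures above describe an embarrassingly parallel workload, wall-clock time scales with the number of concurrent instances while dollar cost scales with sample count. A back-of-the-envelope sketch using the reported figures (~7 hours and $3.50 per 100M-read sample); the instance count is a hypothetical parameter, not from the paper:

```python
import math

def batch_estimate(n_samples, hours_per_sample=7.0,
                   cost_per_sample=3.50, instances=20):
    """Rough wall-clock hours and total dollar cost for a cloud RNA-Seq
    batch processed in waves of `instances` concurrent machines."""
    waves = math.ceil(n_samples / instances)
    return waves * hours_per_sample, n_samples * cost_per_sample
```

    For the 178-sample test set with 20 concurrent instances, this estimates 9 waves (63 hours) at a total of $623.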

  6. Cost-effectiveness analysis and innovation.

    PubMed

    Jena, Anupam B; Philipson, Tomas J

    2008-09-01

    While cost-effectiveness (CE) analysis has provided a guide to allocating often scarce resources spent on medical technologies, less emphasis has been placed on the effect of such criteria on the behavior of the innovators who make health care technologies available in the first place. A better understanding of the link between innovation and cost-effectiveness analysis is particularly important given the large role of technological change in the growth of health care spending and the growing interest in the explicit use of CE thresholds to guide technology adoption in several Westernized countries. We analyze CE analysis in a standard market context, and stress that a technology's cost-effectiveness is closely related to the consumer surplus it generates. Improved CE therefore often clashes with interventions to stimulate producer surplus, such as patents. We derive the inconsistency between technology adoption based on CE analysis and economic efficiency. Indeed, static efficiency, dynamic efficiency, and improved patient health may all be induced when the cost-effectiveness of the technology is at its worst level. As producer appropriation of the social surplus of an innovation is central to the dynamic efficiency that should guide CE adoption criteria, we exemplify how appropriation can be inferred from existing CE estimates. For an illustrative sample of technologies considered, we find that the median technology has an appropriation of about 15%. To the extent that such incentives are deemed either too low or too high compared to dynamically efficient levels, CE thresholds may be appropriately raised or lowered to improve dynamic efficiency.

  7. Benefit-Cost Analysis of Foot-and-Mouth Disease Vaccination at the Farm-Level in South Vietnam.

    PubMed

    Truong, Dinh Bao; Goutard, Flavie Luce; Bertagnoli, Stéphane; Delabouglise, Alexis; Grosbois, Vladimir; Peyre, Marisa

    2018-01-01

    This study aimed to analyze the financial impact of foot-and-mouth disease (FMD) outbreaks in cattle at the farm level and the benefit-cost ratio (BCR) of a biannual vaccination strategy to prevent and eradicate FMD for cattle in South Vietnam. Production data were collected from 49 small-scale dairy farms, 15 large-scale dairy farms, and 249 beef farms of Long An and Tay Ninh province using a questionnaire. Financial data on FMD impacts were collected using participatory tools in 37 villages of Long An province. The net present value, i.e., the difference between the benefits (additional revenue and saved costs) and costs (additional costs and revenue foregone), of FMD vaccination in large-scale dairy farms was 2.8 times higher than in small-scale dairy farms and 20 times higher than in beef farms. The BCRs of FMD vaccination over 1 year in large-scale dairy farms, small-scale dairy farms, and beef farms were 11.6 [95% confidence interval (95% CI) 6.42-16.45], 9.93 (95% CI 3.45-16.47), and 3.02 (95% CI 0.76-7.19), respectively. The sensitivity analysis showed that varying the vaccination cost had more effect on the BCR of cattle vaccination than varying the market price. This benefit-cost analysis of the biannual vaccination strategy showed that investment in FMD prevention can be financially profitable, and therefore sustainable, for dairy farmers. For beef cattle, it is less certain that vaccination is profitable. An additional benefit-cost analysis of vaccination strategies at the national level would be required to evaluate and adapt the national strategy to achieve eradication of this disease in Vietnam.
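    A benefit-cost ratio like those reported above is the ratio of discounted benefits to discounted costs over the evaluation horizon. The sketch below uses hypothetical cash flows and a hypothetical discount rate, not the study's data:

```python
def benefit_cost_ratio(benefits, costs, rate=0.05):
    """BCR from yearly benefit and cost streams (year 0 first),
    each discounted at `rate`; BCR > 1 means the investment pays off."""
    pv = lambda flows: sum(f / (1.0 + rate) ** t for t, f in enumerate(flows))
    return pv(benefits) / pv(costs)
```

    For example, paying 100 now for a benefit of 110 one year later, discounted at 10%, gives a BCR of exactly 1 (break-even).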

  8. Cost-effective Expression and Purification of Antimicrobial and Host Defense Peptides in Escherichia coli

    PubMed Central

    Bommarius, B.; Jenssen, H.; Elliott, M.; Kindrachuk, J.; Pasupuleti, Mukesh; Gieren, H; Jaeger, K.-E.; Hancock, R.E. W.

    2010-01-01

    Cationic antimicrobial host defense peptides (HDPs) combat infection by directly killing a wide variety of microbes and/or modulating host immunity. HDPs have great therapeutic potential against antibiotic-resistant bacteria, viruses and even parasites, but there are substantial roadblocks to their therapeutic application. High manufacturing costs associated with amino acid precursors have limited the delivery of inexpensive therapeutics through industrial-scale chemical synthesis. Conversely, the production of peptides in bacteria by recombinant DNA technology has been impeded by the antimicrobial activity of these peptides and their susceptibility to proteolytic degradation, while subsequent purification of recombinant peptides often requires multiple steps and has not been cost-effective. Here we have developed methodologies appropriate for large-scale industrial production of HDPs; in particular, we describe (i) a method, using fusions to SUMO, for producing high yields of intact recombinant HDPs in bacteria without significant toxicity; and (ii) a simplified 2-step purification method appropriate for industrial use. We have used this method to produce seven HDPs to date (IDR1, MX226, LL37, CRAMP, HHC-10, E5 and E6). Using this technology, pilot-scale fermentation (10 L) was performed to produce large quantities of biologically active cationic peptides. Together, these data indicate that this new method represents a cost-effective means to enable commercial enterprises to produce HDPs at large scale under Good Manufacturing Practice (GMP) conditions for therapeutic application in humans. PMID:20713107

  9. Potential of powdered activated mustard cake for decolorising raw sugar.

    PubMed

    Singh, Kaman; Bharose, Ram; Verma, Sudhir Kumar; Singh, Vimalesh Kumar

    2013-01-15

    Carbon decolorisation has become customary in the food processing industries; however, it is not economical. Extensive research has therefore been directed towards investigating potential substitutes for commercial activated carbons which might have the advantage of offering an effective, lower-cost replacement for existing bone char or coal-based granular activated carbon (GAC). The physical (bulk density and hardness), chemical (pH and mineral content) and adsorption characteristics (iodine test, molasses test and raw sugar decolorisation efficiency) of powdered activated mustard cake (PAMC) made from de-oiled mustard cake were determined and compared to commercial adsorbents. Although the colour removal efficiency of the PAMC is lower than that of commercial materials, it is cost effective and eco-friendly compared to the existing decolorisation/refining processes. To reduce the load on GAC/activated carbon/charcoal, PAMC could be used on an industrial scale. A decolorisation mechanism has been postulated on the basis of the oxygen surface functionalities and surface charge of the PAMC; accordingly, charge-transfer interaction seems to be responsible for the decolorisation mechanism. In addition, a complex interplay of electrostatic and dispersive interactions seems to be involved during the decolorisation process. A low-cost agricultural waste product in the form of de-oiled mustard cake was converted to an efficient adsorbent, PAMC, for use in decolorising raw as well as coloured sugar solutions. The physical, chemical and adsorption characteristics and raw sugar decolorisation efficiency of PAMC were determined and compared to those of commercial adsorbents. The colour removal efficiency of the PAMC is lower than that of commercial materials, but it is cost effective and eco-friendly compared to existing decolorisation/refining processes. The availability of the raw material for the production of PAMC further supports its use on an industrial scale. Copyright © 2012 Society of Chemical Industry.

  10. Supermassive Black Hole Binaries in High Performance Massively Parallel Direct N-body Simulations on Large GPU Clusters

    NASA Astrophysics Data System (ADS)

    Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.

    2012-07-01

    Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes at the large scale and across many GPU thread processors on each node at the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, along with studies for the largest GPU clusters in China reaching up to Petaflop/s performance on 7000 Fermi GPU cards. In our case study we look at two supermassive black holes, with equal and unequal masses, embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, the effects of rotation in the stellar system, and the relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
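    The compute kernel that such codes accelerate on GPUs is the direct O(N²) pairwise force summation. A plain NumPy sketch of that kernel (with standard Plummer softening `eps` and G = 1 as assumptions); this is not the Hermite-scheme production code itself:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations for N bodies;
    pos is (N, 3), mass is (N,). O(N^2) pairwise interactions."""
    d = pos[None, :, :] - pos[:, None, :]        # d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(axis=-1) + eps ** 2        # softened squared distance
    np.fill_diagonal(r2, np.inf)                 # exclude self-interaction
    return (d * (mass[None, :, None] / r2[:, :, None] ** 1.5)).sum(axis=1)
```

    Because every pair is independent, this summation parallelizes naturally across GPU threads, which is what makes the sustained Tflop/s figures above achievable.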

  11. Learning to assign binary weights to binary descriptor

    NASA Astrophysics Data System (ADS)

    Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun

    2016-10-01

    Constructing robust binary local feature descriptors is receiving increasing interest due to their binary nature, which enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the computational cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit may contribute differently to distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed using an efficient alternating greedy strategy, which significantly improves the discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (the Brown dataset and the Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.

  12. Prioritizing Land and Sea Conservation Investments to Protect Coral Reefs

    PubMed Central

    Klein, Carissa J.; Ban, Natalie C.; Halpern, Benjamin S.; Beger, Maria; Game, Edward T.; Grantham, Hedley S.; Green, Alison; Klein, Travis J.; Kininmonth, Stuart; Treml, Eric; Wilson, Kerrie; Possingham, Hugh P.

    2010-01-01

    Background: Coral reefs have exceptional biodiversity, support the livelihoods of millions of people, and are threatened by multiple human activities on land (e.g. farming) and in the sea (e.g. overfishing). Most conservation efforts occur at local scales and, when effective, can increase the resilience of coral reefs to global threats such as climate change (e.g. warming water and ocean acidification). Limited resources for conservation require that we efficiently prioritize where and how to best sustain coral reef ecosystems. Methodology/Principal Findings: Here we develop the first prioritization approach that can guide regional-scale conservation investments in land- and sea-based conservation actions that cost-effectively mitigate threats to coral reefs, and apply it to the Coral Triangle, an area of significant global attention and funding. Using information on threats to marine ecosystems, effectiveness of management actions at abating threats, and the management and opportunity costs of actions, we calculate the rate of return on investment in two conservation actions in sixteen ecoregions. We discover that marine conservation almost always trumps terrestrial conservation within any ecoregion, but terrestrial conservation in one ecoregion can be a better investment than marine conservation in another. We show how these results could be used to allocate a limited budget for conservation and compare them to priorities based on individual criteria. Conclusions/Significance: Previous prioritization approaches do not consider both land- and sea-based threats or the socioeconomic costs of conserving coral reefs. A simple and transparent approach like ours is essential to support effective coral reef conservation decisions in a large and diverse region like the Coral Triangle, but can be applied at any scale and to other marine ecosystems. PMID:20814570

  13. Prioritizing land and sea conservation investments to protect coral reefs.

    PubMed

    Klein, Carissa J; Ban, Natalie C; Halpern, Benjamin S; Beger, Maria; Game, Edward T; Grantham, Hedley S; Green, Alison; Klein, Travis J; Kininmonth, Stuart; Treml, Eric; Wilson, Kerrie; Possingham, Hugh P

    2010-08-30

    Coral reefs have exceptional biodiversity, support the livelihoods of millions of people, and are threatened by multiple human activities on land (e.g. farming) and in the sea (e.g. overfishing). Most conservation efforts occur at local scales and, when effective, can increase the resilience of coral reefs to global threats such as climate change (e.g. warming water and ocean acidification). Limited resources for conservation require that we efficiently prioritize where and how to best sustain coral reef ecosystems. Here we develop the first prioritization approach that can guide regional-scale conservation investments in land- and sea-based conservation actions that cost-effectively mitigate threats to coral reefs, and apply it to the Coral Triangle, an area of significant global attention and funding. Using information on threats to marine ecosystems, effectiveness of management actions at abating threats, and the management and opportunity costs of actions, we calculate the rate of return on investment in two conservation actions in sixteen ecoregions. We discover that marine conservation almost always trumps terrestrial conservation within any ecoregion, but terrestrial conservation in one ecoregion can be a better investment than marine conservation in another. We show how these results could be used to allocate a limited budget for conservation and compare them to priorities based on individual criteria. Previous prioritization approaches do not consider both land and sea-based threats or the socioeconomic costs of conserving coral reefs. A simple and transparent approach like ours is essential to support effective coral reef conservation decisions in a large and diverse region like the Coral Triangle, but can be applied at any scale and to other marine ecosystems.
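    The return-on-investment logic the abstract describes can be sketched as a greedy budget allocation: rank (action, ecoregion) pairs by threat reduction per dollar of combined management and opportunity cost, then fund down the list. All action names and figures below are hypothetical illustrations, not the study's actual data.

    ```python
    # Greedy allocation of a limited conservation budget by return on investment
    # (ROI). ROI = threat reduction per dollar of management + opportunity cost.
    # All names and numbers are hypothetical illustrations.

    def roi(threat_reduction, management_cost, opportunity_cost):
        """Rate of return: units of threat abated per dollar invested."""
        return threat_reduction / (management_cost + opportunity_cost)

    def allocate(actions, budget):
        """Fund actions in descending ROI order until the budget runs out."""
        funded = []
        for name, benefit, mgmt, opp in sorted(
                actions, key=lambda a: roi(a[1], a[2], a[3]), reverse=True):
            cost = mgmt + opp
            if cost <= budget:
                funded.append(name)
                budget -= cost
        return funded

    actions = [
        # (action, threat reduction, management cost, opportunity cost)
        ("marine_A", 8.0, 2.0, 1.0),   # ROI ~ 2.67
        ("land_B",   6.0, 1.0, 1.0),   # ROI = 3.0
        ("marine_C", 4.0, 4.0, 2.0),   # ROI ~ 0.67
    ]
    print(allocate(actions, budget=5.0))  # ['land_B', 'marine_A']
    ```

    This mirrors the abstract's key observation: a terrestrial action in one ecoregion can outrank a marine action in another once both are expressed on a common dollars-per-benefit scale.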

  14. A safe, effective, and facility compatible cleaning in place procedure for affinity resin in large-scale monoclonal antibody purification.

    PubMed

    Wang, Lu; Dembecki, Jill; Jaffe, Neil E; O'Mara, Brian W; Cai, Hui; Sparks, Colleen N; Zhang, Jian; Laino, Sarah G; Russell, Reb J; Wang, Michelle

    2013-09-20

    Cleaning-in-place (CIP) for column chromatography plays an important role in therapeutic protein production. A robust and efficient CIP procedure ensures product quality, improves column lifetime, and reduces the cost of purification processes, particularly those using expensive affinity resins such as MabSelect protein A resin. Cleaning efficiency, resin compatibility, and facility compatibility are the three major aspects to consider in CIP process design. Cleaning MabSelect resin with 50 mM sodium hydroxide (NaOH) along with 1 M sodium chloride is one of the most popular cleaning procedures in the biopharmaceutical industry. However, high-concentration sodium chloride is a leading cause of corrosion in the stainless steel containers used in large-scale manufacturing. Corroded containers may introduce metal contaminants into purified drug products. It is therefore challenging to apply this cleaning procedure in commercial manufacturing because of facility compatibility and drug safety concerns. This paper reports a safe, effective, environmentally friendly, and facility-compatible cleaning procedure that is suitable for large-scale affinity chromatography. An alternative salt (sodium sulfate) is used to prevent the stainless steel corrosion caused by sodium chloride. Sodium hydroxide and salt concentrations were optimized using a high-throughput screening approach to achieve the best combination of facility compatibility, cleaning efficiency, and resin stability. Additionally, benzyl alcohol is applied to achieve more effective microbial control. Based on these findings, the recommended cleaning strategy is to clean MabSelect resin with 25 mM NaOH, 0.25 M Na2SO4, and 1% benzyl alcohol solution every cycle, followed by a more stringent cleaning with 50 mM NaOH, 0.25 M Na2SO4, and 1% benzyl alcohol at the end of each manufacturing campaign.
A resin life cycle study using the MabSelect affinity resin demonstrates that the new cleaning strategy prolongs resin lifetime and consistently delivers high-purity drug products. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes

    PubMed Central

    Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.

    2015-01-01

    Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529
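    Given the near-linear scalability the authors report, extrapolating compute cost reduces to a simple product of per-sample cost and sample count; this sketch reproduces their reported figures approximately and is only a back-of-the-envelope model.

    ```python
    # Back-of-the-envelope cloud-cost model under near-linear scaling:
    # total cost ~ per-sample cost x number of samples.

    PER_SAMPLE_USD = 7.33  # reported compute cost per sample
    SAMPLES = 2535         # 1000 Genomes Phase 3 sample count

    def total_cost(n_samples, per_sample=PER_SAMPLE_USD):
        return round(n_samples * per_sample, 2)

    print(total_cost(SAMPLES))  # 18581.55, close to the reported $18,590 for completed jobs
    print(total_cost(10000))    # hypothetical larger study: 73300.0
    ```

    The small gap between $18,581.55 and the reported $18,590 is consistent with the per-sample figure being rounded.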

  16. Solution Adsorption Formation of a π-Conjugated Polymer/Graphene Composite for High-Performance Field-Effect Transistors.

    PubMed

    Liu, Yun; Hao, Wei; Yao, Huiying; Li, Shuzhou; Wu, Yuchen; Zhu, Jia; Jiang, Lei

    2018-01-01

    Semiconducting polymers with π-conjugated electronic structures have potential application in the large-scale printable fabrication of high-performance electronic and optoelectronic devices. However, owing to their poor environmental stability and high-cost synthesis, polymer semiconductors have seen limited device implementation. Here, an approach for constructing a π-conjugated polymer/graphene composite material to circumvent these limitations is provided, and this material is then patterned into 1D arrays. Driven by the π-π interaction, several-layer polymers can be adsorbed onto the graphene planes. The low consumption of the high-cost semiconductor polymers and the mass production of graphene contribute to the low-cost fabrication of the π-conjugated polymer/graphene composite materials. Based on the π-conjugated system, a reduced π-π stacking distance between graphene and the polymer can be achieved, yielding enhanced charge-transport properties. Owing to the incorporation of graphene, the composite material shows improved thermal stability. More generally, it is believed that the construction of the π-conjugated composite clearly shows the possibility of integrating organic molecules and 2D materials into microstructure arrays for property-by-design fabrication of functional devices with large area, low cost, and high efficiency. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Multi-object medium resolution optical spectroscopy at the E-ELT

    NASA Astrophysics Data System (ADS)

    Spanò, Paolo; Bonifacio, Piercarlo

    2008-07-01

    We present the design of a compact medium-resolution spectrograph (R~15,000-20,000), intended to operate on a 42 m telescope in seeing-limited mode. Our design takes full advantage of new-technology optical components, such as volume phase holographic (VPH) gratings. Rather than the complex large echelle spectrographs that have been the standard on 8 m class telescopes, we selected an efficient VPH spectrograph with a limited beam diameter, in order to keep overall dimensions and costs low, using proven available technologies. To obtain such a resolution, we need to moderately slice the telescope image plane onto the spectrograph entrance slit (5-6 slices). Then, the standard telescope AO mode (GLAO, Ground Layer Adaptive Optics) can be used over a large field of view (~10 arcmin) without losing efficiency. Multiplex capabilities can greatly increase the observing efficiency. A robotic pick-up mirror system can be implemented within conventional environmental conditions (temperature, pressure, gravity, size), demanding only standard mechanical and optical tolerances. A modular approach allows us to scale multiplex capabilities against overall costs and available space.

  18. Manufacturing Cost Levelization Model – A User’s Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, William R.; Shehabi, Arman; Smith, Sarah Josephine

    The Manufacturing Cost Levelization Model is a cost-performance techno-economic model that estimates the total large-scale manufacturing costs necessary to produce a given product. It is designed to provide production cost estimates for technology researchers to help guide technology research and development towards an eventual cost-effective product. The model presented in this user’s guide is generic and can be tailored to the manufacturing of any product, including the generation of electricity (as a product). This flexibility, however, requires the user to develop the processes and process efficiencies that represent a full-scale manufacturing facility. The generic model comprises several modules that estimate variable costs (material, labor, and operating), fixed costs (capital & maintenance), financing structures (debt and equity financing), and tax implications (taxable income after equipment and building depreciation, debt interest payments, and expenses) of a notional manufacturing plant. A cash-flow method is used to estimate a selling price necessary for the manufacturing plant to recover its total cost of production. A levelized unit sales price ($ per unit of product) is determined by dividing the net present value of the manufacturing plant’s expenses ($) by the net present value of its product output. A user-defined production schedule drives the cash-flow method that determines the levelized unit price. In addition, an analyst can increase the levelized unit price to include a gross profit margin to estimate a product sales price. This model allows an analyst to understand the effect that any input variable could have on the cost of manufacturing a product. In addition, the tool is able to perform sensitivity analysis, which can be used to identify the key variables and assumptions that have the greatest influence on the levelized costs.
This component is intended to help technology researchers focus their research attention on tasks that offer the greatest opportunities for cost reduction early in the research and development stages of technology invention.
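    The levelization step described above (levelized unit price = NPV of expenses divided by NPV of product output) can be sketched in a few lines. The discount rate and cash-flow schedules below are hypothetical, and the end-of-year discounting convention is an assumption.

    ```python
    # Levelized unit price sketch: NPV of plant expenses divided by NPV of
    # product output. Cash flows and discount rate are hypothetical.

    def npv(cash_flows, rate):
        """Net present value of a series of end-of-year cash flows."""
        return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

    def levelized_unit_price(expenses, units, rate):
        """$ per unit such that discounted revenue exactly recovers discounted cost."""
        return npv(expenses, rate) / npv(units, rate)

    # A constant schedule at zero discount rate reduces to a plain average:
    print(levelized_unit_price([100.0, 100.0], [10.0, 10.0], rate=0.0))  # 10.0
    # Front-loaded expenses raise the levelized price once discounting applies:
    print(levelized_unit_price([150.0, 50.0], [10.0, 10.0], rate=0.10))
    ```

    A gross profit margin, as the guide describes, would simply be a multiplicative markup on the levelized price returned here.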

  19. Large-Scale Low-Cost NGS Library Preparation Using a Robust Tn5 Purification and Tagmentation Protocol

    PubMed Central

    Hennig, Bianca P.; Velten, Lars; Racke, Ines; Tu, Chelsea Szu; Thoms, Matthias; Rybin, Vladimir; Besir, Hüseyin; Remans, Kim; Steinmetz, Lars M.

    2017-01-01

    Efficient preparation of high-quality sequencing libraries that well represent the biological sample is a key step for using next-generation sequencing in research. Tn5 enables fast, robust, and highly efficient processing of limited input material while scaling to the parallel processing of hundreds of samples. Here, we present a robust Tn5 transposase purification strategy based on an N-terminal His6-Sumo3 tag. We demonstrate that libraries prepared with our in-house Tn5 are of the same quality as those processed with a commercially available kit (Nextera XT), while they dramatically reduce the cost of large-scale experiments. We introduce improved purification strategies for two versions of the Tn5 enzyme. The first version carries the previously reported point mutations E54K and L372P, and stably produces libraries of constant fragment size distribution, even if the Tn5-to-input molecule ratio varies. The second Tn5 construct carries an additional point mutation (R27S) in the DNA-binding domain. This construct allows for adjustment of the fragment size distribution based on enzyme concentration during tagmentation, a feature that opens new opportunities for use of Tn5 in customized experimental designs. We demonstrate the versatility of our Tn5 enzymes in different experimental settings, including a novel single-cell polyadenylation site mapping protocol as well as ultralow input DNA sequencing. PMID:29118030

  20. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initiate the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for an example problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate that the MSO methodology is well suited to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel.
Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
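    The efficiency figures quoted above follow the standard definition: parallel efficiency is speedup divided by processor count. A two-line sketch makes the reported numbers concrete.

    ```python
    # Parallel efficiency = speedup / processor count; the figures above
    # (19x on 20 workstations; 75% on 31 Intel processors) plug in directly.

    def parallel_efficiency(speedup, n_procs):
        return speedup / n_procs

    def implied_speedup(efficiency, n_procs):
        return efficiency * n_procs

    print(parallel_efficiency(19, 20))  # 0.95 -> the near-linear workstation case
    print(implied_speedup(0.75, 31))    # 23.25 -> speedup implied by 75% on 31 procs
    ```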

  1. Optimizing community case management strategies to achieve equitable reduction of childhood pneumonia mortality: An application of Equitable Impact Sensitive Tool (EQUIST) in five low- and middle-income countries.

    PubMed

    Waters, Donald; Theodoratou, Evropi; Campbell, Harry; Rudan, Igor; Chopra, Mickey

    2012-12-01

    The aim of this study was to populate the Equitable Impact Sensitive Tool (EQUIST) framework with all necessary data and conduct the first implementation of EQUIST in studying the cost-effectiveness of community case management of childhood pneumonia in 5 low- and middle-income countries in relation to equity impact. Wealth quintile-specific data were gathered or modelled for all contributory determinants of the EQUIST framework, namely: under-five mortality rate, cost of intervention, intervention effectiveness, current coverage of intervention, and relative disease distribution. These were then combined statistically to calculate the final outcome of the EQUIST model for community case management of childhood pneumonia: US$ per life saved, under several different approaches to scaling up. The current 'mainstream' approach to scaling up interventions is never the most cost-effective. Community case management appears to strongly support an 'equity-promoting' approach to scaling up, displaying the highest levels of cost-effectiveness in interventions targeted at the poorest quintile of each study country, although absolute cost differences vary by context. The relationship between cost-effectiveness and equity impact is complex, with many determinants to consider. One important way to increase intervention cost-effectiveness in poorer quintiles is to improve the efficiency and quality of delivery. More data are needed in all areas to increase the accuracy of EQUIST-based estimates.
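    The outcome metric "US$ per life saved" is an incremental cost divided by incremental deaths averted from the determinants the abstract lists. The formula and all numbers below are a hypothetical illustration of that structure, not the study's actual EQUIST model.

    ```python
    # Hypothetical sketch of a US$-per-life-saved calculation from EQUIST-style
    # determinants. The cost proxy and every number here are illustrative
    # assumptions, not the study's model.

    def cost_per_life_saved(u5_deaths, disease_share, effectiveness,
                            coverage_now, coverage_target, unit_cost):
        """Incremental cost divided by incremental deaths averted."""
        extra_coverage = coverage_target - coverage_now
        deaths_averted = u5_deaths * disease_share * effectiveness * extra_coverage
        incremental_cost = unit_cost * extra_coverage * u5_deaths  # crude cost proxy
        return incremental_cost / deaths_averted

    # Targeting a poorest quintile with high mortality and low current coverage:
    print(cost_per_life_saved(u5_deaths=10000, disease_share=0.15,
                              effectiveness=0.7, coverage_now=0.2,
                              coverage_target=0.5, unit_cost=5.0))
    ```

    Under this toy structure the ratio reduces to unit cost divided by (disease share × effectiveness), which is one way to see why delivery efficiency and quality dominate cost-effectiveness in poorer quintiles.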

  2. Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility

    PubMed Central

    Woolley-Meza, Olivia; Grady, Daniel; Thiemann, Christian; Bagrow, James P.; Brockmann, Dirk

    2013-01-01

    Large-scale disasters that interfere with globalized socio-technical infrastructure, such as mobility and transportation networks, trigger high socio-economic costs. Although the origin of such events is often geographically confined, their impact reverberates through entire networks in ways that are poorly understood, difficult to assess, and even more difficult to predict. We investigate how the eruption of volcano Eyjafjallajökull, the September 11th terrorist attacks, and geographical disruptions in general interfere with worldwide mobility. To do this we track changes in effective distance in the worldwide air transportation network from the perspective of individual airports. We find that universal features exist across these events: airport susceptibilities to regional disruptions follow similar, strongly heterogeneous distributions that lack a scale. On the other hand, airports are more uniformly susceptible to attacks that target the most important hubs in the network, exhibiting a well-defined scale. The statistical behavior of susceptibility can be characterized by a single scaling exponent. Using scaling arguments that capture the interplay between individual airport characteristics and the structural properties of routes we can recover the exponent for all types of disruption. We find that the same mechanisms responsible for efficient passenger flow may also keep the system in a vulnerable state. Our approach can be applied to understand the impact of large, correlated disruptions in financial systems, ecosystems and other systems with a complex interaction structure between heterogeneous components. PMID:23950904

  3. Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Maness, Michael; Dykes, Katherine

    Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation and maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool, to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator.
Preliminary analysis of levelized costs of energy indicates that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents to building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident; yet variability in magnet prices and solutions to address reliability issues associated with gearing and brushes could change this outlook. This suggests the need to pursue fundamentally new innovations in generator designs that avoid high capital costs while still delivering reliable performance.

  4. Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Maness, Michael; Dykes, Katherine

    Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation and maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool, to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator.
Preliminary analysis of levelized costs of energy indicates that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents to building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident; yet variability in magnet prices and solutions to address reliability issues associated with gearing and brushes could change this outlook. This suggests the need to pursue fundamentally new innovations in generator designs that avoid high capital costs while still delivering reliable performance.

  5. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently, with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and similarity computation of images. However, most existing methods still suffer from the expensive training required for large-scale binary code learning. To address this issue, we propose a sub-selection-based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques, including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
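    The storage/efficiency argument can be made concrete with a minimal hashing sketch: binarize sign patterns of random projections into integer codes and compare images by Hamming distance. This illustrates the general idea of compact binary codes (with the projections standing in for where a sub-selective method would train on a subset of rows), not the paper's actual algorithm.

    ```python
    # Minimal binary-code sketch: random hyperplane projections, sign
    # binarization, Hamming-distance comparison. Illustrative only; a
    # sub-selective method would fit the projections from a data subset.
    import random

    random.seed(0)
    DIM, BITS = 8, 16
    planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

    def encode(vec):
        """Pack the sign pattern of BITS projections into one integer code."""
        code = 0
        for i, w in enumerate(planes):
            if sum(a * b for a, b in zip(vec, w)) >= 0:
                code |= 1 << i
        return code

    def hamming(a, b):
        return bin(a ^ b).count("1")

    x = [0.5] * DIM
    print(hamming(encode(x), encode(x)))                # 0: identical vectors agree
    print(hamming(encode(x), encode([-v for v in x])))  # 16: negation flips every bit
    ```

    Each code is a single machine word, so a million images fit in a few megabytes and distance computation is one XOR plus a popcount.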

  6. Integration and Improvement of Geophysical Root Biomass Measurements for Determining Carbon Credits

    NASA Astrophysics Data System (ADS)

    Boitet, J. I.

    2013-12-01

    Carbon trading schemes fundamentally rely on accurate subsurface carbon quantification in order for governing bodies to grant carbon credits inclusive of root biomass (What is Carbon Credit, 2013). Root biomass makes up a large fraction of subsurface carbon and is difficult, labor-intensive, and costly to measure. This paper stitches together the latest geophysical root measurement techniques into site-dependent recommendations for technique combinations and modifications that maximize large-scale root biomass measurement accuracy and efficiency. "Accuracy" is maximized when measured root biomass is closest to actual root biomass. "Efficiency" is maximized when the time, labor, and cost of measurement are minimized. Several combinations have emerged which satisfy both criteria under different site conditions. Use of ground-penetrating radar (GPR) and/or electrical resistivity tomography (ERT) allows large tracts of land to be surveyed under appropriate conditions. Among other characteristics, GPR does best at detecting coarse roots in dry soil. ERT does best at detecting roots in moist soils, but is especially limited by electrode configuration (Mancuso, 2012). Integration of these two technologies into a baseline protocol based on site-specific characteristics, especially soil moisture and plant species heterogeneity, should, in theory, drastically increase the efficiency and accuracy of root biomass measurements. Modifications of current measurement protocols using these existing techniques should also, in theory, lead to drastic improvements in both accuracy and efficiency. These modifications, such as efficient 3D imaging by adding an identical electrode array perpendicular to the first array used in the Pulled Array Continuous Electrical Profiling (PACEP) technique for ERT, should allow for more widespread application of these techniques for understanding root biomass.
Where whole-site measurement is not feasible due to financial, equipment, or physical limitations, measurements from randomly selected plots must be assumed representative of the entire system and scaled up. This scaling introduces error roughly inversely proportional to the number and size of the plots measured. References: Mancuso, S. (2012). Measuring Roots: An Updated Approach. Springer. What is Carbon Credit (2013). Retrieved July 20, 2013, from http://carbontradexchange.com/knowledge/what-is-carbon-credit

  7. Local curvature entropy-based 3D terrain representation using a comprehensive Quadtree

    NASA Astrophysics Data System (ADS)

    Chen, Qiyu; Liu, Gang; Ma, Xiaogang; Mariethoz, Gregoire; He, Zhenwen; Tian, Yiping; Weng, Zhengping

    2018-05-01

    Large-scale 3D digital terrain modeling is a crucial part of many real-time applications in geoinformatics. In recent years, improved speed and precision in spatial data collection have made raw terrain data larger and more complex, which poses challenges for data management, visualization, and analysis. In this work, we present an effective and comprehensive 3D terrain representation based on local curvature entropy and a dynamic Quadtree. Level-of-detail (LOD) models of significant terrain features were employed to generate hierarchical terrain surfaces. In order to reduce radical changes in grid density between adjacent LODs, the local entropy of terrain curvature was used as the measure for subdividing terrain grid cells. Then, an efficient approach was presented to eliminate cracks among different LODs by directly updating the Quadtree, using an edge-based structure proposed in this work. Furthermore, we utilized a threshold on the local entropy stored in each parent node of the Quadtree to flexibly control the depth of the Quadtree and dynamically schedule large-scale LOD terrain. Several experiments were implemented to test the performance of the proposed method. The results demonstrate that our method can be applied to construct LOD 3D terrain models with good performance in terms of computational cost and the maintenance of terrain features. Our method has already been deployed in a geographic information system (GIS) for practical use, and it is able to support real-time dynamic scheduling of large-scale terrain models easily and efficiently.
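    The subdivision criterion described above can be sketched as: compute the Shannon entropy of binned curvature values inside a grid cell and split the cell only when the entropy exceeds a threshold. The binning scheme and threshold below are illustrative assumptions, not the paper's parameters.

    ```python
    # Sketch of an entropy-based Quadtree subdivision test: high entropy of the
    # curvature histogram inside a cell signals varied terrain worth refining.
    # Bin count and threshold are illustrative assumptions.
    import math
    from collections import Counter

    def curvature_entropy(curvatures, n_bins=8):
        """Shannon entropy (bits) of a histogram of curvature samples."""
        lo, hi = min(curvatures), max(curvatures)
        width = (hi - lo) / n_bins or 1.0  # degenerate flat cell -> one bin
        counts = Counter(min(int((c - lo) / width), n_bins - 1) for c in curvatures)
        total = sum(counts.values())
        return sum((n / total) * math.log2(total / n) for n in counts.values())

    def should_subdivide(curvatures, threshold=1.5):
        return curvature_entropy(curvatures) > threshold

    print(curvature_entropy([0.0] * 16))                     # 0.0: flat cell, keep coarse
    print(should_subdivide([0.0, 0.3, 0.7, 1.0, 0.1, 0.9]))  # True: varied curvature
    ```

    Storing this entropy in each parent node, as the paper does, lets a threshold sweep control tree depth without recomputing the histograms.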

  8. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which has shown significant potential for the assimilation of datasets that differ in spatial resolution and in their mutual relationships. However, such applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the method's inherent assumption that the considered datasets are independent is generally regarded as too strong for sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant-spiral and superblock search, which targets run-time savings on large grids and adds flexibility in the selection of neighboring points by sampling equally in each direction and treating hard data and previously simulated points separately. The second feature is a constant simulation path, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow parameters, such as the log-linear weights or the type of simulation path, to be adapted at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential to a linear dependence on grid size, as each neighbor search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias to the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weights at each scale considered in our multi-grid approach allows for reproducing the variogram, the histogram, and the spatial trend of the underlying data.
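    The log-linear pooling step described in this record can be sketched as follows. This is a generic illustration, not the authors' implementation; the function name, the facies-probability vectors, and the equal weights are all assumed for the example.

    ```python
    import numpy as np

    def log_linear_pool(pdfs, weights):
        """Combine conditional pdfs (rows of probability vectors) by
        log-linear pooling: p(x) proportional to prod_i p_i(x)**w_i.
        The weights attribute more or less influence to each data source."""
        pdfs = np.asarray(pdfs, dtype=float)
        weights = np.asarray(weights, dtype=float)
        log_pool = weights @ np.log(pdfs + 1e-300)  # weighted sum of log-probabilities
        pooled = np.exp(log_pool - log_pool.max())  # subtract max for numerical stability
        return pooled / pooled.sum()                # renormalize to a valid pdf

    # Hypothetical example: two data sources expressing beliefs over three facies classes.
    p_geophysics = [0.2, 0.5, 0.3]  # soft, lower-resolution information
    p_hard_data  = [0.1, 0.1, 0.8]  # conditioning from nearby hard data
    pooled = log_linear_pool([p_geophysics, p_hard_data], [0.5, 0.5])
    ```

    Raising each weight toward 1 (and the other toward 0) recovers the corresponding single-source pdf, which is how the multi-grid scheme can adapt the relative influence of the datasets at each scale.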

  9. Metabolic engineering of biosynthetic pathway for production of renewable biofuels.

    PubMed

    Singh, Vijai; Mani, Indra; Chaudhary, Dharmendra Kumar; Dhar, Pawan Kumar

    2014-02-01

    Metabolic engineering is an important area of research that involves editing genetic networks so that cells overproduce a desired substance. Using a combination of genetic, metabolic, and modeling methods, useful substances have been synthesized in the past at industrial scale and in a cost-effective manner. Currently, metabolic engineering is being used to produce sufficient, economical, and eco-friendly biofuels. In the recent past, a number of efforts have been made towards engineering biosynthetic pathways for large-scale and efficient production of biofuels from biomass. Given the adoption of metabolic engineering approaches by the biofuel industry, this paper reviews various approaches towards the production and enhancement of renewable biofuels such as ethanol, butanol, isopropanol, hydrogen, and biodiesel. We have also identified specific areas where more work needs to be done in the future.

  10. Managed care and the scale efficiency of US hospitals.

    PubMed

    Brown, H Shelton; Pagán, José A

    2006-12-01

    Managed care penetration has been partly responsible for slowing down increases in health care costs in recent years. This study uses a 1992-1996 Health Care Utilization Project sample of hospitals to analyze the relationship between managed care penetration in local insurance markets and hospital scale efficiency. After controlling for hospital and market area variables, we find that managed care insurance, particularly the preferred provider type, is associated with increases in hospital scale efficiency in tertiary cases. The results presented here are consistent with the view that managed care can lead to reductions in health cost inflation by controlling the diffusion of technology via improvements in the scale efficiency of hospitals.

  11. Flow cytometry for enrichment and titration in massively parallel DNA sequencing

    PubMed Central

    Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim

    2009-01-01

    Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for costly pilot sequencing of samples during titration, is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748

  12. Hydrothermal route to VO2 (B) nanorods: controlled synthesis and characterization

    NASA Astrophysics Data System (ADS)

    Song, Shaokun; Huang, Qiwei; Zhu, Wanting

    2017-10-01

    One-dimensional vanadium dioxides have attracted intense attention owing to their distinctive structure and novel applications in catalysis, high-energy lithium-ion batteries, chemical sensors/actuators, and electrochemical devices. In this paper, large-scale VO2 (B) nanorods have been successfully synthesized via a versatile and environmentally friendly hydrothermal strategy using V2O5 as vanadium source and carbohydrates/alcohols as reductant. The obtained samples are characterized by XRD, FT-IR, TEM, and XPS techniques to investigate the effects of chemical parameters such as reductants, temperature, and time of synthesis on the structure and morphology of products. Results show that pure B-phase VO2 with homogeneous nanorod-like morphology can be prepared easily at 180 °C for 3 days with glycerol as reductant. Typically, the nanorod-like products are 0.5-1 μm long and 50 nm wide. Furthermore, it is also confirmed that the products consist of VO2, corresponding to the B phase. More importantly, this novel approach is efficient and free of any harmful solvents and surfactants. Therefore, this efficient, green, and cost-saving route will have great potential in the large-scale fabrication of 1D VO2 (B) nanorods from economic and environmental points of view.

  13. Diagnostic performance of an indirect enzyme-linked immunosorbent assay (ELISA) to detect bovine leukemia virus antibodies in bulk-tank milk samples.

    PubMed

    Nekouei, Omid; Durocher, Jean; Keefe, Greg

    2016-07-01

    This study assessed the diagnostic performance of a commercial ELISA for detecting bovine leukemia virus antibodies in bulk-tank milk samples from eastern Canada. Sensitivity and specificity of the test were estimated at 97.2% and 100%, respectively. The test was recommended as a cost-efficient tool for large-scale screening programs.

  14. Diagnostic performance of an indirect enzyme-linked immunosorbent assay (ELISA) to detect bovine leukemia virus antibodies in bulk-tank milk samples

    PubMed Central

    Nekouei, Omid; Durocher, Jean; Keefe, Greg

    2016-01-01

    This study assessed the diagnostic performance of a commercial ELISA for detecting bovine leukemia virus antibodies in bulk-tank milk samples from eastern Canada. Sensitivity and specificity of the test were estimated at 97.2% and 100%, respectively. The test was recommended as a cost-efficient tool for large-scale screening programs. PMID:27429469

  15. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  16. Cost of Operating Central Cancer Registries and Factors That Affect Cost: Findings From an Economic Evaluation of Centers for Disease Control and Prevention National Program of Cancer Registries.

    PubMed

    Tangka, Florence K L; Subramanian, Sujha; Beebe, Maggie Cole; Weir, Hannah K; Trebino, Diana; Babcock, Frances; Ewing, Jean

    2016-01-01

    The Centers for Disease Control and Prevention (CDC) evaluated the economics of the National Program of Cancer Registries to provide the CDC, the registries, and policy makers with the economic evidence base to make optimal decisions about resource allocation. Cancer registry budgets are under increasing threat, and, therefore, systematic assessment of the cost will identify approaches to improve the efficiencies of this vital data collection operation and also justify the funding required to sustain registry operations. To estimate the cost of cancer registry operations and to assess the factors affecting the cost per case reported by National Program of Cancer Registries-funded central cancer registries. We developed a Web-based cost assessment tool to collect 3 years of data (2009-2011) from each National Program of Cancer Registries-funded registry for all actual expenditures for registry activities (including those funded by other sources) and factors affecting registry operations. We used a random-effects regression model to estimate the impact of various factors on cost per cancer case reported. The cost of reporting a cancer case varied across the registries. Central cancer registries that receive high-quality data from reporting sources (as measured by the percentage of records passing automatic edits) and electronic data submissions, and those that collect and report on a large volume of cases had significantly lower cost per case. The volume of cases reported had a large effect, with low-volume registries experiencing much higher cost per case than medium- or high-volume registries. Our results suggest that registries operate with substantial fixed or semivariable costs. Therefore, sharing fixed costs among low-volume contiguous state registries, whenever possible, and centralization of certain processes can result in economies of scale. Approaches to improve quality of data submitted and increasing electronic reporting can also reduce cost.

  17. Cost of Operating Central Cancer Registries and Factors That Affect Cost: Findings From an Economic Evaluation of Centers for Disease Control and Prevention National Program of Cancer Registries

    PubMed Central

    Tangka, Florence K. L.; Subramanian, Sujha; Beebe, Maggie Cole; Weir, Hannah K.; Trebino, Diana; Babcock, Frances; Ewing, Jean

    2016-01-01

    Context The Centers for Disease Control and Prevention evaluated the economics of the National Program of Cancer Registries to provide the Centers for Disease Control and Prevention, the registries, and policy makers with the economic evidence base to make optimal decisions about resource allocation. Cancer registry budgets are under increasing threat, and, therefore, systematic assessment of the cost will identify approaches to improve the efficiencies of this vital data collection operation and also justify the funding required to sustain registry operations. Objectives To estimate the cost of cancer registry operations and to assess the factors affecting the cost per case reported by National Program of Cancer Registries–funded central cancer registries. Methods We developed a Web-based cost assessment tool to collect 3 years of data (2009-2011) from each National Program of Cancer Registries–funded registry for all actual expenditures for registry activities (including those funded by other sources) and factors affecting registry operations. We used a random-effects regression model to estimate the impact of various factors on cost per cancer case reported. Results The cost of reporting a cancer case varied across the registries. Central cancer registries that receive high-quality data from reporting sources (as measured by the percentage of records passing automatic edits) and electronic data submissions, and those that collect and report on a large volume of cases had significantly lower cost per case. The volume of cases reported had a large effect, with low-volume registries experiencing much higher cost per case than medium- or high-volume registries. Conclusions Our results suggest that registries operate with substantial fixed or semivariable costs. Therefore, sharing fixed costs among low-volume contiguous state registries, whenever possible, and centralization of certain processes can result in economies of scale. Approaches to improve quality of data submitted and increasing electronic reporting can also reduce cost. PMID:26642226

  18. Subtype-independent near full-length HIV-1 genome sequencing and assembly to be used in large molecular epidemiological studies and clinical management.

    PubMed

    Grossmann, Sebastian; Nowak, Piotr; Neogi, Ujjwal

    2015-01-01

    HIV-1 near full-length genome (HIV-NFLG) sequencing from plasma is an attractive multidimensional tool to apply in large-scale population-based molecular epidemiological studies. It also enables genotypic resistance testing (GRT) for all drug target sites allowing effective intervention strategies for control and prevention in high-risk population groups. Thus, the main objective of this study was to develop a simplified subtype-independent, cost- and labour-efficient HIV-NFLG protocol that can be used in clinical management as well as in molecular epidemiological studies. Plasma samples (n=30) were obtained from HIV-1B (n=10), HIV-1C (n=10), CRF01_AE (n=5) and CRF01_AG (n=5) infected individuals with minimum viral load >1120 copies/ml. The amplification was performed with two large amplicons of 5.5 kb and 3.7 kb, sequenced with 17 primers to obtain HIV-NFLG. GRT was validated against ViroSeq™ HIV-1 Genotyping System. After excluding four plasma samples with low-quality RNA, a total of 26 samples were attempted. Among them, NFLG was obtained from 24 (92%) samples with the lowest viral load being 3000 copies/ml. High (>99%) concordance was observed between HIV-NFLG and ViroSeq™ when determining the drug resistance mutations (DRMs). The N384I connection mutation was additionally detected by NFLG in two samples. Our high efficiency subtype-independent HIV-NFLG is a simple and promising approach to be used in large-scale molecular epidemiological studies. It will facilitate the understanding of the HIV-1 pandemic population dynamics and outline effective intervention strategies. Furthermore, it can potentially be applicable in clinical management of drug resistance by evaluating DRMs against all available antiretrovirals in a single assay.

  19. Concentrator photovoltaic module architectures with capabilities for capture and conversion of full global solar radiation

    PubMed Central

    Lee, Kyu-Tae; Yao, Yuan; He, Junwen; Fisher, Brent; Sheng, Xing; Lumb, Matthew; Xu, Lu; Anderson, Mikayla A.; Scheiman, David; Han, Seungyong; Kang, Yongseon; Gumus, Abdurrahman; Bahabry, Rabab R.; Lee, Jung Woo; Paik, Ungyu; Bronstein, Noah D.; Alivisatos, A. Paul; Meitl, Matthew; Burroughs, Scott; Hussain, Muhammad Mustafa; Lee, Jeong Chul; Nuzzo, Ralph G.; Rogers, John A.

    2016-01-01

    Emerging classes of concentrator photovoltaic (CPV) modules reach efficiencies that are far greater than those of even the highest performance flat-plate PV technologies, with architectures that have the potential to provide the lowest cost of energy in locations with high direct normal irradiance (DNI). A disadvantage is their inability to effectively use diffuse sunlight, thereby constraining widespread geographic deployment and limiting performance even under the most favorable DNI conditions. This study introduces a module design that integrates capabilities in flat-plate PV directly with the most sophisticated CPV technologies, for capture of both direct and diffuse sunlight, thereby achieving efficiency in PV conversion of the global solar radiation. Specific examples of this scheme exploit commodity silicon (Si) cells integrated with two different CPV module designs, where they capture light that is not efficiently directed by the concentrator optics onto large-scale arrays of miniature multijunction (MJ) solar cells that use advanced III–V semiconductor technologies. In this CPV+ scheme (“+” denotes the addition of diffuse collector), the Si and MJ cells operate independently on indirect and direct solar radiation, respectively. On-sun experimental studies of CPV+ modules at latitudes of 35.9886° N (Durham, NC), 40.1125° N (Bondville, IL), and 38.9072° N (Washington, DC) show improvements in absolute module efficiencies of between 1.02% and 8.45% over values obtained using otherwise similar CPV modules, depending on weather conditions. These concepts have the potential to expand the geographic reach and improve the cost-effectiveness of the highest efficiency forms of PV power generation. PMID:27930331

  20. Concentrator photovoltaic module architectures with capabilities for capture and conversion of full global solar radiation

    DOE PAGES

    Lee, Kyu-Tae; Yao, Yuan; He, Junwen; ...

    2016-12-05

    Emerging classes of concentrator photovoltaic (CPV) modules reach efficiencies that are far greater than those of even the highest performance flat-plate PV technologies, with architectures that have the potential to provide the lowest cost of energy in locations with high direct normal irradiance (DNI). A disadvantage is their inability to effectively use diffuse sunlight, thereby constraining widespread geographic deployment and limiting performance even under the most favorable DNI conditions. This study introduces a module design that integrates capabilities in flat-plate PV directly with the most sophisticated CPV technologies, for capture of both direct and diffuse sunlight, thereby achieving efficiency in PV conversion of the global solar radiation. Specific examples of this scheme exploit commodity silicon (Si) cells integrated with two different CPV module designs, where they capture light that is not efficiently directed by the concentrator optics onto large-scale arrays of miniature multijunction (MJ) solar cells that use advanced III-V semiconductor technologies. In this CPV+ scheme ("+" denotes the addition of diffuse collector), the Si and MJ cells operate independently on indirect and direct solar radiation, respectively. On-sun experimental studies of CPV+ modules at latitudes of 35.9886° N (Durham, NC), 40.1125° N (Bondville, IL), and 38.9072° N (Washington, DC) show improvements in absolute module efficiencies of between 1.02% and 8.45% over values obtained using otherwise similar CPV modules, depending on weather conditions. These concepts have the potential to expand the geographic reach and improve the cost-effectiveness of the highest efficiency forms of PV power generation.

  1. Concentrator photovoltaic module architectures with capabilities for capture and conversion of full global solar radiation

    NASA Astrophysics Data System (ADS)

    Lee, Kyu-Tae; Yao, Yuan; He, Junwen; Fisher, Brent; Sheng, Xing; Lumb, Matthew; Xu, Lu; Anderson, Mikayla A.; Scheiman, David; Han, Seungyong; Kang, Yongseon; Gumus, Abdurrahman; Bahabry, Rabab R.; Lee, Jung Woo; Paik, Ungyu; Bronstein, Noah D.; Alivisatos, A. Paul; Meitl, Matthew; Burroughs, Scott; Mustafa Hussain, Muhammad; Lee, Jeong Chul; Nuzzo, Ralph G.; Rogers, John A.

    2016-12-01

    Emerging classes of concentrator photovoltaic (CPV) modules reach efficiencies that are far greater than those of even the highest performance flat-plate PV technologies, with architectures that have the potential to provide the lowest cost of energy in locations with high direct normal irradiance (DNI). A disadvantage is their inability to effectively use diffuse sunlight, thereby constraining widespread geographic deployment and limiting performance even under the most favorable DNI conditions. This study introduces a module design that integrates capabilities in flat-plate PV directly with the most sophisticated CPV technologies, for capture of both direct and diffuse sunlight, thereby achieving efficiency in PV conversion of the global solar radiation. Specific examples of this scheme exploit commodity silicon (Si) cells integrated with two different CPV module designs, where they capture light that is not efficiently directed by the concentrator optics onto large-scale arrays of miniature multijunction (MJ) solar cells that use advanced III-V semiconductor technologies. In this CPV+ scheme (“+” denotes the addition of diffuse collector), the Si and MJ cells operate independently on indirect and direct solar radiation, respectively. On-sun experimental studies of CPV+ modules at latitudes of 35.9886° N (Durham, NC), 40.1125° N (Bondville, IL), and 38.9072° N (Washington, DC) show improvements in absolute module efficiencies of between 1.02% and 8.45% over values obtained using otherwise similar CPV modules, depending on weather conditions. These concepts have the potential to expand the geographic reach and improve the cost-effectiveness of the highest efficiency forms of PV power generation.

  2. Concentrator photovoltaic module architectures with capabilities for capture and conversion of full global solar radiation.

    PubMed

    Lee, Kyu-Tae; Yao, Yuan; He, Junwen; Fisher, Brent; Sheng, Xing; Lumb, Matthew; Xu, Lu; Anderson, Mikayla A; Scheiman, David; Han, Seungyong; Kang, Yongseon; Gumus, Abdurrahman; Bahabry, Rabab R; Lee, Jung Woo; Paik, Ungyu; Bronstein, Noah D; Alivisatos, A Paul; Meitl, Matthew; Burroughs, Scott; Hussain, Muhammad Mustafa; Lee, Jeong Chul; Nuzzo, Ralph G; Rogers, John A

    2016-12-20

    Emerging classes of concentrator photovoltaic (CPV) modules reach efficiencies that are far greater than those of even the highest performance flat-plate PV technologies, with architectures that have the potential to provide the lowest cost of energy in locations with high direct normal irradiance (DNI). A disadvantage is their inability to effectively use diffuse sunlight, thereby constraining widespread geographic deployment and limiting performance even under the most favorable DNI conditions. This study introduces a module design that integrates capabilities in flat-plate PV directly with the most sophisticated CPV technologies, for capture of both direct and diffuse sunlight, thereby achieving efficiency in PV conversion of the global solar radiation. Specific examples of this scheme exploit commodity silicon (Si) cells integrated with two different CPV module designs, where they capture light that is not efficiently directed by the concentrator optics onto large-scale arrays of miniature multijunction (MJ) solar cells that use advanced III-V semiconductor technologies. In this CPV + scheme ("+" denotes the addition of diffuse collector), the Si and MJ cells operate independently on indirect and direct solar radiation, respectively. On-sun experimental studies of CPV + modules at latitudes of 35.9886° N (Durham, NC), 40.1125° N (Bondville, IL), and 38.9072° N (Washington, DC) show improvements in absolute module efficiencies of between 1.02% and 8.45% over values obtained using otherwise similar CPV modules, depending on weather conditions. These concepts have the potential to expand the geographic reach and improve the cost-effectiveness of the highest efficiency forms of PV power generation.

  3. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a library for processing spatial queries, and as an integrated software package in Hive.

  4. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a library for processing spatial queries, and as an integrated software package in Hive. PMID:24187650
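    The spatial partitioning with boundary-object handling that these records describe can be sketched in miniature. This is not Hadoop-GIS code: the grid size, the bounding-box representation, and the function names are assumptions for illustration. Objects crossing a partition edge are duplicated into every tile they overlap (the "boundary objects"), tiles are joined independently (the parallel MapReduce step), and duplicate result pairs are then removed (the "amending" step).

    ```python
    def partition(objects, cell=10.0):
        """Assign each axis-aligned box (id, xmin, ymin, xmax, ymax) to every
        grid tile it overlaps; boxes crossing a tile edge are duplicated."""
        tiles = {}
        for oid, xmin, ymin, xmax, ymax in objects:
            for cx in range(int(xmin // cell), int(xmax // cell) + 1):
                for cy in range(int(ymin // cell), int(ymax // cell) + 1):
                    tiles.setdefault((cx, cy), []).append((oid, xmin, ymin, xmax, ymax))
        return tiles

    def intersects(a, b):
        """Axis-aligned bounding-box overlap test."""
        return not (a[3] < b[1] or b[3] < a[1] or a[4] < b[2] or b[4] < a[2])

    def spatial_join(tiles):
        """Join within each tile independently, then de-duplicate pairs
        produced by objects that were replicated across tile boundaries."""
        pairs = set()
        for objs in tiles.values():
            for i in range(len(objs)):
                for j in range(i + 1, len(objs)):
                    if intersects(objs[i], objs[j]):
                        pairs.add(tuple(sorted((objs[i][0], objs[j][0]))))
        return pairs
    ```

    In the real system the per-tile joins run as parallel MapReduce tasks and use spatial indexes rather than the nested loop shown here; the sketch only illustrates why boundary-object duplication requires a de-duplication pass.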

  5. Techno-economic analysis of the industrial production of a low-cost enzyme using E. coli: the case of recombinant β-glucosidase.

    PubMed

    Ferreira, Rafael da Gama; Azzoni, Adriano Rodrigues; Freitas, Sindelia

    2018-01-01

    The enzymatic conversion of lignocellulosic biomass into fermentable sugars is a promising approach for producing renewable fuels and chemicals. However, the cost and efficiency of the fungal enzyme cocktails that are normally employed in these processes remain a significant bottleneck. A potential route to increase hydrolysis yields and thereby reduce the hydrolysis costs would be to supplement the fungal enzymes with the enzymatic activities they lack, such as β-glucosidase. In this context, it is not clear from the literature whether recombinant E. coli could be a cost-effective platform for the production of some of these low-value enzymes, especially in the case of on-site production. Here, we present a conceptual design and techno-economic evaluation of the production of a low-cost industrial enzyme using recombinant E. coli. In a simulated baseline scenario for β-glucosidase demand in a hypothetical second-generation ethanol (2G) plant in Brazil, we found that the production cost (316 US$/kg) was higher than what is commonly assumed in the literature for fungal enzymes, owing especially to facility-dependent costs (45%), consumables (23%), and raw materials (25%). Sensitivity analyses of process scale, inoculation volume, and volumetric productivity indicated that optimized conditions may promote a dramatic reduction in enzyme cost and also revealed the most relevant factors affecting production costs. Despite the considerable technical and economic uncertainties that surround 2G ethanol and the large-scale production of low-cost recombinant enzymes, this work sheds light on some relevant questions and supports future studies in this field. In particular, we conclude that process optimization, on many fronts, may strongly reduce the costs of E. coli recombinant enzymes, in the context of tailor-made enzymatic cocktails for 2G ethanol production.

  6. Development of a cost-effective production process for Halomonas levan.

    PubMed

    Erkorkmaz, Burak Adnan; Kırtel, Onur; Ateş Duru, Özlem; Toksoy Öner, Ebru

    2018-05-17

    Levan polysaccharide is an industrially important natural polymer with unique properties and diverse high-value applications. However, current bottlenecks associated with its large-scale production need to be overcome by innovative approaches leading to economically viable processes. Among many mesophilic levan producers, halophilic Halomonas smyrnensis cultures hold distinctive industrial potential; in this study, for the first time, the advantage of halophilicity was exploited and conditions for non-sterile levan production were optimized. Levan productivity of Halomonas cultures was compared in media containing industrial sucrose from sugar beet and a food-industry by-product syrup, a total of ten sea, lake, and rock salt samples from four natural salterns, and three different industrial-grade boron compounds, and the most suitable low-cost substitutes for sucrose, salt, and boron were identified. Then, the effects of pH control, non-sterile conditions, and different bioreactor modes (batch and fed-batch) were investigated. The development of a cost-effective production process was achieved with the highest yield (18.06 g/L) reported so far for this microbial system, as well as the highest theoretical bioconversion efficiency ever reported for levan-producing suspension cultures. Structural integrity and biocompatibility of the final product were also verified in vitro.

  7. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of large-scale reservoir systems is time-consuming due to their intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm II (NSGAII), WNSGAII, and WMO-ASMO is conducted for the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII in the median annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median ecological index, improved by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance, and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method proves more efficient and provides a better Pareto frontier.
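    The non-dominated sorting at the heart of the NSGAII-family algorithms compared in this record can be illustrated with a minimal first-front extraction. This is a generic textbook sketch, not the authors' code; it uses a minimization convention for both objectives (a maximized objective such as power generation would be negated first).

    ```python
    def pareto_front(points):
        """Return the non-dominated set of objective vectors (minimization).
        A point is dominated if some other point is no worse in every
        objective and strictly better in at least one."""
        front = []
        for i, p in enumerate(points):
            dominated = any(
                all(q[k] <= p[k] for k in range(len(p))) and
                any(q[k] < p[k] for k in range(len(p)))
                for j, q in enumerate(points) if j != i)
            if not dominated:
                front.append(p)
        return front
    ```

    NSGAII repeats this ranking on successive fronts and breaks ties with a crowding distance; the weighted variant described in the abstract modifies that crowding distance, and the surrogate model reduces how many expensive reservoir simulations the ranking must consume.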

  8. Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwabe, Ulrich; Fishman, Oleg

    2015-03-20

    The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high-voltage, distributed harvesting approach. This system promises to increase the energy yield of large-scale PV systems by reducing losses and recovering yield from mismatched arrays, and to lower overall system costs through highly cost-effective conversion and the BOS cost reductions enabled by higher-voltage operation.

  9. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update consume excessive memory and computational resources. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can specify how much data to retain based on correlation scales, since entries beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated exactly. The off-diagonal terms are recalculated from shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient (e.g., 0.95) every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. The new scheme is first tested on 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the scheme.
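    The truncate-store-shrink cycle described above can be sketched as follows (a toy 1-D illustration with hypothetical parameter values, assuming `scipy` is available; a production version would assemble only the entries inside the cutoff rather than forming the dense matrix first):

```python
import numpy as np
from scipy.sparse import csc_matrix

def truncated_exp_covariance(coords, sigma2, corr_len, cutoff_lengths=3.0):
    """Exponential covariance C_ij = sigma2 * exp(-d_ij / corr_len),
    with entries beyond cutoff_lengths * corr_len dropped, stored in CSC."""
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise 1-D distances
    dense = sigma2 * np.exp(-d / corr_len)
    dense[d > cutoff_lengths * corr_len] = 0.0      # truncate weak correlations
    return csc_matrix(dense)

# One iteration of the proposed update: the diagonal (sigma2) is kept,
# and the off-diagonals are rebuilt with the correlation length shortened
# by a fixed coefficient (0.95, as suggested in the abstract).
coords = np.arange(50, dtype=float)                 # 50 points on a 1-D grid
C = truncated_exp_covariance(coords, sigma2=1.0, corr_len=5.0)
C_next = truncated_exp_covariance(coords, sigma2=1.0, corr_len=0.95 * 5.0)
```

    Shrinking the correlation length narrows the truncation band, so the stored matrix gets sparser as iterations proceed, mirroring the shrinking uncertainty.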

  10. Enabling iron pyrite (FeS2) and related ternary pyrite compounds for high-performance solar energy applications

    NASA Astrophysics Data System (ADS)

    Caban Acevedo, Miguel

    The success of solar energy technologies depends not only on highly efficient solar-to-electrical energy conversion, charge storage, or chemical fuel production, but also on dramatically reduced cost, to meet the future terawatt energy challenges we face. The enormous scale involved in the development of impactful solar energy technologies demands abundant and inexpensive materials, as well as energy-efficient and cost-effective processes. As a result, the investigation of semiconductor, catalyst, and electrode materials made of earth-abundant and sustainable elements may prove to be of significant importance for the long-term adoption of solar energy technologies on a larger scale. Among earth-abundant semiconductors, iron pyrite (cubic FeS2) has been considered the most promising solar energy absorber with the potential to achieve terawatt-scale deployment. Despite extensive synthetic progress and device efforts, the solar conversion efficiency of iron pyrite has remained below 3% since the 1990s, primarily due to a low open-circuit voltage (Voc). The low photovoltage of iron pyrite has puzzled scientists for decades and limited the development of cost-effective solar energy technologies based on this otherwise promising semiconductor. Here I report a comprehensive investigation of the syntheses and properties of iron pyrite materials, which reveals that the Voc of iron pyrite is limited by the ionization of a high density of intrinsic bulk defect states, despite the presence of high-density surface states and strong surface Fermi level pinning. Contrary to popular belief, bulk defects, most likely caused by intrinsic sulfur vacancies, must be controlled in order to enable this earth-abundant semiconductor for cost-effective and sustainable solar energy conversion. Lastly, the investigation of iron pyrite presented here led to the discovery of ternary pyrite-type cobalt phosphosulfide (CoPS) as a highly efficient earth-abundant catalyst material for electrochemical and solar-driven hydrogen production.

  11. Patterning technology for solution-processed organic crystal field-effect transistors

    PubMed Central

    Li, Yun; Sun, Huabin; Shi, Yi; Tsukagoshi, Kazuhito

    2014-01-01

    Organic field-effect transistors (OFETs) are fundamental building blocks for various state-of-the-art electronic devices. Solution-processed organic crystals are attractive materials for these applications because they facilitate large-scale, low-cost fabrication of high-performance devices. Patterning organic crystal transistors into well-defined geometric features is necessary to develop these crystals into practical semiconductors. This review provides an update on recent developments in patterning technology for solution-processed organic crystals and their applications in field-effect transistors. Typical demonstrations are discussed and examined. In particular, our latest research progress on the spin-coating technique from mixture solutions is presented as a promising method to efficiently produce large organic semiconducting crystals on various substrates for high-performance OFETs. This solution-based process also offers other advantages, such as phase separation for self-assembled interfaces via one-step spin-coating, self-flattening of rough interfaces, and in situ purification that eliminates impurity influences. Furthermore, recommendations for future perspectives are presented, and key issues for further development are discussed. PMID:27877656

  12. Performance Study of Salt Cavern Air Storage Based Non-Supplementary Fired Compressed Air Energy Storage System

    NASA Astrophysics Data System (ADS)

    Chen, Xiaotao; Song, Jie; Liang, Lixiao; Si, Yang; Wang, Le; Xue, Xiaodai

    2017-10-01

    Large-scale energy storage systems (ESS) play an important role in the planning and operation of the smart grid and the energy internet. Compressed air energy storage (CAES) is one of the most promising large-scale energy storage techniques. However, the high cost of compressed air storage and the low capacity remain to be solved. This paper proposes a novel non-supplementary fired compressed air energy storage (NSF-CAES) system based on salt cavern air storage to address the issues of air storage and CAES efficiency. Operating mechanisms of the proposed NSF-CAES are analysed on thermodynamic principles. Key factors that affect the system storage efficiency are thoroughly explored. The energy storage efficiency of the proposed NSF-CAES system can be improved by reducing the maximum working pressure of the salt cavern and raising the turbine inlet air pressure. Simulation results show that the electric-to-electric conversion efficiency of the proposed NSF-CAES can reach 63.29% with a maximum salt cavern working pressure of 9.5 MPa and a turbine inlet air pressure of 9 MPa, which is higher than that of current commercial CAES plants.

  13. Cost-utility of First-line Disease-modifying Treatments for Relapsing-Remitting Multiple Sclerosis.

    PubMed

    Soini, Erkki; Joutseno, Jaana; Sumelahti, Marja-Liisa

    2017-03-01

    This study evaluated the cost-effectiveness of first-line treatments of relapsing-remitting multiple sclerosis (RRMS) (dimethyl fumarate [DMF] 240 mg PO BID, teriflunomide 14 mg once daily, glatiramer acetate 20 mg SC once daily, interferon [IFN]-β1a 44 µg TIW, IFN-β1b 250 µg EOD, and IFN-β1a 30 µg IM QW) and best supportive care (BSC) in the health care payer setting in Finland. The primary outcome was the modeled incremental cost-effectiveness ratio (ICER; €/quality-adjusted life-year [QALY] gained, 3%/y discounting). Markov cohort modeling with a 15-year time horizon was employed. During each 1-year modeling cycle, patients either maintained the Expanded Disability Status Scale (EDSS) score or experienced progression, developed secondary progressive MS (SPMS) or showed EDSS progression in SPMS, experienced relapse with/without hospitalization, experienced an adverse event (AE), or died. Patients' characteristics, RRMS progression probabilities, and standardized mortality ratios were derived from a registry of patients with MS in Finland. A mixed-treatment comparison (MTC) informed the treatment effects. Finnish EuroQol Five-Dimensional Questionnaire, Three-Level Version quality-of-life and direct-cost estimates associated with EDSS scores, relapses, and AEs were applied. Four approaches were used to assess the outcomes: the cost-effectiveness plane and efficiency frontiers (relative value of efficient treatments); the cost-effectiveness acceptability frontier, which identified the optimal treatment for maximizing net benefit; Bayesian treatment ranking (BTR); and an impact investment assessment (IIA; a cost-benefit assessment), which increased the clinical interpretation and appeal of modeled outcomes in terms of absolute benefit gained with a fixed drug-related budget. Robustness of results was tested extensively with sensitivity analyses. Based on the modeled results, teriflunomide was less costly, with greater QALYs, versus glatiramer acetate and the IFNs. 
Teriflunomide had the lowest ICER (€24,081/QALY gained) versus BSC. DMF brought marginally more QALYs (0.089) than did teriflunomide, with greater costs over the 15 years; the ICER for DMF versus teriflunomide was €75,431/QALY gained. Teriflunomide had >50% cost-effectiveness probability at willingness-to-pay thresholds below €77,416/QALY gained. According to BTR, teriflunomide was first-best among the disease-modifying therapies at willingness-to-pay thresholds of up to €68,000/QALY gained. In the IIA, teriflunomide was associated with the longest incremental quality-adjusted survival and time without cane use. The primary outcome results were generally robust in the sensitivity analyses, being sensitive only to large changes in analysis perspective or in the MTC. Based on the analyses, teriflunomide was cost-effective versus BSC or DMF at common threshold values, was dominant versus the other first-line RRMS treatments, and provided the greatest impact on investment. Teriflunomide is potentially the most cost-effective option among first-line treatments of RRMS in Finland. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
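    The primary outcome above is a simple ratio over discounted streams. A minimal sketch of the arithmetic (the incremental cost below is a hypothetical figure chosen only to illustrate the calculation; the 0.089 incremental QALYs and 3%/y discount rate are from the abstract):

```python
def discounted_total(values_per_year, rate=0.03):
    """Present value of a yearly stream of costs or QALYs, discounted at
    `rate` per year, with the first value accruing at the end of year 1."""
    return sum(v / (1.0 + rate) ** (t + 1) for t, v in enumerate(values_per_year))

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio, in currency units per QALY gained."""
    return delta_cost / delta_qaly

# DMF vs. teriflunomide: 0.089 incremental QALYs over the horizon; the
# incremental cost of 6713.4 EUR is hypothetical, picked so the ratio
# lands near the reported ~75,431 EUR/QALY.
example = icer(delta_cost=6713.4, delta_qaly=0.089)
```

    A treatment is judged cost-effective when its ICER against the next-best comparator falls below the willingness-to-pay threshold.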

  14. Agricultural Inputs and Efficiency in Tanzania Small Scale Agriculture: A Comparative Analysis of Tobacco and Selected Food Crops

    PubMed Central

    Kidane, A.; Hepelwa, A.; Tingum, E.; Hu, T.W.

    2016-01-01

    In this study an attempt is made to compare the efficiency of tobacco leaf production with that of three other crops – maize, groundnut and rice – commonly grown by Tanzanian small-scale farmers. The paper contrasts the prevalence of tobacco use in Africa with that of the developed world; while there has been a decline in the latter, there appears to be an increase in the former. The economic benefits and costs of tobacco production and consumption in Tanzania are also compared. Using nationally representative large-scale data, we observed that the modern agricultural inputs allotted to tobacco were much greater than those allotted to maize, groundnut and rice. Using a frontier production approach, the study shows that the efficiency of tobacco, maize, groundnut and rice production was 75.3%, 68.5%, 64.5% and 46.5%, respectively. Despite the massive agricultural inputs allotted to it, tobacco production is only 75.3% efficient: tobacco farmers could have produced the same amount by utilizing only 75.3% of the realized inputs. The relatively high efficiency of tobacco can only be explained by the large-scale allocation of modern agricultural inputs such as fertilizer, better seeds, credit facilities and easy access to markets. The situation would likely be reversed if more inputs were directed to basic food crops such as maize, rice and groundnut. Tanzania's policy of food security and poverty alleviation can only be achieved by allocating more modern inputs to basic necessities such as maize and rice. PMID:28124032

  15. Can rove beetles (Staphylinidae) be excluded in studies focusing on saproxylic beetles in central European beech forests?

    PubMed

    Parmain, G; Bouget, C; Müller, J; Horak, J; Gossner, M M; Lachat, T; Isacsson, G

    2015-02-01

    Monitoring saproxylic beetle diversity, though challenging, can help identify relevant conservation sites or key drivers of forest biodiversity, and can support assessment of the impact of forestry practices on biodiversity. Unfortunately, monitoring species assemblages is costly, mainly due to the time spent on identification. Excluding families that are rich in specimens and species but difficult to identify is a frequent procedure in ecological entomology to reduce identification costs. The Staphylinidae (rove beetles) are both one of the most frequently excluded and one of the most species-rich saproxylic beetle families. Using a large-scale beetle and environmental dataset from 238 beech stands across Europe, we evaluated the effects of staphylinid exclusion on the results of ecological forest studies. Simplified, staphylinid-excluded assemblages were found to be relevant surrogates for whole assemblages. The species richness and composition of saproxylic beetle assemblages both with and without staphylinids responded congruently to landscape, climatic and stand gradients, even when the assemblages included a high proportion of staphylinid species. At both local and regional scales, the species richness and species composition of staphylinid-included and staphylinid-excluded assemblages were highly positively correlated. Ranking sites according to their biodiversity level, with species richness either including or excluding Staphylinidae, also gave congruent results. From our results, species assemblages omitting staphylinids can be taken as efficient surrogates for complete assemblages in large-scale biodiversity monitoring studies.

  16. Genome Partitioner: A web tool for multi-level partitioning of large-scale DNA constructs for synthetic biology applications

    PubMed Central

    Del Medico, Luca; Christen, Heinz; Christen, Beat

    2017-01-01

    Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given this complexity, computer-assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software, implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult-to-synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner in reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable for translating DNA designs into ready-to-order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner. PMID:28531174
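    The core idea of fragmenting a large design into synthesizable blocks whose shared overlaps guide re-assembly can be sketched as follows (a simplified one-level illustration; block and overlap sizes are arbitrary, not the tool's actual parameters or algorithm):

```python
def partition(seq, block_len, overlap):
    """Split `seq` into blocks of at most `block_len` characters, each
    consecutive pair sharing `overlap` characters for overlap-based assembly."""
    step = block_len - overlap
    return [seq[i:i + block_len] for i in range(0, len(seq), step)]

def reassemble(blocks, overlap):
    """Invert `partition` by trimming the shared overlap from each later block."""
    return blocks[0] + "".join(b[overlap:] for b in blocks[1:])

design = ("ACGT" * 300)[:1000]        # toy 1 kb "design"
blocks = partition(design, block_len=250, overlap=40)
rebuilt = reassemble(blocks, overlap=40)   # recovers `design` exactly
```

    A multi-level partitioner applies the same decomposition recursively (e.g. genome into segments, segments into subblocks), with overlap junctions chosen to be unique so assembly is unambiguous.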

  17. Incentives for vertical integration in healthcare: the effect of reimbursement systems.

    PubMed

    Byrne, M M; Ashton, C M

    1999-01-01

    In the United States, many healthcare organizations are being transformed into large integrated delivery systems, even though currently available empirical evidence does not provide strong or unequivocal support for or against vertical integration. Unfortunately, the manager cannot delay organizational changes until further research has been completed, especially when further research is not likely to reveal a single, correct solution for the diverse healthcare systems in existence. Managers must therefore carefully evaluate the expected effects of integration on their individual organizations. Vertical integration may be appropriate if conditions facing the healthcare organization provide opportunities for efficiency gains through reorganization strategies. Managers must consider (1) how changes in the healthcare market have affected the dynamics of production efficiency and transaction costs; (2) the likelihood that integration strategies will achieve increases in efficiency or reductions in transaction costs; and (3) how vertical integration will affect other costs, and whether the benefits gained will outweigh additional costs and efficiency losses. This article presents reimbursement systems as an example of how recent changes in the industry may have changed the dynamics and efficiency of production. Evaluation of the effects of vertical integration should allow for reasonable adjustment time, but obviously unsuccessful strategies should not be followed or maintained.

  18. Top-k similar graph matching using TraM in biological networks.

    PubMed

    Amin, Mohammad Shafkat; Finley, Russell L; Jamil, Hasan M

    2012-01-01

    Many emerging database applications entail sophisticated graph-based query manipulation, predominantly evident in large-scale scientific applications. To access the information embedded in graphs, efficient graph matching tools and algorithms have become of prime importance. Although the prohibitively expensive time complexity associated with exact subgraph isomorphism techniques has limited their efficacy in the application domain, approximate yet efficient graph matching techniques have received much attention due to their pragmatic applicability. Since public domain databases are noisy and incomplete in nature, inexact graph matching techniques have proven more promising for inferring knowledge from numerous structural data repositories. In this paper, we propose a novel technique called TraM for approximate graph matching that off-loads a significant amount of its processing onto the database, making the approach viable for large graphs. Moreover, the vector space embedding of the graphs and efficient filtration of the search space enable computation of approximate graph similarity at negligible cost. We annotate nodes of the query graphs by means of their global topological properties and compare them with neighborhood-biased segments of the data graph for proper matches. We have conducted experiments on several real data sets and have demonstrated the effectiveness and efficiency of the proposed method.

  19. Highly Efficient Flexible Perovskite Solar Cells with Antireflection and Self-Cleaning Nanostructures.

    PubMed

    Tavakoli, Mohammad Mahdi; Tsui, Kwong-Hoi; Zhang, Qianpeng; He, Jin; Yao, Yan; Li, Dongdong; Fan, Zhiyong

    2015-10-27

    Flexible thin film solar cells have attracted a great deal of attention as mobile power sources and key components for building-integrated photovoltaics, due to their light weight and flexibility, in addition to their compatibility with low-cost roll-to-roll fabrication processes. Among many thin film materials, organometallic perovskites are emerging as highly promising candidates for high-efficiency thin film photovoltaics; however, the performance, scalability, and reliability of flexible perovskite solar cells still leave considerable room for improvement. Herein, we report highly efficient, flexible perovskite solar cells fabricated on ultrathin flexible glasses. In this device structure, the flexible glass substrate is highly transparent and robust, with a low thermal expansion coefficient, and the perovskite thin film was deposited with a thermal evaporation method that showed large-scale uniformity. In addition, a nanocone-array antireflection film was attached to the front side of the glass substrate to improve the optical transmittance and simultaneously achieve a water-repelling effect. The fabricated solar cells showed reasonable bendability, retaining 96% of the initial efficiency after 200 bending cycles, and the power conversion efficiency was improved from 12.06 to 13.14% by the antireflection film, which also demonstrated excellent superhydrophobicity.

  20. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    PubMed

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. Learning-based classifiers achieve state-of-the-art accuracies but have been criticized for computational complexity that grows linearly with the number of classes. Nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, the discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree grows only sublinearly with the number of categories, which is much better than recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than that of recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
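    The D-HKTree attaches discriminative models to tree nodes, but the sublinear lookup comes from the underlying hierarchical k-means index. A minimal, non-discriminative sketch of that index (assumes `numpy`; deterministic toy k-means for illustration only):

```python
import numpy as np

def kmeans(points, k=2, iters=10):
    """Tiny deterministic k-means: the first k points seed the centroids."""
    centers = points[:k].copy()
    for _ in range(iters):
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(0)
    # Final assignment, consistent with the returned centers.
    labels = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    return centers, labels

def build(points, leaf_size=4):
    """Recursively split points with 2-means until leaves are small."""
    if len(points) <= leaf_size:
        return {"leaf": points}
    centers, labels = kmeans(points)
    if (labels == 0).all() or (labels == 1).all():   # degenerate split: stop
        return {"leaf": points}
    return {"centers": centers,
            "children": [build(points[labels == j], leaf_size) for j in range(2)]}

def query(tree, q):
    """Greedy descent to the nearest-centroid branch, linear scan at the leaf.
    Visits O(depth) internal nodes, i.e. sublinear in the stored set size."""
    while "leaf" not in tree:
        j = ((tree["centers"] - q) ** 2).sum(-1).argmin()
        tree = tree["children"][j]
    leaf = tree["leaf"]
    return leaf[((leaf - q) ** 2).sum(-1).argmin()]

data = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
                 [10, 10], [10, 11], [11, 10], [11, 11]], dtype=float)
tree = build(data, leaf_size=2)
nearest = query(tree, np.array([9.6, 9.6]))   # -> array([10., 10.])
```

    The greedy descent makes the search approximate: it can miss the true nearest neighbor near partition boundaries, which is the usual accuracy/cost trade-off of such trees.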

  1. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  2. Electrode architectures for efficient electronic and ionic transport pathways in high power lithium ion batteries

    NASA Astrophysics Data System (ADS)

    Faulkner, Ankita Shah

    As the demand for clean energy sources increases, large investments have supported R&D programs aimed at developing high-power lithium ion batteries for electric vehicle, military, grid storage and space applications. State-of-the-art lithium ion technology cannot meet the power demands of these applications due to high internal resistances in the cell, which mainly comprise ionic and electronic resistance in the electrode and electrolyte. Recently, much attention has been focused on the use of nanoscale lithium ion active materials on the premise that these materials shorten the diffusion length of lithium ions and increase the surface area for electrochemical charge transfer. While nanomaterials have allowed significant improvements in the power density of the cell, they are not a complete solution for commercial batteries. Due to their large surface area, they introduce new challenges such as poor electrode packing density, high electrolyte reactivity, and expensive synthesis procedures. Since greater than 70% of the cost of an electric vehicle is due to the cost of the battery, a cost-efficient battery design is most critical. To address the limitations of nanomaterials, efficient transport pathways must be engineered in the bulk electrode. As part of nanomanufacturing research conducted at the Center for High-rate Nanomanufacturing at Northeastern University, the first aim of the proposed work is to develop electrode architectures that enhance electronic and ionic transport pathways in large- and small-area lithium ion electrodes. These architectures will utilize the unique electronic and mechanical properties of carbon nanotubes to create robust electrode scaffolding that improves electrochemical charge transfer. Using extensive physical and electrochemical characterization, the second aim is to investigate the effect of electrode parameters on electrochemical performance and to evaluate the performance against standard commercial electrodes. These parameters include surface morphology, electrode composition, electrode density, and operating temperature. Finally, the third aim is to investigate the commercial viability of the electrode architecture. This will be accomplished by developing pouch cell prototypes using a high-rate, low-cost scale-up process. Through this work, we aim to realize a commercially viable high-power electrode technology.

  3. Efficiency of U.S. Dialysis Centers: An Updated Examination of Facility Characteristics That Influence Production of Dialysis Treatments

    PubMed Central

    Shreay, Sanatan; Ma, Martin; McCluskey, Jill; Mittelhammer, Ron C; Gitlin, Matthew; Stephens, J Mark

    2014-01-01

    Objective To explore the relative efficiency of dialysis facilities in the United States and identify factors that are associated with efficiency in the production of dialysis treatments. Data Sources/Study Setting Medicare cost report data from 4,343 free-standing dialysis facilities in the United States that offered in-center hemodialysis in 2010. Study Design A cross-sectional, facility-level retrospective database analysis, utilizing data envelopment analysis (DEA) to estimate facility efficiency. Data Collection/Extraction Methods Treatment data and cost and labor inputs of dialysis treatments were obtained from 2010 Medicare Renal Cost Reports. Demographic data were obtained from the 2010 U.S. Census. Principal Findings Only 26.6 percent of facilities were technically efficient. Neither the intensity of market competition nor the profit status of the facility had a significant effect on efficiency. Facilities that were members of large chains were less likely to be efficient. Cost and labor savings due to changes in drug protocols had little effect on overall dialysis center efficiency. Conclusions The majority of free-standing dialysis facilities in the United States were functioning in a technically inefficient manner. As payment systems increasingly employ capitation and bundling provisions, these institutions will need to evaluate their efficiency to remain competitive. PMID:24237043
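    Each facility's DEA efficiency score is the optimum of a small linear program. A minimal input-oriented, constant-returns-to-scale sketch (assumes `scipy`; the three-unit data are toy numbers, not the Medicare cost report inputs):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit `o`.
    X: (m_inputs, n_units), Y: (s_outputs, n_units). Returns theta in (0, 1];
    theta = 1 means no convex combination of peers produces unit o's outputs
    with proportionally less input."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta over [theta, lambdas]
    A_in = np.c_[-X[:, [o]], X]                 # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y]         # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun

X = np.array([[2.0, 4.0, 3.0]])   # one input (e.g. labor hours) for 3 units
Y = np.array([[2.0, 2.0, 3.0]])   # one output (treatments delivered)
scores = [dea_efficiency(X, Y, o) for o in range(3)]   # ~[1.0, 0.5, 1.0]
```

    Unit 1 needs twice the input of unit 0 for the same output, so its score is 0.5; in a study like the one above, "technically efficient" corresponds to a score of 1.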

  4. The Effects of Diagnostic Definitions in Claims Data on Healthcare Cost Estimates: Evidence from a Large-Scale Panel Data Analysis of Diabetes Care in Japan.

    PubMed

    Fukuda, Haruhisa; Ikeda, Shunya; Shiroiwa, Takeru; Fukuda, Takashi

    2016-10-01

    Inaccurate estimates of diabetes-related healthcare costs can undermine the efficiency of resource allocation for diabetes care. The quantification of these costs using claims data may be affected by the method for defining diagnoses. The aims were to use panel data analysis to estimate diabetes-related healthcare costs and to comparatively evaluate the effects of diagnostic definitions on cost estimates. Monthly panel data analysis of Japanese claims data. The study included a maximum of 141,673 patients with type 2 diabetes who received treatment between 2005 and 2013. Additional healthcare costs associated with diabetes and diabetes-related complications were estimated for various diagnostic definition methods using fixed-effects panel data regression models. The average follow-up period per patient ranged from 49.4 to 52.3 months. The number of patients identified as having type 2 diabetes varied widely among the diagnostic definition methods, ranging from 14,743 patients to 141,673 patients. The fixed-effects models showed that the additional costs per patient per month associated with diabetes ranged from US$180 [95 % confidence interval (CI) 178-181] to US$223 (95 % CI 221-224). When the diagnostic definition excluded rule-out diagnoses, the diabetes-related complications associated with higher additional healthcare costs were ischemic heart disease with surgery (US$13,595; 95 % CI 13,568-13,622), neuropathy/extremity disease with surgery (US$4594; 95 % CI 3979-5208), and diabetic nephropathy with dialysis (US$3689; 95 % CI 3667-3711). Diabetes-related healthcare costs are sensitive to diagnostic definition methods. Determining appropriate diagnostic definitions can further advance healthcare cost research for diabetes and its applications in healthcare policies.
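    A fixed-effects panel regression of this kind removes time-invariant patient heterogeneity by demeaning within each patient before estimating the coefficient of interest. A minimal within-estimator sketch on synthetic data (all numbers hypothetical; `numpy` assumed):

```python
import numpy as np

def within_estimator(y, x, groups):
    """One-regressor fixed-effects (within) estimator: demean y and x
    inside each group, then run OLS on the demeaned data."""
    y_d, x_d = y.astype(float).copy(), x.astype(float).copy()
    for g in np.unique(groups):
        m = groups == g
        y_d[m] -= y_d[m].mean()
        x_d[m] -= x_d[m].mean()
    return (x_d * y_d).sum() / (x_d * x_d).sum()

# Synthetic monthly panel: two patients with different baseline costs,
# diabetes adding a constant 180 (USD/month, hypothetical) when flagged.
groups = np.array([1, 1, 1, 1, 2, 2, 2, 2])
x = np.array([0, 0, 1, 1, 0, 1, 1, 1])        # diabetes indicator
y = np.array([100.0] * 4 + [200.0] * 4) + 180.0 * x
beta = within_estimator(y, x, groups)          # -> 180.0
```

    Because the patient fixed effects (100 and 200) are differenced away, the estimate recovers the additional monthly cost regardless of baseline differences between patients, which is exactly why diagnostic definition changes that alter who enters the panel, rather than the within-patient variation, drive the sensitivity reported above.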

  5. Rubber-based carbon electrode materials derived from dumped tires for efficient sodium-ion storage.

    PubMed

    Wu, Zhen-Yue; Ma, Chao; Bai, Yu-Lin; Liu, Yu-Si; Wang, Shi-Feng; Wei, Xiao; Wang, Kai-Xue; Chen, Jie-Sheng

    2018-04-03

    The development of sustainable and low cost electrode materials for sodium-ion batteries has attracted considerable attention. In this work, a carbon composite material decorated with in situ generated ZnS nanoparticles has been prepared via a simple pyrolysis of the rubber powder from dumped tires. Upon being used as an anode material for sodium-ion batteries, the carbon composite shows a high reversible capacity and rate capability. A capacity as high as 267 mA h g-1 is still retained after 100 cycles at a current density of 50 mA g-1. The well dispersed ZnS nanoparticles in carbon significantly enhance the electrochemical performance. The carbon composites derived from the rubber powder are proposed as promising electrode materials for low-cost, large-scale energy storage devices. This work provides a new and effective method for the reuse of dumped tires, contributing to the recycling of valuable waste resources.

  6. Taking ART to Scale: Determinants of the Cost and Cost-Effectiveness of Antiretroviral Therapy in 45 Clinical Sites in Zambia

    PubMed Central

    Marseille, Elliot; Giganti, Mark J.; Mwango, Albert; Chisembele-Taylor, Angela; Mulenga, Lloyd; Over, Mead; Kahn, James G.; Stringer, Jeffrey S. A.

    2012-01-01

    Background We estimated the unit costs and cost-effectiveness of a government ART program in 45 sites in Zambia supported by the Centre for Infectious Disease Research Zambia (CIDRZ). Methods We estimated per person-year costs at the facility level, and support costs incurred above the facility level, and used multiple regression to estimate variation in these costs. To estimate ART effectiveness, we compared mortality in this Zambian population to that of a cohort of rural Ugandan HIV patients receiving co-trimoxazole (CTX) prophylaxis. We used micro-costing techniques to estimate incremental unit costs, and calculated cost-effectiveness ratios with a computer model which projected results to 10 years. Results The program cost $69.7 million for 125,436 person-years of ART, or $556 per ART-year. Compared to CTX prophylaxis alone, the program averted 33.3 deaths or 244.5 disability-adjusted life-years (DALYs) per 100 person-years of ART. In the base-case analysis, the net cost per DALY averted was $833 compared to CTX alone. More than two-thirds of the variation in average incremental total and on-site cost per patient-year of treatment is explained by eight determinants, including the complexity of the patient case load, the degree of adherence among the patients, and institutional characteristics including experience, scale, scope, setting, and sector. Conclusions and Significance The 45 sites exhibited substantial variation in unit costs and cost-effectiveness and are in the mid-range of cost-effectiveness when compared to other ART programs studied in southern Africa. Early treatment initiation, large scale, and hospital setting are associated with statistically significantly lower costs, while other characteristics (rural location, private sector) are associated with shifting costs from on-site to off-site. This study shows that ART programs can be significantly less costly or more cost-effective when they exploit economies of scale and scope, and initiate patients at higher CD4 counts. PMID:23284843
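The headline unit-cost figure can be checked with simple arithmetic from the reported totals (the published $833 per DALY averted comes from a 10-year projection model, so it is not reproducible from these two numbers alone):

```python
# Back-of-envelope check of the reported cost-effectiveness inputs.
total_cost = 69.7e6          # program cost, USD
person_years = 125_436       # person-years of ART delivered
dalys_per_100py = 244.5      # DALYs averted per 100 person-years vs. CTX alone

cost_per_art_year = total_cost / person_years
dalys_averted = person_years * dalys_per_100py / 100

print(f"cost per ART-year: ${cost_per_art_year:.0f}")   # ~$556, matching the abstract
print(f"total DALYs averted: {dalys_averted:,.0f}")
```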

  7. Taking ART to scale: determinants of the cost and cost-effectiveness of antiretroviral therapy in 45 clinical sites in Zambia.

    PubMed

    Marseille, Elliot; Giganti, Mark J; Mwango, Albert; Chisembele-Taylor, Angela; Mulenga, Lloyd; Over, Mead; Kahn, James G; Stringer, Jeffrey S A

    2012-01-01

    We estimated the unit costs and cost-effectiveness of a government ART program in 45 sites in Zambia supported by the Centre for Infectious Disease Research Zambia (CIDRZ). We estimated per person-year costs at the facility level, and support costs incurred above the facility level, and used multiple regression to estimate variation in these costs. To estimate ART effectiveness, we compared mortality in this Zambian population to that of a cohort of rural Ugandan HIV patients receiving co-trimoxazole (CTX) prophylaxis. We used micro-costing techniques to estimate incremental unit costs, and calculated cost-effectiveness ratios with a computer model which projected results to 10 years. The program cost $69.7 million for 125,436 person-years of ART, or $556 per ART-year. Compared to CTX prophylaxis alone, the program averted 33.3 deaths or 244.5 disability-adjusted life-years (DALYs) per 100 person-years of ART. In the base-case analysis, the net cost per DALY averted was $833 compared to CTX alone. More than two-thirds of the variation in average incremental total and on-site cost per patient-year of treatment is explained by eight determinants, including the complexity of the patient case load, the degree of adherence among the patients, and institutional characteristics including experience, scale, scope, setting, and sector. The 45 sites exhibited substantial variation in unit costs and cost-effectiveness and are in the mid-range of cost-effectiveness when compared to other ART programs studied in southern Africa. Early treatment initiation, large scale, and hospital setting are associated with statistically significantly lower costs, while other characteristics (rural location, private sector) are associated with shifting costs from on-site to off-site. This study shows that ART programs can be significantly less costly or more cost-effective when they exploit economies of scale and scope, and initiate patients at higher CD4 counts.

  8. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate the emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, processing each postal code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
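The region-partitioning idea behind the SRA can be illustrated with a minimal toy epidemic. This sketch processes each subregion independently and omits the overlapping ("sliding") boundary handling a real SRA would need to capture cross-region contacts; all agents and parameters are invented for illustration:

```python
import random

def step_region(agents, infect_prob, rng):
    """One transmission step among the agents of a single subregion."""
    n_infected = sum(a["infected"] for a in agents)
    for a in agents:
        if not a["infected"] and rng.random() < infect_prob * n_infected / len(agents):
            a["infected"] = True

def simulate_by_region(population, regions, steps=10, infect_prob=0.3, seed=0):
    """Run the epidemic by processing each subregion independently; since the
    subregions do not interact within a step, they could be dispatched to
    parallel workers (the parallelizable part of an SRA)."""
    rng = random.Random(seed)
    for _ in range(steps):
        for region in regions:
            step_region([population[i] for i in region], infect_prob, rng)
    return sum(a["infected"] for a in population)

# Toy population of 100 agents split into two regions of 50; only the
# first region starts with an infected agent.
pop = [{"infected": i == 0} for i in range(100)]
regions = [list(range(50)), list(range(50, 100))]
print(simulate_by_region(pop, regions))
```

Because this toy has no cross-region mixing, the second region never gets infected; the "sliding" in the real algorithm exists precisely to overlap adjacent regions so boundary contacts are not lost.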

  9. Precise Perforation and Scalable Production of Si Particles from Low-Grade Sources for High-Performance Lithium Ion Battery Anodes.

    PubMed

    Zong, Linqi; Jin, Yan; Liu, Chang; Zhu, Bin; Hu, Xiaozhen; Lu, Zhenda; Zhu, Jia

    2016-11-09

    Alloy anodes, particularly silicon, have been intensively pursued as one of the most promising anode materials for the next-generation lithium-ion battery, primarily because of their high specific capacity (>4000 mAh/g) and elemental abundance. In the past decade, various nanostructures with porosity or void space designs have been demonstrated to be effective in accommodating the large volume expansion (∼300%) and providing a stable solid electrolyte interphase (SEI) during electrochemical cycling. However, how to produce these building blocks with precise morphology control at large scale and low cost remains a challenge. In addition, most nanostructured silicon suffers from poor Coulombic efficiency due to a large surface area and Li-ion trapping at the surface coating. Here we demonstrate a unique nanoperforation process, combining modified ball milling, annealing, and acid treating, to produce porous Si with precise and continuous porosity control (from 17% to 70%) directly from a low-cost metallurgical silicon source (99% purity, ∼$1/kg). The produced porous Si, coated with graphene by simple ball milling, can deliver a reversible specific capacity of 1250 mAh/g over 1000 cycles at a rate of 1C, with a first-cycle Coulombic efficiency over 89.5%. The porous networks also provide efficient ion and electron pathways and therefore enable excellent rate performance of 880 mAh/g at a rate of 5C. Because particles with precisely controlled porosity can be produced from low-grade materials through scalable processes, this nanoperforation is expected to play a role in next-generation lithium-ion battery anodes, as well as in many other potential applications such as optoelectronics and thermoelectrics.

  10. Hollow structured carbon-supported nickel cobaltite nanoparticles as an efficient bifunctional electrocatalyst for the oxygen reduction and evolution reaction

    DOE PAGES

    Wang, Jie; Han, Lili; Lin, Ruoqian; ...

    2016-01-05

    Here, the exploration of efficient electrocatalysts for both the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) is essential for fuel cells and metal-air batteries. In this study, we developed 3D hollow-structured NiCo2O4/C nanoparticles with interconnected pores as bifunctional electrocatalysts, which are transformed from solid NiCo2 alloy nanoparticles through the Kirkendall effect. The unique hollow structure of NiCo2O4 nanoparticles increases the number of active sites and improves contact with the electrolyte to result in excellent ORR and OER performances. In addition, the hollow-structured NiCo2O4/C nanoparticles exhibit superior long-term stability for both the ORR and OER compared to commercial Pt/C. The template- and surfactant-free synthetic strategy could be used for the low-cost and large-scale synthesis of hollow-structured materials, which would facilitate the screening of high-efficiency catalysts for energy conversion.

  11. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.

    PubMed

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
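A toy version of the GA item-selection strategy can be sketched as maximizing Cronbach's alpha over k-item subsets of synthetic responses. The fitness function, operators, and data below are simplified assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def cronbach_alpha(data):
    """Cronbach's alpha of an (n_persons, n_items) response matrix."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def ga_short_scale(data, k, pop=30, gens=40, seed=0):
    """Select k items maximizing alpha with an elitist toy genetic algorithm."""
    rng = np.random.default_rng(seed)
    n_items = data.shape[1]
    population = [rng.choice(n_items, size=k, replace=False) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda ind: -cronbach_alpha(data[:, ind]))
        elite = scored[: pop // 2]
        children = []
        for parent in elite:
            child = parent.copy()
            # Mutation: swap one selected item for a random unselected one.
            out = rng.integers(k)
            child[out] = rng.choice(np.setdiff1d(np.arange(n_items), child))
            children.append(child)
        population = elite + children
    best = max(population, key=lambda ind: cronbach_alpha(data[:, ind]))
    return np.sort(best), cronbach_alpha(data[:, best])

# Synthetic responses: items 0-4 load on one factor, items 5-9 are pure noise.
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
data = np.hstack([factor + 0.3 * rng.normal(size=(300, 5)),
                  rng.normal(size=(300, 5))])
items, alpha = ga_short_scale(data, k=5)
print(items, round(alpha, 2))
```

With an unspecific fitness like alpha alone, the GA tends to pick the five factor-loaded items; the paper's point is that a tailored optimization function (as in their ACO) can additionally balance dimensionality, sensitivity, and validity.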

  12. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm

    PubMed Central

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function. PMID:27893845

  13. A Parallel Multiclassification Algorithm for Big Data Using an Extreme Learning Machine.

    PubMed

    Duan, Mingxing; Li, Kenli; Liao, Xiangke; Li, Keqin

    2018-06-01

    As data sets become larger and more complicated, an extreme learning machine (ELM) that runs in a traditional serial environment cannot realize its potential to be fast and effective. Although a parallel ELM (PELM) based on MapReduce processes large-scale data with a more efficient learning speed than an identical ELM algorithm in a serial environment, some operations, such as storing intermediate results on disk and keeping multiple copies for each task, are indispensable; these operations create a large amount of extra overhead and degrade the learning speed and efficiency of PELMs. In this paper, an efficient ELM based on the Spark framework (SELM), which includes three parallel subalgorithms, is proposed for big data classification. By partitioning the corresponding data sets reasonably, the parallel subalgorithms, including the hidden layer output matrix calculation and matrix decomposition algorithms, perform most of the computations locally. At the same time, they retain the intermediate results in distributed memory and cache the diagonal matrix as a broadcast variable instead of keeping several copies for each task, reducing a large amount of the cost and strengthening the learning ability of the SELM. Finally, we implement our SELM algorithm to classify large data sets. Extensive experiments have been conducted to validate the effectiveness of the proposed algorithms. As shown, our SELM achieves progressively larger speedups on clusters of 10, 15, 20, 25, 30, and 35 nodes.
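The core ELM computation that the paper distributes — forming the hidden layer output matrix H and solving the output weights via the Moore-Penrose pseudoinverse — can be sketched serially in a few lines. This is a minimal single-machine sketch with invented toy data, not the Spark implementation:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: the hidden layer is random and fixed;
    only the output weights are learned, by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden layer output matrix
        # Output weights = least-squares solution via the pseudoinverse of H.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy binary classification: the label is 1 when x0 + x1 > 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
model = ELM(n_hidden=40).fit(X[:300], y[:300])
acc = ((model.predict(X[300:]) > 0.5) == y[300:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Computing H row-blocks locally and combining them is exactly the step that partitions cleanly across Spark workers, since each data partition contributes independent rows of H.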

  14. Advances in polycrystalline thin-film photovoltaics for space applications

    NASA Technical Reports Server (NTRS)

    Lanning, Bruce R.; Armstrong, Joseph H.; Misra, Mohan S.

    1994-01-01

    Polycrystalline, thin-film photovoltaics represent one of the few (if not the only) renewable power sources with the potential to satisfy the demanding technical requirements for future space applications. The demand in space is for deployable, flexible arrays with high power-to-weight ratios and long-term stability (15-20 years). In addition, there is also the demand that these arrays be produced by scalable, low-cost, high-yield processes. An approach to significantly reduce costs and increase reliability is to interconnect individual cells in series via monolithic integration. Both CIS and CdTe semiconductor films are optimum absorber materials for thin-film n-p heterojunction solar cells, having band gaps between 0.9-1.5 eV and demonstrated small-area efficiencies, with cadmium sulfide window layers, above 16.5 percent. Both CIS and CdTe polycrystalline thin-film cells have been produced on a laboratory scale by a variety of physical and chemical deposition methods, including evaporation, sputtering, and electrodeposition. Translating the laboratory processes that yield these high-efficiency, small-area cells into the design of a manufacturing process capable of producing 1-sq-ft modules, however, requires a quantitative understanding of each individual step in the process and its effect on overall module performance. With a proper quantification and understanding of material transport and reactivity for each individual step, a manufacturing process can be designed that is not 'reactor-specific' and can be controlled intelligently through the design parameters of the process. The objective of this paper is to present an overview of the current efforts at MMC to develop large-scale manufacturing processes for both CIS and CdTe thin-film polycrystalline modules. 
CIS cells/modules are fabricated in a 'substrate configuration' by physical vapor deposition techniques and CdTe cells/modules are fabricated in a 'superstrate configuration' by wet chemical methods. Both laser and mechanical scribing operations are used to monolithically integrate (series interconnect) the individual cells into modules. Results will be presented at the cell and module development levels with a brief description of the test methods used to qualify these devices for space applications. The approach and development efforts are directed towards large-scale manufacturability of established thin-film, polycrystalline processing methods for large area modules with less emphasis on maximizing small area efficiencies.

  15. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    PubMed

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the large data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing Amazon Web Services (AWS). Our system won first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance (GCTA) conference committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluated the performance of GT-WGS with a 55× WGS dataset (400 GB fastq) provided by the GCTA 2017 competition. In the best case, it took only 18.4 min to finish the analysis, and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated GT-WGS on a real-world dataset provided by the XiangYa hospital, consisting of 5× whole-genome data for 500 samples; on average, GT-WGS finished one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by time cost and computing cost. GT-WGS excels as an efficient and affordable WGS analysis tool that addresses this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo.

  16. Development of a Field Demonstration for Cost-Effective Low-Grade Heat Recovery and Use Technology Designed to Improve Efficiency and Reduce Water Usage Rates for a Coal-Fired Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Russell; Dombrowski, K.; Bernau, M.

    Coal-based power generation systems provide reliable, low-cost power to the domestic energy sector. These systems consume large amounts of fuel and water to produce electricity and are the target of pending regulations that may require reductions in water use and improvements in thermal efficiency. While efficiency of coal-based generation has improved over time, coal power plants often do not utilize the low-grade heat contained in the flue gas and require large volumes of water for the steam cycle make-up, environmental controls, and for process cooling and heating. Low-grade heat recovery is particularly challenging for coal-fired applications, due in large part to the condensation of acid as the flue gas cools and the resulting potential corrosion of the heat recovery materials. Such systems have also not been of significant interest as recent investments on coal power plants have primarily been for environmental controls due to more stringent regulations. Also, in many regions, fuel cost is still a pass-through to the consumer, reducing the motivation for efficiency improvements. Therefore, a commercial system combining low-grade heat-recovery technologies and associated end uses to cost effectively improve efficiency and/or reduce water consumption has not yet been widely applied. However, pressures from potential new regulations and from water shortages may drive new interest, particularly in the U.S. In an effort to address this issue, the U.S. Department of Energy (DOE) has sought to identify and promote technologies to achieve this goal.

  17. Final Report: The Influence of Novel Behavioral Strategies in Promoting the Diffusion of Solar Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillingham, Kenneth; Bollinger, Bryan

    This is the final report for a systematic, evidence-based project that used an unprecedented series of large-scale field experiments to examine the effectiveness and cost-effectiveness of novel approaches to reducing the soft costs of residential solar photovoltaics. The approaches were based on grassroots marketing campaigns, called 'Solarize' campaigns, designed to lower costs and increase adoption of solar technology. This study quantified the effectiveness and cost-effectiveness of the Solarize programs and tested new approaches to further improve the model.

  18. High-performance flat-panel solar thermoelectric generators with high thermal concentration

    NASA Astrophysics Data System (ADS)

    Kraemer, Daniel; Poudel, Bed; Feng, Hsien-Ping; Caylor, J. Christopher; Yu, Bo; Yan, Xiao; Ma, Yi; Wang, Xiaowei; Wang, Dezhi; Muto, Andrew; McEnaney, Kenneth; Chiesa, Matteo; Ren, Zhifeng; Chen, Gang

    2011-07-01

    The conversion of sunlight into electricity has been dominated by photovoltaic and solar thermal power generation. Photovoltaic cells are deployed widely, mostly as flat panels, whereas solar thermal electricity generation relying on optical concentrators and mechanical heat engines is only seen in large-scale power plants. Here we demonstrate a promising flat-panel solar thermal to electric power conversion technology based on the Seebeck effect and high thermal concentration, thus enabling wider applications. The developed solar thermoelectric generators (STEGs) achieved a peak efficiency of 4.6% under AM1.5G (1 kW m-2) conditions. The efficiency is 7-8 times higher than the previously reported best value for a flat-panel STEG, and is enabled by the use of high-performance nanostructured thermoelectric materials and spectrally-selective solar absorbers in an innovative design that exploits high thermal concentration in an evacuated environment. Our work opens up a promising new approach which has the potential to achieve cost-effective conversion of solar energy into electricity.
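The efficiency of such a generator is bounded by the standard thermoelectric formula: the Carnot factor multiplied by a material factor set by the dimensionless figure of merit ZT. The temperatures and ZT below are illustrative assumptions, not the paper's operating point:

```python
from math import sqrt

def te_max_efficiency(t_hot, t_cold, zt):
    """Ideal thermoelectric generator efficiency: Carnot factor times a
    material factor determined by the figure of merit ZT (taken at the
    mean operating temperature)."""
    carnot = 1 - t_cold / t_hot
    m = sqrt(1 + zt)
    return carnot * (m - 1) / (m + t_cold / t_hot)

# Illustrative values (assumed): hot side 500 K from thermal concentration,
# cold side 300 K, nanostructured material with ZT = 1.
print(f"{te_max_efficiency(500, 300, 1.0):.1%}")  # about 8%
```

The formula makes the design logic visible: thermal concentration raises the hot-side temperature (the Carnot factor), while better nanostructured materials raise ZT (the material factor); the reported 4.6% sits below this ideal bound, as a real device must.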

  19. Using Unplanned Fires to Help Suppressing Future Large Fires in Mediterranean Forests

    PubMed Central

    Regos, Adrián; Aquilué, Núria; Retana, Javier; De Cáceres, Miquel; Brotons, Lluís

    2014-01-01

    Despite the huge resources invested in fire suppression, the impact of wildfires has considerably increased across the Mediterranean region since the second half of the 20th century. Modulating fire suppression efforts in mild weather conditions is an appealing but hotly-debated strategy to use unplanned fires and associated fuel reduction to create opportunities for suppression of large fires in future adverse weather conditions. Using a spatially-explicit fire–succession model developed for Catalonia (Spain), we assessed this opportunistic policy by using two fire suppression strategies that reproduce how firefighters in extreme weather conditions exploit previous fire scars as firefighting opportunities. We designed scenarios by combining different levels of fire suppression efficiency and climatic severity for a 50-year period (2000–2050). An opportunistic fire suppression policy induced large-scale changes in fire regimes and decreased the area burnt under extreme climate conditions, but only accounted for up to 18–22% of the area to be burnt in reference scenarios. The area suppressed in adverse years tended to increase in scenarios with increasing amounts of area burnt during years dominated by mild weather. Climate change had counterintuitive effects on opportunistic fire suppression strategies. Climate warming increased the incidence of large fires under uncontrolled conditions but also indirectly increased opportunities for enhanced fire suppression. Therefore, to shift fire suppression opportunities from adverse to mild years, we would require a disproportionately large amount of area burnt in mild years. We conclude that the strategic planning of fire suppression resources has the potential to become an important cost-effective fuel-reduction strategy at large spatial scale. 
We do however suggest that this strategy should probably be accompanied by other fuel-reduction treatments applied at broad scales if large-scale changes in fire regimes are to be achieved, especially in the wider context of climate change. PMID:24727853

  20. Using unplanned fires to help suppressing future large fires in Mediterranean forests.

    PubMed

    Regos, Adrián; Aquilué, Núria; Retana, Javier; De Cáceres, Miquel; Brotons, Lluís

    2014-01-01

    Despite the huge resources invested in fire suppression, the impact of wildfires has considerably increased across the Mediterranean region since the second half of the 20th century. Modulating fire suppression efforts in mild weather conditions is an appealing but hotly-debated strategy to use unplanned fires and associated fuel reduction to create opportunities for suppression of large fires in future adverse weather conditions. Using a spatially-explicit fire-succession model developed for Catalonia (Spain), we assessed this opportunistic policy by using two fire suppression strategies that reproduce how firefighters in extreme weather conditions exploit previous fire scars as firefighting opportunities. We designed scenarios by combining different levels of fire suppression efficiency and climatic severity for a 50-year period (2000-2050). An opportunistic fire suppression policy induced large-scale changes in fire regimes and decreased the area burnt under extreme climate conditions, but only accounted for up to 18-22% of the area to be burnt in reference scenarios. The area suppressed in adverse years tended to increase in scenarios with increasing amounts of area burnt during years dominated by mild weather. Climate change had counterintuitive effects on opportunistic fire suppression strategies. Climate warming increased the incidence of large fires under uncontrolled conditions but also indirectly increased opportunities for enhanced fire suppression. Therefore, to shift fire suppression opportunities from adverse to mild years, we would require a disproportionately large amount of area burnt in mild years. We conclude that the strategic planning of fire suppression resources has the potential to become an important cost-effective fuel-reduction strategy at large spatial scale. 
We do however suggest that this strategy should probably be accompanied by other fuel-reduction treatments applied at broad scales if large-scale changes in fire regimes are to be achieved, especially in the wider context of climate change.

  1. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  2. The strategic use of forward contracts: Applications in power markets

    NASA Astrophysics Data System (ADS)

    Lien, Jeffrey Scott

    This dissertation develops three theoretical models that analyze forward trading by firms with market power. The models are discussed in the context of recently restructured power markets, but the results can be applied more generally. The first model considers the profitability of large firms in markets with limited economies of scale and free entry. When large firms apply their market power, small firms benefit from the high prices without incurring the costs of restricted output. When entry is considered, and profit opportunity is determined by the cost of entry, this asymmetry creates the "curse of market power;" the long-run profits of large firms are reduced because of their market power. I suggest ways that large power producers can cope with the curse of market power, including the sale of long-term forward contracts. Past research has shown that forward contracts can demonstrate commitment to aggressive behavior to a competing duopolist. I add explicitly modeled entry to this literature, and make the potential entrants the audience of the forward sale. The existence of a forward market decreases equilibrium entry, increases the profits of large firms, and enhances economic efficiency. In the second model, a consumer representative, such as a state government or regulated distribution utility, bargains in the forward market on behalf of end-consumers who cannot organize together in the spot market. The ability to organize in forward markets allows consumers to encourage economic efficiency. When multiple producers are considered, I find that the ability to offer contracts also increases consumer surplus by decreasing the producers' profits. In some specifications of the model, consumers are able to capture the full gains from trade. The third model of this dissertation considers the ability of a large producer to take advantage of anonymity by randomly alternating between forward sales and forward purchases. 
The large producer uses its market power to always obtain favorable settlement on its forward transactions. Since other participants in the market cannot anticipate the large producer's eventual spot market behavior, they cannot effectively arbitrage between markets. I find that forward transaction anonymity leads to spot price destabilization and cost inefficiency.

  3. Scalable graphene production from ethanol decomposition by microwave argon plasma torch

    NASA Astrophysics Data System (ADS)

    Melero, C.; Rincón, R.; Muñoz, J.; Zhang, G.; Sun, S.; Perez, A.; Royuela, O.; González-Gago, C.; Calzada, M. D.

    2018-01-01

    A fast, efficient, and simple method is presented for the large-scale production of high-quality graphene using an atmospheric-pressure plasma-based technique. This technique makes it possible to obtain high-quality graphene powder in just one step, without the use of metal catalysts or a specific substrate during the process. Moreover, the cost of graphene production is significantly reduced, since the ethanol used as the carbon source can be obtained by fermentation from agricultural industries. The process provides an additional benefit by contributing to the revalorization of waste into a high-value-added product like graphene. Thus, this work demonstrates the features of plasma technology as a low-cost, efficient, clean, and environmentally friendly route for the production of high-quality graphene.

  4. Development of the Mathematics of Learning Curve Models for Evaluating Small Modular Reactor Economics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, T. J.

    2014-02-01

The cost of nuclear power is a straightforward yet complicated topic. It is straightforward in that the cost of nuclear power is a function of the cost to build the nuclear power plant, the cost to operate and maintain it, and the cost to provide fuel for it. It is complicated in that some of those costs are not necessarily known, introducing uncertainty into the analysis. For large light water reactor (LWR)-based nuclear power plants, the uncertainty is mainly contained within the cost of construction. The typical costs of operations and maintenance (O&M), as well as fuel, are well known based on the current fleet of LWRs. However, the last currently operating reactor to come online was Watts Bar 1 in May 1996; thus, the expected construction costs for gigawatt (GW)-class reactors in the United States are based on information nearly two decades old. Extrapolating construction, O&M, and fuel costs from GW-class LWRs to LWR-based small modular reactors (SMRs) introduces even more complication. The per-installed-kilowatt construction costs for SMRs are likely to be higher than those for the GW-class reactors based on the property of the economy of scale. Generally speaking, the economy of scale is the tendency for overall costs to increase more slowly than the overall production capacity. For power plants, this means that doubling the power production capacity would be expected to cost less than twice as much. Applying this property in the opposite direction, halving the power production capacity would be expected to cost more than half as much. This can potentially make the SMRs less competitive in the electricity market against the GW-class reactors, as well as against other power sources such as natural gas and subsidized renewables. One factor that can potentially aid the SMRs in achieving economic competitiveness is an economy of numbers, as opposed to the economy of scale, associated with learning curves.
The basic concept of the learning curve is that the more a new process is repeated, the more efficient the process can be made. Assuming that efficiency directly relates to cost means that the more a new process is repeated successfully and efficiently, the less costly the process can be made. This factor ties directly into the factory fabrication and modularization aspect of the SMR paradigm: manufacturing serial, standardized, identical components for use in nuclear power plants can allow the SMR industry to use learning curves to predict and optimize deployment costs.
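A common mathematical form behind such learning-curve arguments is Wright's model, in which each doubling of cumulative output multiplies unit cost by a fixed learning rate. The 90% rate and $100 first-unit cost below are illustrative, not figures from the report.

```python
import math

def unit_cost(first_unit_cost, n, learning_rate=0.9):
    """Wright learning curve: the cost of the n-th unit falls to
    `learning_rate` of its previous value each time cumulative output
    doubles. Both parameters are illustrative assumptions."""
    b = math.log(learning_rate) / math.log(2)  # negative exponent
    return first_unit_cost * n ** b
```

For a 90% learning rate, the second unit costs 90% of the first and the fourth costs 81%, which is the "economy of numbers" that factory-built SMR modules would rely on to offset the lost economy of scale.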

  5. Allogeneic cell therapy bioprocess economics and optimization: downstream processing decisions.

    PubMed

    Hassan, Sally; Simaria, Ana S; Varadaraju, Hemanthram; Gupta, Siddharth; Warren, Kim; Farid, Suzanne S

    2015-01-01

Aim: To develop a decisional tool to identify the most cost-effective process flowsheets for allogeneic cell therapies across a range of production scales. Methods: A bioprocess economics and optimization tool was built to assess competing cell expansion and downstream processing (DSP) technologies. Results: Tangential flow filtration was generally more cost-effective for the lower cells/lot achieved in planar technologies, and fluidized bed centrifugation became the only feasible option for handling large bioreactor outputs. DSP bottlenecks were observed at large commercial lot sizes requiring multiple large bioreactors. The DSP contribution to the cost of goods/dose ranged between 20-55% and 50-80% for planar and bioreactor flowsheets, respectively. Conclusion: This analysis can facilitate early decision-making during process development.

  6. Bioinspired Ultralight Inorganic Aerogel for Highly Efficient Air Filtration and Oil-Water Separation.

    PubMed

    Zhang, Yong-Gang; Zhu, Ying-Jie; Xiong, Zhi-Chao; Wu, Jin; Chen, Feng

    2018-04-18

Inorganic aerogels have been attracting great interest owing to their distinctive structures and properties. However, the practical applications of inorganic aerogels are greatly restricted by their high brittleness and high fabrication cost. Herein, inspired by cancellous bone, we have developed a novel kind of hydroxyapatite (HAP) nanowire-based inorganic aerogel with excellent elasticity, which is highly porous (porosity ≈ 99.7%), ultralight (density 8.54 mg/cm³, about 0.854% of the density of water), and highly adiabatic (thermal conductivity 0.0387 W/m·K). Significantly, the as-prepared HAP nanowire aerogel can be used as a highly efficient air filter with high PM2.5 filtration efficiency. In addition, the HAP nanowire aerogel is an ideal candidate for continuous oil-water separation and can act as a smart switch to separate oil from water continuously. Compared with organic aerogels, the as-prepared HAP nanowire aerogel is biocompatible, environmentally friendly, and low-cost. Moreover, the synthetic method reported in this work can be scaled up for large-scale production of HAP nanowires, free from the use of organic solvents. Therefore, the as-prepared new kind of HAP nanowire aerogel is promising for applications in various fields.
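The quoted porosity can be cross-checked from the bulk density via porosity ≈ 1 − ρ_bulk/ρ_skeletal. The skeletal density of hydroxyapatite (~3.16 g/cm³) used below is a handbook value assumed for the check, not stated in the abstract.

```python
def porosity(bulk_density_mg_cm3, skeletal_density_mg_cm3=3160.0):
    """Porosity from bulk vs. skeletal density. The default skeletal
    density is the handbook value for hydroxyapatite (an assumption,
    not a figure from the paper)."""
    return 1.0 - bulk_density_mg_cm3 / skeletal_density_mg_cm3
```

Plugging in the reported 8.54 mg/cm³ bulk density reproduces the ≈99.7% porosity quoted in the abstract, so the two figures are mutually consistent.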

  7. Life cycle energy use, costs, and greenhouse gas emission of broiler farms in different production systems in Iran-a case study of Alborz province.

    PubMed

    Pishgar-Komleh, Seyyed Hassan; Akram, Asadollah; Keyhani, Alireza; van Zelm, Rosalie

    2017-07-01

In order to achieve sustainable development in agriculture, it is necessary to quantify and compare the energy, economic, and environmental aspects of products. This paper studied the energy, economic, and greenhouse gas (GHG) emission patterns in broiler chicken farms in the Alborz province of Iran. We studied the effect of the broiler farm size as different production systems on the energy, economic, and environmental indices. Energy use efficiency (EUE) and benefit-cost ratio (BCR) were 0.16 and 1.11, respectively. Diesel fuel and feed contributed the most in total energy inputs, while feed and chicks were the most important inputs in economic analysis. GHG emission calculations showed that production of 1000 birds produces 19.13 t CO₂-eq and feed had the highest share in total GHG emission. Total GHG emissions based on different functional units were 8.5 t CO₂-eq per t of carcass and 6.83 kg CO₂-eq per kg live weight. Results of farm size effect on EUE revealed that large farms had better energy management. For BCR, there was no significant difference between farms. Lower total GHG emissions were reported for large farms, caused by better management of inputs and fewer bird losses. Large farms with more investment had more efficient equipment, resulting in a decrease of the input consumption. In view of our study, it is recommended to support the small-scale broiler industry by providing subsidies to promote the use of high-efficiency equipment. To decrease the amount of energy usage and GHG emissions, replacing heaters (which use diesel fuel) with natural gas heaters can be considered. In addition to the above recommendations, the use of energy saving light bulbs may reduce broiler farm electricity consumption.
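The different functional units in the abstract are rescalings of the same inventory: 19.13 t CO₂-eq per 1000 birds is 19.13 kg per bird, and dividing by the per-kg-live-weight figure recovers the implied average live weight per bird. A minimal sketch, using only values copied from the abstract:

```python
def kg_co2_per_bird(t_co2_per_1000_birds):
    """Flock-level total (t CO2-eq per 1000 birds) to kg CO2-eq per bird:
    multiply by 1000 (t -> kg), divide by 1000 (birds)."""
    return t_co2_per_1000_birds * 1000.0 / 1000.0

def implied_live_weight_kg(kg_co2_bird, kg_co2_per_kg_live):
    """Average live weight per bird implied by two functional units."""
    return kg_co2_bird / kg_co2_per_kg_live
```

With the abstract's 19.13 t per 1000 birds and 6.83 kg CO₂-eq per kg live weight, the implied average bird weighs about 2.8 kg at slaughter, a plausible broiler weight, so the reported functional units are internally consistent.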

  8. Removal of APIs and bacteria from hospital wastewater by MBR plus O₃, O₃ + H₂O₂, PAC or ClO₂.

    PubMed

    Nielsen, U; Hastrup, C; Klausen, M M; Pedersen, B M; Kristensen, G H; Jansen, J L C; Bak, S N; Tuerk, J

    2013-01-01

The objective of this study has been to develop technologies that can reduce the content of active pharmaceutical ingredients (APIs) and bacteria in hospital wastewater. The results from the laboratory- and pilot-scale testing showed that efficient removal of the vast majority of APIs could be achieved by a membrane bioreactor (MBR) followed by ozone, ozone + hydrogen peroxide or powdered activated carbon (PAC). Chlorine dioxide (ClO₂) was significantly less effective. MBR + PAC (450 mg/l) was the most efficient technology, while the most cost-efficient technology was MBR + ozone (156 mg O₃/l applied over 20 min). With MBR, efficient removal of Escherichia coli and enterococci was measured, and no antibiotic-resistant bacteria were detected in the effluent. With MBR + ozone and MBR + PAC, the measured effluent concentrations of APIs (e.g. ciprofloxacin, sulfamethoxazole and sulfamethizole) were also below available predicted no-effect concentrations (PNEC) for the marine environment without dilution. Iodinated contrast media were also reduced significantly (80-99% for iohexol, iopromide and ioversol, and 40-99% for amidotrizoate acid). A full-scale MBR treatment plant with ozone at a hospital with 900 beds is estimated to require an investment cost of €1.6 million and an operating cost of €1/m³ of treated water.

  9. What Determines HIV Prevention Costs at Scale? Evidence from the Avahan Programme in India

    PubMed Central

    Chandrashekar, Sudhashree; Shetty, Govindraj; Vickerman, Peter; Bradley, Janet; Alary, Michel; Moses, Stephen; Vassall, Anna

    2016-01-01

Expanding essential health services through non-government organisations (NGOs) is a central strategy for achieving universal health coverage in many low-income and middle-income countries. Human immunodeficiency virus (HIV) prevention services for key populations are commonly delivered through NGOs and have been demonstrated to be cost-effective and of substantial global public health importance. However, funding for HIV prevention remains scarce, and there are growing calls internationally to improve the efficiency of HIV prevention programmes as a key strategy to reach global HIV targets. To date, there is limited evidence on the determinants of costs of HIV prevention delivered through NGOs; and thus, policymakers have little guidance in how best to design programmes that are both effective and efficient. We collected economic costs from the Indian Avahan initiative, the largest HIV prevention project conducted globally, during the first 4 years of its implementation. We use a fixed-effect panel estimator and a random-intercept model to investigate the determinants of average cost. We find that programme design choices such as NGO scale, the extent of community involvement, the way in which support is offered to NGOs and how clinical services are organised substantially impact average cost in a grant-based payment setting. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd. PMID:26763652

  10. Early Years Centres for Pre-School Children with Primary Language Difficulties: What Do They Cost, and are They Cost-Effective?

    ERIC Educational Resources Information Center

    Law, J.; Dockrell, J. E.; Castelnuovo, E.; Williams, K.; Seeff, B.; Normand, C.

    2006-01-01

    Background: High levels of early language difficulties raise practical issues about the efficient and effective means of meeting children's needs. Persistent language difficulties place significant financial pressures on health and education services. This has led to large investment in intervention in the early years; yet, little is known about…

  11. Evaluation of target efficiencies for solid-liquid separation steps in biofuels production.

    PubMed

    Kochergin, Vadim; Miller, Keith

    2011-01-01

    Development of liquid biofuels has entered a new phase of large scale pilot demonstration. A number of plants that are in operation or under construction face the task of addressing the engineering challenges of creating a viable plant design, scaling up and optimizing various unit operations. It is well-known that separation technologies account for 50-70% of both capital and operating cost. Additionally, reduction of environmental impact creates technological challenges that increase project cost without adding to the bottom line. Different technologies vary in terms of selection of unit operations; however, solid-liquid separations are likely to be a major contributor to the overall project cost. Despite the differences in pretreatment approaches, similar challenges arise for solid-liquid separation unit operations. A typical process for ethanol production from biomass includes several solid-liquid separation steps, depending on which particular stream is targeted for downstream processing. The nature of biomass-derived materials makes it either difficult or uneconomical to accomplish complete separation in a single step. Therefore, setting realistic efficiency targets for solid-liquid separations is an important task that influences overall process recovery and economics. Experimental data will be presented showing typical characteristics for pretreated cane bagasse at various stages of processing into cellulosic ethanol. Results of generic material balance calculations will be presented to illustrate the influence of separation target efficiencies on overall process recoveries and characteristics of waste streams.
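The influence of per-step targets on overall recovery compounds multiplicatively, which is why even modest individual separation efficiencies erode overall process recovery across several solid-liquid separation steps. A sketch with illustrative step efficiencies (not plant data):

```python
from functools import reduce
from operator import mul

def overall_recovery(stage_recoveries):
    """Overall product recovery through sequential solid-liquid
    separation steps: the product of the per-stage recoveries.
    Stage values below are illustrative, not measured data."""
    return reduce(mul, stage_recoveries, 1.0)
```

Three stages at 95% each already drop overall recovery below 86%, and at 90% each below 73%, which is why realistic target efficiencies matter so much to the material balance and waste-stream characteristics discussed above.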

  12. Growth and development of Arabidopsis thaliana under single-wavelength red and blue laser light.

    PubMed

    Ooi, Amanda; Wong, Aloysius; Ng, Tien Khee; Marondedze, Claudius; Gehring, Christoph; Ooi, Boon S

    2016-09-23

    Indoor horticulture offers a sensible solution for sustainable food production and is becoming increasingly widespread. However, it incurs high energy and cost due to the use of artificial lighting such as high-pressure sodium lamps, fluorescent light or increasingly, the light-emitting diodes (LEDs). The energy efficiency and light quality of currently available horticultural lighting is suboptimal, and therefore less than ideal for sustainable and cost-effective large-scale plant production. Here, we demonstrate the use of high-powered single-wavelength lasers for indoor horticulture. They are highly energy-efficient and can be remotely guided to the site of plant growth, thus reducing on-site heat accumulation. Furthermore, laser beams can be tailored to match the absorption profiles of different plant species. We have developed a prototype laser growth chamber and demonstrate that plants grown under laser illumination can complete a full growth cycle from seed to seed with phenotypes resembling those of plants grown under LEDs reported previously. Importantly, the plants have lower expression of proteins diagnostic for light and radiation stress. The phenotypical, biochemical and proteome data show that the single-wavelength laser light is suitable for plant growth and therefore, potentially able to unlock the advantages of this next generation lighting technology for highly energy-efficient horticulture.

  14. Coordination chemistry in magnesium battery electrolytes: how ligands affect their performance.

    PubMed

    Shao, Yuyan; Liu, Tianbiao; Li, Guosheng; Gu, Meng; Nie, Zimin; Engelhard, Mark; Xiao, Jie; Lv, Dongping; Wang, Chongmin; Zhang, Ji-Guang; Liu, Jun

    2013-11-04

The magnesium battery is potentially a safe, cost-effective, and high-energy-density technology for large-scale energy storage. However, its development has been hindered by limited performance and a lack of fundamental understanding of electrolytes. Here, we present a study of the coordination chemistry of Mg(BH₄)₂ in ethereal solvents. The O-donor denticity, i.e. the ligand strength of the ethereal solvents, which act as ligands to form solvated Mg complexes, plays a significant role in enhancing the coulombic efficiency of the corresponding solvated Mg complex electrolytes. A new electrolyte is developed based on Mg(BH₄)₂, diglyme and LiBH₄. Preliminary electrochemical tests show that the new electrolyte demonstrates close to 100% coulombic efficiency, no dendrite formation, and stable cycling performance for Mg plating/stripping and Mg insertion/de-insertion in the model cathode material Mo₆S₈ (Chevrel phase).
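Coulombic efficiency here is simply the charge recovered on Mg stripping divided by the charge passed during plating, often averaged over cycles. A minimal sketch with made-up charge values:

```python
def coulombic_efficiency(q_strip, q_plate):
    """Per-cycle coulombic efficiency: stripping charge over plating
    charge (same units, e.g. mAh). Values used below are illustrative."""
    return q_strip / q_plate

def mean_ce(cycles):
    """Average CE over a list of (q_strip, q_plate) pairs."""
    return sum(coulombic_efficiency(s, p) for s, p in cycles) / len(cycles)
```

A CE near 1.0 means almost no Mg is lost to side reactions each cycle, which is the figure of merit the ligand-strength study above is optimizing.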

  15. Modeling of plasma in a hybrid electric propulsion for small satellites

    NASA Astrophysics Data System (ADS)

    Jugroot, Manish; Christou, Alex

    2016-09-01

As space flight becomes more available and reliable, space-based technology is allowing smaller and more cost-effective satellites to be produced. Working in large swarms, many small satellites can provide additional capabilities while reducing risk. These satellites require efficient, long-term propulsion for manoeuvres, orbit maintenance and de-orbiting. The high exhaust velocity and propellant efficiency of electric propulsion makes it ideally suited for low-thrust missions. The two dominant types of electric propulsion, namely ion thrusters and Hall thrusters, excel in different mission types. In this work, a novel hybrid electric propulsion design is modelled to enhance understanding of key phenomena and evaluate performance. Specifically, the modelled hybrid thruster seeks to overcome issues with existing ion and Hall thruster designs. Scaling and optimization of the design are discussed, and a conceptual design of a hybrid spacecraft plasma engine is investigated.

  16. Coordination Chemistry in magnesium battery electrolytes: how ligands affect their performance

    DOE PAGES

    Shao, Yuyan; Liu, Tianbiao L.; Li, Guosheng; ...

    2013-11-04

The magnesium battery is potentially a safe, cost-effective, and high-energy-density technology for large-scale energy storage. However, its development has been hindered by limited performance and a lack of fundamental understanding of electrolytes. Here, we present a coordination chemistry study of Mg(BH₄)₂ in ethereal solvents. The O-donor denticity, i.e. the ligand strength of the ethereal solvents, which act as ligands to form solvated Mg complexes, plays a significant role in enhancing the coulombic efficiency of the corresponding solvated Mg complex electrolytes. A new and safer electrolyte is developed based on Mg(BH₄)₂, diglyme and an optimized LiBH₄ additive. The new electrolyte demonstrates 100% coulombic efficiency, no dendrite formation, and stable cycling performance with cathode capacity retention of ~90% over 300 cycles in a prototype magnesium battery.

  17. Comparison of L1000 and Affymetrix Microarray for In Vitro Concentration-Response Gene Expression Profiling (SOT)

    EPA Science Inventory

    Advances in high-throughput screening technologies and in vitro systems have opened doors for cost-efficient evaluation of chemical effects on a diversity of biological endpoints. However, toxicogenomics platforms remain too costly to evaluate large libraries of chemicals in conc...

  18. Visual versus mechanised leucocyte differential counts: costing and evaluation of traditional and Hemalog D methods.

    PubMed

    Hudson, M J; Green, A E

    1980-11-01

    Visual differential counts were examined for efficiency, cost effectiveness, and staff acceptability within our laboratory. A comparison with the Hemalog D system was attempted. The advantages and disadvantages of each system are enumerated and discussed in the context of a large general hospital.

  19. Estimating the Effects of Module Area on Thin-Film Photovoltaic System Costs: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horowitz, Kelsey A; Fu, Ran; Silverman, Timothy J

We investigate the potential effects of module area on the cost and performance of photovoltaic systems. Applying a bottom-up methodology, we analyzed the costs associated with thin-film modules and systems as a function of module area. We calculate a potential for savings of up to $0.10/W and $0.13/W in module manufacturing costs for CdTe and CIGS, respectively, with large-area modules. We also find that an additional $0.04/W savings in balance-of-systems costs may be achieved. Sensitivity of the $/W cost savings to module efficiency, manufacturing yield, and other parameters is presented. Lifetime energy yield must also be maintained to realize reductions in the levelized cost of energy; the effects of module size on energy yield for monolithic thin-film modules are not yet well understood. Finally, we discuss possible non-cost barriers to adoption of large-area modules.
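The link between areal manufacturing cost and $/W is what makes the savings sensitive to module efficiency: cost per watt is areal cost divided by power per unit area at standard test conditions (1000 W/m² irradiance). The $70/m² figure below is an illustrative assumption, not a number from the preprint.

```python
def module_cost_per_watt(areal_cost_usd_per_m2, efficiency):
    """$/W from areal manufacturing cost, assuming 1000 W/m^2
    standard-test-condition irradiance. The areal cost used in the
    test is illustrative."""
    return areal_cost_usd_per_m2 / (efficiency * 1000.0)
```

At a fixed $70/m², dropping module efficiency from 18% to 16% raises cost from about $0.39/W to $0.44/W, which is why the $/W savings quoted above are sensitive to efficiency.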

  20. Economic evaluation of public-private mix for tuberculosis care and control, India. Part II. Cost and cost-effectiveness.

    PubMed

    Pantoja, A; Lönnroth, K; Lal, S S; Chauhan, L S; Uplekar, M; Padma, M R; Unnikrishnan, K P; Rajesh, J; Kumar, P; Sahu, S; Wares, F; Floyd, K

    2009-06-01

Setting: Bangalore City, India. Objective: To assess the cost and cost-effectiveness of public-private mix (PPM) for tuberculosis (TB) care and control when implemented on a large scale. Design: DOTS implementation under the Revised National TB Control Programme (RNTCP) began in 1999, PPM was introduced in mid-2001 and a second phase of intensified PPM began in 2003. Data on the costs and effects of TB treatment from 1999 to 2005 were collected and used to compare the two distinct phases of PPM with a scenario of no PPM. Costs were assessed in 2005 US$ for public and private providers, patients and patient attendants. Sources of data included expenditure records, medical records, interviews with staff and patient surveys. Effectiveness was measured as the number of cases successfully treated. Results: When PPM was implemented, total provider costs increased in proportion to the number of successfully treated TB cases. The average cost per patient treated from the provider perspective when PPM was implemented was stable, at US$69, in the intensified phase compared with US$71 pre-PPM. PPM resulted in the shift of an estimated 7200 patients from non-DOTS to DOTS treatment over 5 years. PPM implementation substantially reduced costs to patients, such that the average societal cost per patient successfully treated fell from US$154 to US$132 in the 4 years following the initiation of PPM. Conclusion: Implementation of PPM on a large scale in an urban setting can be cost-effective, and considerably reduces the financial burden of TB for patients.

  1. Power-to-heat in adiabatic compressed air energy storage power plants for cost reduction and increased flexibility

    NASA Astrophysics Data System (ADS)

    Dreißigacker, Volker

    2018-04-01

The development of new technologies for large-scale electricity storage is a key element of future flexible electricity transmission systems. Electricity storage in adiabatic compressed air energy storage (A-CAES) power plants offers the prospect of making a substantial contribution to this goal. The concept allows efficient, local, zero-emission electricity storage based on compressed air in underground caverns. The compression and expansion of air in turbomachinery help to balance power generation peaks that are not demand-driven on the one hand and consumption-induced load peaks on the other. Further improvements in cost efficiency and flexibility require system modifications. Therefore, a novel concept involving the integration of an electrical heating component is investigated. This modification increases power plant flexibility and decreases component sizes thanks to the high-temperature heat generated, at the price of a lower total round-trip efficiency. For an exemplary A-CAES case, simulation studies of the electrical heating power and thermal energy storage sizes were conducted to quantify the potential cost reduction of the central power plant components and the loss in round-trip efficiency.
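The efficiency penalty of the electric heater follows directly from the round-trip definition: electricity recovered during discharge over all electricity consumed during charging. A minimal sketch with illustrative energies, not values from the study:

```python
def round_trip_efficiency(e_out_mwh, e_compressor_mwh, e_heater_mwh=0.0):
    """A-CAES round-trip efficiency: electricity recovered over all
    electricity charged (compressor plus optional electric heater).
    All energy values below are illustrative assumptions."""
    return e_out_mwh / (e_compressor_mwh + e_heater_mwh)
```

Even if electric heating lifts recoverable output somewhat, the extra charging electricity drags the round-trip efficiency down (e.g. 70/100 = 0.70 without the heater versus 75/120 = 0.625 with it), which is the cost-versus-efficiency trade-off the study quantifies.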

  2. Classification and prediction of toxicity of chemicals using an automated phenotypic profiling of Caenorhabditis elegans.

    PubMed

    Gao, Shan; Chen, Weiyang; Zeng, Yingxin; Jing, Haiming; Zhang, Nan; Flavel, Matthew; Jois, Markandeya; Han, Jing-Dong J; Xian, Bo; Li, Guojun

    2018-04-18

Traditional toxicological studies have relied heavily on various animal models to understand the effect of compounds in a biological context. Considering the great cost, complexity and time involved in experiments using higher-order organisms, researchers have been exploring alternative models that avoid these disadvantages. One such model is the nematode Caenorhabditis elegans, which offers advantages including small size, a short life cycle, a well-defined genome, ease of maintenance and efficient reproduction. As these benefits allow large-scale studies to be initiated with relative ease, the problem of how to efficiently capture, organize and analyze the resulting large volumes of data must be addressed. We have developed a new method for quantitative screening of chemicals using C. elegans. 33 features were identified for each chemical treatment. Compounds with different toxicities were shown to alter the phenotypes of C. elegans in distinct and detectable patterns. We found that phenotypic profiling revealed conserved functions that can classify and predict the toxicity of different chemicals. Our results demonstrate the power of phenotypic profiling of C. elegans under different chemical environments.
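The abstract does not name the classifier, so as one minimal way such profiling-based classification can work, here is a nearest-centroid rule over phenotype feature vectors (the real pipeline uses 33 features; the two-feature data below are made up for illustration).

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(profile, class_profiles):
    """Assign `profile` to the toxicity class whose training-profile
    centroid is nearest in Euclidean distance. A generic stand-in for
    the paper's unspecified classifier."""
    best_label, best_dist = None, math.inf
    for label, vectors in class_profiles.items():
        d = math.dist(profile, centroid(vectors))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Any treatment whose 33-feature phenotype vector sits closest to the "toxic" centroid would be flagged, which captures the idea of distinct, detectable phenotype patterns per toxicity class.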

  3. Extreme-Scale De Novo Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georganas, Evangelos; Hofmeyr, Steven; Egan, Rob

De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and the communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
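Assemblers in the Meraculous/HipMer lineage start from a k-mer table built over all reads; a toy single-node version of that first stage looks like this (HipMer itself shards this table across nodes, which is where the distributed data structures and communication costs above come from).

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all length-k substrings (k-mers) across a collection of
    reads -- the first stage of a de Bruijn-graph assembler. Toy,
    in-memory version; not HipMer's distributed implementation."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts
```

Overlapping reads share k-mers (e.g. "ACGT" and "CGTA" both contain "CGT"), and those shared k-mers are what lets the assembler stitch reads into longer contigs.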

  4. Low-Cost and Large-Area Electronics, Roll-to-Roll Processing and Beyond

    NASA Astrophysics Data System (ADS)

    Wiesenhütter, Katarzyna; Skorupa, Wolfgang

    In the following chapter, the authors conduct a literature survey of current advances in state-of-the-art low-cost, flexible electronics. A new emerging trend in the design of modern semiconductor devices dedicated to scaling-up, rather than reducing, their dimensions is presented. To realize volume manufacturing, alternative semiconductor materials with superior performance, fabricated by innovative processing methods, are essential. This review provides readers with a general overview of the material and technology evolution in the area of macroelectronics. Herein, the term macroelectronics (MEs) refers to electronic systems that can cover a large area of flexible media. In stark contrast to well-established micro- and nano-scale semiconductor devices, where property improvement is associated with downscaling the dimensions of the functional elements, in macroelectronic systems their overall size defines the ultimate performance (Sun and Rogers in Adv. Mater. 19:1897-1916, 2007). The major challenges of large-scale production are discussed. Particular attention has been focused on describing advanced, short-term heat treatment approaches, which offer a range of advantages compared to conventional annealing methods. There is no doubt that large-area, flexible electronic systems constitute an important research topic for the semiconductor industry. The ability to fabricate highly efficient macroelectronics by inexpensive processes will have a significant impact on a range of diverse technology sectors. A new era "towards semiconductor volume manufacturing…" has begun.

  5. Development of large-area monolithically integrated silicon-film photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Rand, J. A.; Cotter, J. E.; Ingram, A. E.; Ruffins, T. R.; Shreve, K. P.; Hall, R. B.; Barnett, A. M.

    1993-06-01

This report describes work to develop Silicon-Film (trademark) Product 3 into a low-cost, stable solar cell for large-scale terrestrial power applications. The Product 3 structure is a thin (less than 100 micron) polycrystalline layer of silicon on a durable, insulating, ceramic substrate. The insulating substrate allows the silicon layer to be isolated and metallized to form a monolithically interconnected array of solar cells. High efficiency is achievable with the use of light trapping and a passivated back surface. The long-term goal for the product is a 1200 sq cm, 18%-efficient, monolithic array. The short-term objectives are to improve material quality and to fabricate 100 sq cm monolithically interconnected solar cell arrays. Low minority-carrier diffusion length in the silicon film and series resistance in the interconnected device structure are presently limiting device performance. Material quality is continually improving through reduced impurity contamination. Metallization schemes, such as a solder-dipped interconnection process, have been developed that will allow low-cost production processing and minimize R_s effects. Test data for a nine-cell device (16 sq cm) indicated a V_oc of 3.72 V. These first-reported monolithically interconnected multicrystalline silicon-on-ceramic devices show low shunt conductance (less than 0.1 mA/sq cm) due to limited conduction through the ceramic and no process-related metallization shunts.

  6. STE thrust chamber technology: Main injector technology program and nozzle Advanced Development Program (ADP)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The purpose of the STME Main Injector Program was to enhance the technology base for the large-scale main injector-combustor system of oxygen-hydrogen booster engines in the areas of combustion efficiency, chamber heating rates, and combustion stability. The initial task of the Main Injector Program, focused on analysis and theoretical predictions using existing models, was complemented by the design, fabrication, and test at MSFC of a subscale calorimetric, 40,000-pound thrust class, axisymmetric thrust chamber operating at approximately 2,250 psi and a 7:1 expansion ratio. Test results were used to further define combustion stability bounds, combustion efficiency, and heating rates using a large injector scale similar to the Pratt & Whitney (P&W) STME main injector design configuration including the tangential entry swirl coaxial injection elements. The subscale combustion data was used to verify and refine analytical modeling simulation and extend the database range to guide the design of the large-scale system main injector. The subscale injector design incorporated fuel and oxidizer flow area control features which could be varied; this allowed testing of several design points so that the STME conditions could be bracketed. The subscale injector design also incorporated high-reliability and low-cost fabrication techniques such as a one-piece electrical discharged machined (EDMed) interpropellant plate. Both subscale and large-scale injectors incorporated outer row injector elements with scarfed tip features to allow evaluation of reduced heating rates to the combustion chamber.

  7. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been of interest to photogrammetric researchers, as it would provide valuable guidance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. A BA model based on virtual control points (VCPs) was constructed to address the rank-deficiency problem caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie-point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
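    Two of the solver ingredients named above, a sparsity-based three-array storage scheme and a conjugate-gradient solve of the high-order equations, can be sketched as follows. This is an illustrative toy, not the authors' ZY-3 pipeline: the three-array storage is shown as standard CSR (values, column indices, row pointers), and the system solved is a small hypothetical symmetric positive-definite normal-equation matrix.

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a sparse matrix held in three arrays (CSR form)."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def conjugate_gradient(data, indices, indptr, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A without forming A densely."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A @ x, with x = 0
    p = list(r)
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Toy SPD system: A = [[4,1,0],[1,3,0],[0,0,2]], b = [1,2,4]
data = [4.0, 1.0, 1.0, 3.0, 2.0]
indices = [0, 1, 0, 1, 2]
indptr = [0, 2, 4, 5]
x = conjugate_gradient(data, indices, indptr, [1.0, 2.0, 4.0])
```

    The appeal for large-scale BA is that both the storage and each iteration's cost scale with the number of nonzeros rather than with the square of the number of unknowns.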

  8. Evaluation of biochar powder on oxygen supply efficiency and global warming potential during mainstream large-scale aerobic composting.

    PubMed

    He, Xueqin; Chen, Longjian; Han, Lujia; Liu, Ning; Cui, Ruxiu; Yin, Hongjie; Huang, Guangqun

    2017-12-01

    This study investigated the effects of biochar powder on oxygen supply efficiency and global warming potential (GWP) in the large-scale aerobic composting pattern that combines cyclical forced turning with aeration at the bottom of composting tanks in China. A 55-day large-scale aerobic composting experiment was conducted in two groups, without and with 10% biochar powder addition (by weight). The results show that biochar powder improves oxygen retention: the proportion of time with O2 above 5% was around 80%. The composting process under this pattern significantly reduced CH4 and N2O emissions compared to static or turning-only styles. Given that the average GWP of the biochar (BC) group was 19.82% lower than that of the control (CK) group, the results suggest that rational addition of biochar powder has the potential to reduce the energy consumption of turning, improve the effectiveness of the oxygen supply, and reduce overall greenhouse gas effects. Copyright © 2017. Published by Elsevier Ltd.
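    Aggregating CH4 and N2O emissions into a single GWP figure, as compared between the two groups above, can be sketched as follows. The 100-year GWP factors used here (CH4 = 28, N2O = 265, per IPCC AR5) are assumptions for illustration; the paper may use different factors.

```python
def gwp_co2eq(co2_kg, ch4_kg, n2o_kg, gwp_ch4=28.0, gwp_n2o=265.0):
    """CO2-equivalent mass of a mixed emission, using assumed 100-year
    global-warming-potential factors (IPCC AR5 values by default)."""
    return co2_kg + gwp_ch4 * ch4_kg + gwp_n2o * n2o_kg

# Hypothetical emission totals (kg) for one composting run:
example = gwp_co2eq(100.0, 1.0, 0.1)
```

    Because the N2O factor is an order of magnitude larger than the CH4 factor, even small absolute reductions in N2O can dominate the GWP comparison between treatments.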

  9. k-neighborhood Decentralization: A Comprehensive Solution to Index the UMLS for Large Scale Knowledge Discovery

    PubMed Central

    Xiang, Yang; Lu, Kewei; James, Stephen L.; Borlawsky, Tara B.; Huang, Kun; Payne, Philip R.O.

    2011-01-01

    The Unified Medical Language System (UMLS) is the largest thesaurus in the biomedical informatics domain. Previous works have shown that knowledge constructs composed of transitively associated UMLS concepts are effective for discovering potentially novel biomedical hypotheses. However, the extremely large size of the UMLS becomes a major challenge for these applications. To address this problem, we designed a k-neighborhood Decentralization Labeling Scheme (kDLS) for the UMLS, and the corresponding method to effectively evaluate the kDLS indexing results. kDLS provides a comprehensive solution for indexing the UMLS for very efficient large scale knowledge discovery. We demonstrated that it is highly effective to use kDLS paths to prioritize disease-gene relations across the whole genome, with extremely high fold-enrichment values. To our knowledge, this is the first indexing scheme capable of supporting efficient large scale knowledge discovery on the UMLS as a whole. Our expectation is that kDLS will become a vital engine for retrieving information and generating hypotheses from the UMLS for future medical informatics applications. PMID:22154838

  10. k-Neighborhood decentralization: a comprehensive solution to index the UMLS for large scale knowledge discovery.

    PubMed

    Xiang, Yang; Lu, Kewei; James, Stephen L; Borlawsky, Tara B; Huang, Kun; Payne, Philip R O

    2012-04-01

    The Unified Medical Language System (UMLS) is the largest thesaurus in the biomedical informatics domain. Previous works have shown that knowledge constructs composed of transitively associated UMLS concepts are effective for discovering potentially novel biomedical hypotheses. However, the extremely large size of the UMLS becomes a major challenge for these applications. To address this problem, we designed a k-neighborhood Decentralization Labeling Scheme (kDLS) for the UMLS, and the corresponding method to effectively evaluate the kDLS indexing results. kDLS provides a comprehensive solution for indexing the UMLS for very efficient large scale knowledge discovery. We demonstrated that it is highly effective to use kDLS paths to prioritize disease-gene relations across the whole genome, with extremely high fold-enrichment values. To our knowledge, this is the first indexing scheme capable of supporting efficient large scale knowledge discovery on the UMLS as a whole. Our expectation is that kDLS will become a vital engine for retrieving information and generating hypotheses from the UMLS for future medical informatics applications. Copyright © 2011 Elsevier Inc. All rights reserved.
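    The core indexing idea, precomputing each concept's k-hop neighborhood so that paths between concepts can be found by intersecting small labels rather than searching the whole graph, can be illustrated with a toy sketch. This is a simplified illustration of the k-neighborhood principle, not the published kDLS algorithm.

```python
from collections import deque

def k_neighborhood_labels(adj, k):
    """For each node, record BFS distances to every node within k hops."""
    labels = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            if dist[u] == k:
                continue  # do not expand beyond the k-hop frontier
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        labels[src] = dist
    return labels

def connected_within_2k(labels, a, b):
    """Shortest path length up to 2k hops, found by intersecting the two
    precomputed labels instead of traversing the full graph."""
    common = set(labels[a]) & set(labels[b])
    if not common:
        return None
    return min(labels[a][w] + labels[b][w] for w in common)

# Toy path graph 0-1-2-3-4, indexed with k = 2:
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
labels = k_neighborhood_labels(path, 2)
```

    Each label has size bounded by the k-hop neighborhood, so queries touch only two small dictionaries, which is the property that makes this style of decentralized index attractive on a graph as large as the UMLS.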

  11. Concentrating light in Cu(In,Ga)Se2 solar cells

    NASA Astrophysics Data System (ADS)

    Schmid, M.; Yin, G.; Song, M.; Duan, S.; Heidmann, B.; Sancho-Martinez, D.; Kämmer, S.; Köhler, T.; Manley, P.; Lux-Steiner, M. Ch.

    2016-09-01

    Light concentration has proven beneficial for solar cells, most notably for highly efficient but expensive absorber materials using high concentrations and large-scale optics. Here we investigate light concentration for cost-efficient thin-film solar cells with nano- or microtextured absorbers. Our absorber material of choice is Cu(In,Ga)Se2 (CIGSe), which has a proven stabilized record efficiency of 22.6% and which, despite being a polycrystalline thin-film material, is very tolerant to environmental influences. Taking a nanoscale approach, we concentrate light in the CIGSe absorber layer by integrating photonic nanostructures made from dielectric materials. The dielectric nanostructures give rise to resonant modes and field localization in their vicinity. Thus, when they are inserted inside or adjacent to the absorber layer, absorption and efficiency enhancements are observed. In contrast to this internal absorption enhancement, external enhancement is exploited in the microscale approach: mm-sized lenses can be used to concentrate light onto CIGSe solar cells with lateral dimensions reduced down to the micrometer range. These micro solar cells come with the benefit of improved heat dissipation compared to large-scale concentrators and promise compact high-efficiency devices. Both approaches to light concentration allow for a reduction in material consumption by restricting the absorber dimension either vertically (ultra-thin absorbers for dielectric nanostructures) or horizontally (micro absorbers for concentrating lenses) and have significant potential for efficiency enhancement.

  12. Concentrating light in Cu(In,Ga)Se2 solar cells

    NASA Astrophysics Data System (ADS)

    Schmid, Martina; Yin, Guanchao; Song, Min; Duan, Shengkai; Heidmann, Berit; Sancho-Martinez, Diego; Kämmer, Steven; Köhler, Tristan; Manley, Phillip; Lux-Steiner, Martha Ch.

    2017-01-01

    Light concentration has proven beneficial for solar cells, most notably for highly efficient but expensive absorber materials using high concentrations and large-scale optics. Here, we investigate light concentration for cost-efficient thin-film solar cells that show nano- or microtextured absorbers. Our absorber material of choice is Cu(In,Ga)Se2 (CIGSe), which has a proven stabilized record efficiency of 22.6% and which, despite being a polycrystalline thin-film material, is very tolerant to environmental influences. Taking a nanoscale approach, we concentrate light in the CIGSe absorber layer by integrating photonic nanostructures made from dielectric materials. The dielectric nanostructures give rise to resonant modes and field localization in their vicinity. Thus, when inserted inside or adjacent to the absorber layer, absorption and efficiency enhancements are observed. In contrast to this internal absorption enhancement, external enhancement is exploited in the microscale approach: mm-sized lenses can be used to concentrate light onto CIGSe solar cells with lateral dimensions reduced down to the micrometer range. These micro solar cells come with the benefit of improved heat dissipation compared with large-scale concentrators and promise compact high-efficiency devices. Both approaches to light concentration allow for a reduction in material consumption by restricting the absorber dimension either vertically (ultrathin absorbers for dielectric nanostructures) or horizontally (microabsorbers for concentrating lenses) and have significant potential for efficiency enhancement.

  13. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications.

    PubMed

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-05-12

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N²) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array.

  14. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications

    PubMed Central

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-01-01

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N²) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array. PMID:28498329
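    The idea of closed-form sensor placement yielding far more DOF than physical sensors can be illustrated with the classic two-level nested-array construction: an inner unit-spaced ULA plus an outer ULA at (N1+1) spacing, whose difference coarray is a filled (hole-free) ULA. This is a sketch of the nested-array principle the paper builds on, not the paper's exact MIMO geometry.

```python
def nested_array_positions(n1, n2):
    """Two-level nested array: n1 inner sensors at unit spacing, n2 outer
    sensors at (n1 + 1) spacing. Positions are in units of the base spacing."""
    inner = list(range(1, n1 + 1))
    outer = [m * (n1 + 1) for m in range(1, n2 + 1)]
    return inner + outer

def difference_coarray(positions):
    """All pairwise differences (lags); virtual sensors of the coarray."""
    return sorted({p - q for p in positions for q in positions})

pos = nested_array_positions(3, 3)   # 6 physical sensors
lags = difference_coarray(pos)       # 2*n2*(n1+1) - 1 = 23 consecutive lags
```

    With 6 physical sensors the coarray spans 23 consecutive lags, so a subspace method operating on the coarray can resolve far more sources than sensors; this is the O(N²)-DOF-from-O(N)-sensors behavior the abstract refers to.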

  15. Incorporating economies of scale in the cost estimation in economic evaluation of PCV and HPV vaccination programmes in the Philippines: a game changer?

    PubMed

    Suwanthawornkul, Thanthima; Praditsitthikorn, Naiyana; Kulpeng, Wantanee; Haasis, Manuel Alexander; Guerrero, Anna Melissa; Teerawattananon, Yot

    2018-01-01

    Many economic evaluations ignore economies of scale in their cost estimation, meaning that cost parameters are assumed to have a linear relationship with the level of production. Economies of scale refers to the situation in which the average total cost of producing a product decreases with increasing volume, as variable costs fall through more efficient operation. This study investigates the significance of applying the economies of scale concept, i.e., the saving in costs gained by an increased level of production, in the economic evaluation of pneumococcal conjugate vaccine (PCV) and human papillomavirus (HPV) vaccinations. The fixed and variable costs of providing partial (20% coverage) and universal (100% coverage) vaccination programs in the Philippines were estimated using various methods, including a questionnaire survey, focus-group discussions, and analysis of secondary data. Costing parameters were utilised as inputs for the two economic evaluation models for PCV and HPV. Incremental cost-effectiveness ratios (ICERs) and 5-year budget impacts with and without applying economies of scale to the costing parameters for partial and universal coverage were compared in order to determine the effect of these different costing approaches. The program costs of partial coverage for the two immunisation programs were not very different when applying and not applying the economies of scale concept. Nevertheless, the program costs for universal coverage were 0.26 and 0.32 times lower when applying economies of scale for the pneumococcal and human papillomavirus vaccinations, respectively. ICERs varied by up to 98% for pneumococcal vaccinations, whereas the change in ICERs for the human papillomavirus vaccination depended on both the costs of cervical cancer screening and the vaccination program.
This results in a significant difference in the 5-year budget impact: reductions of 30% and 40% for the pneumococcal and human papillomavirus vaccination programs, respectively. This study demonstrated the feasibility and importance of applying economies of scale in cost estimation for economic evaluation, which can lead to different conclusions about the value for money of interventions, particularly population-wide interventions such as vaccination programs. The economies of scale approach to costing is recommended for inclusion in methodological guidelines for conducting economic evaluations.
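    The contrast between linear costing and economies of scale can be sketched with a hypothetical cost function in which total variable cost grows sublinearly with volume. All numbers below are illustrative assumptions, not the study's Philippine cost data.

```python
def average_total_cost(volume, fixed_cost, unit_cost, scale_exponent=0.9):
    """Hypothetical economies-of-scale model: total variable cost grows
    sublinearly with volume (exponent < 1), so average total cost falls
    as program coverage expands. A linear costing assumption corresponds
    to scale_exponent = 1."""
    variable_cost = unit_cost * volume ** scale_exponent
    return (fixed_cost + variable_cost) / volume

atc_partial = average_total_cost(200_000, 1e6, 5.0)      # ~20% coverage
atc_universal = average_total_cost(1_000_000, 1e6, 5.0)  # 100% coverage
```

    Under linear costing the per-dose cost at universal coverage would be overstated, which is exactly how ignoring economies of scale inflates ICERs and budget-impact estimates for population-wide programs.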

  16. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local-feature-based representations. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel-based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel-based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel-based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including the 15-Scenes, Caltech101/256, and PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  17. Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets.

    PubMed

    Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas

    2018-02-01

    There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce the time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated the costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, it was less than 2 days. For small-scale surveys, we found that the upfront cost of purchasing electronic tablets was the primary expense and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For the long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
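    The fixed-versus-marginal cost trade-off described above can be sketched with a simple break-even calculation. The dollar figures are hypothetical, not the study's measured costs.

```python
def total_cost(n_surveys, fixed, per_survey):
    """Total cost = upfront (fixed) cost + marginal cost per survey."""
    return fixed + per_survey * n_surveys

def break_even_surveys(fixed_hi, per_lo, fixed_lo, per_hi):
    """Survey volume above which the high-fixed/low-marginal option
    (e.g. tablets) becomes cheaper than the low-fixed/high-marginal
    option (e.g. paper plus manual data entry)."""
    return (fixed_hi - fixed_lo) / (per_hi - per_lo)

# Hypothetical: tablets cost $5,000 upfront with $0.50/survey marginal cost;
# paper has negligible upfront cost but $2.00/survey for data entry.
crossover = break_even_surveys(5000.0, 0.5, 0.0, 2.0)
```

    Below the crossover volume, paper is cheaper; above it, tablets win, which mirrors the paper's finding that electronic capture pays off only at large-scale surveillance volumes.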

  18. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  19. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  20. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-04

    Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
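    The cost argument, that the same domain is represented by far fewer unknowns on a coarse mesh carrying higher-order basis functions, can be sketched with a 1-D degree-of-freedom count. The element counts and polynomial orders below are illustrative, not the paper's test configurations.

```python
def dof_1d(n_elements, polynomial_order):
    """1-D spectral/finite-element DOF count: each element carries
    (order + 1) Gauss-Lobatto-Legendre nodes, with shared element
    endpoints counted once."""
    return n_elements * polynomial_order + 1

fine_dof = dof_1d(10_000, 1)   # fine mesh resolving heterogeneity directly
coarse_dof = dof_1d(100, 4)    # coarse mesh with 4th-order multiscale basis
```

    The multiscale basis functions absorb the fine-scale medium variation during a one-time local precomputation, so the time-stepping is done on the much smaller coarse system; the DOF ratio above (roughly 25x in this toy count) is the source of the speedup, and the gap widens further in 2-D and 3-D.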

  1. An experimental strategy validated to design cost-effective culture media based on response surface methodology.

    PubMed

    Navarrete-Bolaños, J L; Téllez-Martínez, M G; Miranda-López, R; Jiménez-Islas, H

    2017-07-03

    For any fermentation process, the production cost depends on several factors, such as the genetics of the microorganism, the process conditions, and the culture medium composition. In this work, a guideline for the design of cost-efficient culture media using a sequential approach based on response surface methodology is described. The procedure was applied to analyze and optimize a registered-trademark culture medium and a base culture medium obtained from a screening analysis of different culture media reported in the literature for growing the same strain. During the experiments, the procedure quantitatively identified an appropriate array of micronutrients to obtain a significant yield and found a minimum number of culture medium ingredients without limiting the process efficiency. The resultant culture medium showed an efficiency that compares favorably with the registered-trademark medium at a 95% lower cost, and reduced the number of ingredients in the base culture medium by 60% without limiting the process efficiency. These results demonstrated that, aside from satisfying the qualitative requirements, an optimum quantity of each constituent is needed to obtain a cost-effective culture medium. Studying process variables for the optimized culture medium and scaling up production at the optimal values are desirable next steps.
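    A minimal single-factor illustration of the response-surface step, fitting a second-order model by least squares and locating its stationary point, is sketched below. The nutrient concentrations and yields are hypothetical, not the study's data, and a real RSM design would fit several factors plus interaction terms.

```python
import numpy as np

# Hypothetical single-factor experiment: biomass yield (g/L) versus the
# concentration (g/L) of one medium ingredient.
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
yields = np.array([2.1, 3.4, 4.1, 4.2, 3.8, 2.9])

# Fit the second-order response surface y = b2*x^2 + b1*x + b0 by least squares.
b2, b1, b0 = np.polyfit(conc, yields, 2)

# Stationary point of the fitted parabola: the predicted optimum concentration.
optimum = -b1 / (2.0 * b2)
```

    A concave fit (b2 < 0) indicates the yield has an interior maximum, so adding more of the ingredient beyond the optimum only adds cost, which is the quantitative insight behind "an optimum quantity of each constituent is needed."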

  2. Levodopa modulates small-world architecture of functional brain networks in Parkinson's disease.

    PubMed

    Berman, Brian D; Smucny, Jason; Wylie, Korey P; Shelton, Erika; Kronberg, Eugene; Leehey, Maureen; Tregellas, Jason R

    2016-11-01

    Parkinson's disease (PD) is associated with disrupted connectivity to a large number of distributed brain regions. How the disease alters the functional topological organization of the brain, however, remains poorly understood. Furthermore, how levodopa modulates network topology in PD is largely unknown. The objective of this study was to use resting-state functional MRI and graph theory to determine how small-world architecture is altered in PD and affected by levodopa administration. Twenty-one PD patients and 20 controls underwent functional MRI scanning. PD patients were scanned off medication and 1 hour after 200 mg levodopa. Imaging data were analyzed using 226 nodes comprising 10 intrinsic brain networks. Correlation matrices were generated for each subject and converted into cost-thresholded, binarized adjacency matrices. Cost-integrated whole-brain global and local efficiencies were compared across groups and tested for relationships with disease duration and severity. Data from 2 patients and 4 controls were excluded because of excess motion. Patients off medication showed no significant changes in global efficiency and overall local efficiency, but in a subnetwork analysis did show increased local efficiency in executive (P = 0.006) and salience (P = 0.018) networks. Levodopa significantly decreased local efficiency (P = 0.039) in patients except within the subcortical network, in which it significantly increased local efficiency (P = 0.007). Levodopa modulates global and local efficiency measures of small-world topology in PD, suggesting that degeneration of nigrostriatal neurons in PD may be associated with large-scale network reorganization and that levodopa tends to normalize the disrupted network topology. © 2016 International Parkinson and Movement Disorder Society.
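    Global efficiency, the mean inverse shortest-path length over node pairs of a binarized adjacency structure, can be computed with a plain BFS sketch. The graphs below are toy examples, not the 226-node brain networks from the study.

```python
from collections import deque

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs, with 1/inf = 0 for
    disconnected pairs; distances come from BFS on the unweighted
    (binarized) adjacency structure."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t == src:
                continue
            pairs += 1
            if t in dist:
                total += 1.0 / dist[t]
    return total / pairs

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # fully connected: efficiency 1
chain = {0: [1], 1: [0, 2], 2: [1]}           # path graph: efficiency 5/6
```

    In the study's pipeline the same quantity is computed at each cost threshold (each edge density) and then integrated across costs; local efficiency is the analogous quantity computed within each node's immediate neighborhood subgraph.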

  3. Green infrastructure and its catchment-scale effects: an emerging science

    PubMed Central

    Golden, Heather E.; Hoghooghi, Nahal

    2018-01-01

    Urbanizing environments alter the hydrological cycle by redirecting stream networks for stormwater and wastewater transmission and increasing impermeable surfaces. These changes thereby accelerate the runoff of water and its constituents following precipitation events, alter evapotranspiration processes, and indirectly modify surface precipitation patterns. Green infrastructure, or low-impact development (LID), can be used as a standalone practice or in concert with gray infrastructure (traditional stormwater management approaches) for cost-efficient, decentralized stormwater management. The growth in LID over the past several decades has resulted in a concomitant increase in research evaluating LID efficiency and effectiveness, but mostly at localized scales. There is a clear research need to quantify how LID practices affect water quantity (i.e., runoff and discharge) and quality at the scale of catchments. In this overview, we present the state of the science of LID research at the local scale, considerations for scaling this research to catchments, recent advances and findings in scaling the effects of LID practices on water quality and quantity at catchment scales, and the use of models as novel tools for these scaling efforts. PMID:29682288

  4. Green infrastructure and its catchment-scale effects: an emerging science.

    PubMed

    Golden, Heather E; Hoghooghi, Nahal

    2018-01-01

    Urbanizing environments alter the hydrological cycle by redirecting stream networks for stormwater and wastewater transmission and increasing impermeable surfaces. These changes thereby accelerate the runoff of water and its constituents following precipitation events, alter evapotranspiration processes, and indirectly modify surface precipitation patterns. Green infrastructure, or low-impact development (LID), can be used as a standalone practice or in concert with gray infrastructure (traditional stormwater management approaches) for cost-efficient, decentralized stormwater management. The growth in LID over the past several decades has resulted in a concomitant increase in research evaluating LID efficiency and effectiveness, but mostly at localized scales. There is a clear research need to quantify how LID practices affect water quantity (i.e., runoff and discharge) and quality at the scale of catchments. In this overview, we present the state of the science of LID research at the local scale, considerations for scaling this research to catchments, recent advances and findings in scaling the effects of LID practices on water quality and quantity at catchment scales, and the use of models as novel tools for these scaling efforts.

  5. Non-Epitaxial Thin-Film Indium Phosphide Photovoltaics: Growth, Devices, and Cost Analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Maxwell S.

    In recent years, the photovoltaic market has grown significantly as module prices have continued to come down. Continued growth of the field requires higher-efficiency modules at lower manufacturing costs. In particular, higher efficiencies reduce the area needed for a given power output, thus reducing the downstream balance-of-systems costs that scale with area, such as mounting frames, installation, and soft costs. Cells and modules made from III-V materials have the highest demonstrated efficiencies to date but are not yet at the cost level of other thin-film technologies, which has limited their large-scale deployment. There is a need for new materials growth, processing, and fabrication techniques to address this major shortcoming of III-V semiconductors. Chapters 2 and 3 explore growth of InP on non-epitaxial Mo substrates by MOCVD and CSS, respectively. The results from these studies demonstrate that InP optoelectronic quality is maintained even by growth on non-epitaxial metal substrates. Structural characterization by SEM and XRD shows stoichiometric InP can be grown in complete thin films on Mo. Photoluminescence measurements show peak energies and widths to be similar to those of reference wafers of similar doping concentrations. In chapter 4 the TF-VLS growth technique is introduced and cells fabricated from InP produced by this technique are characterized. The TF-VLS method results in lateral grain sizes of >500 μm and exhibits superior optoelectronic quality. First-generation devices using an n-TiO2 window layer along with p-type TF-VLS grown InP have reached ~12.1% power conversion efficiency under 1-sun illumination with a VOC of 692 mV, JSC of 26.9 mA/cm2, and FF of 65%. The cells are fabricated using all non-epitaxial processing. Optical measurements show the InP in these cells has the potential to support a higher VOC of ~795 mV, which can be achieved by improved device design. 
Chapter 5 describes a cost analysis of a manufacturing process using an InP cell as the active layer in a monolithically integrated module. Importantly, TF-VLS growth avoids the key cost burdens of traditional growth: the epitaxial wafer substrate, low utilization efficiency of expensive metalorganic precursors, and high capital depreciation costs due to low throughput. Production costs are projected to be $0.76/W(DC) for the benchmark case of 12% efficient modules and would decrease to $0.40/W(DC) for the long-term potential case of 24% efficient modules.
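The cost lever behind these projections can be sketched numerically. The split below between area-scaled costs (an areal cost of $86.4/m²) and power-scaled costs (a fixed $0.04/W) is purely an illustrative assumption, not the thesis's actual cost model; it is chosen to show why the projected $/W roughly halves when module efficiency doubles.

```python
# Sketch: why module efficiency drives $/W (illustrative numbers, not the
# thesis's cost model). Costs that scale with module *area* are diluted by
# higher efficiency; costs that scale with *power* are not.
def cost_per_watt(areal_cost_usd_per_m2, efficiency, power_scaled_cost_usd_per_w=0.04):
    """Module cost in $/W(DC) under 1-sun (1000 W/m^2) illumination."""
    watts_per_m2 = 1000.0 * efficiency
    return areal_cost_usd_per_m2 / watts_per_m2 + power_scaled_cost_usd_per_w

base = cost_per_watt(86.4, 0.12)   # 12% efficient module
best = cost_per_watt(86.4, 0.24)   # 24% efficient module
```

With these assumed inputs the sketch reproduces the quoted benchmark ($0.76/W at 12%) and long-term ($0.40/W at 24%) figures.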

  6. Value of Information Analysis of Multiparameter Tests for Chemotherapy in Early Breast Cancer: The OPTIMA Prelim Trial.

    PubMed

    Hall, Peter S; Smith, Alison; Hulme, Claire; Vargas-Palacios, Armando; Makris, Andreas; Hughes-Davies, Luke; Dunn, Janet A; Bartlett, John M S; Cameron, David A; Marshall, Andrea; Campbell, Amy; Macpherson, Iain R; Dan Rea; Francis, Adele; Earl, Helena; Morgan, Adrienne; Stein, Robert C; McCabe, Christopher

    2017-12-01

Precision medicine is heralded as offering more effective treatments to smaller targeted patient populations. In breast cancer, adjuvant chemotherapy is standard for patients considered as high-risk after surgery. Molecular tests may identify patients who can safely avoid chemotherapy. To use economic analysis before a large-scale clinical trial of molecular testing to confirm the value of the trial and help prioritize between candidate tests as randomized comparators. Women with surgically treated breast cancer (estrogen receptor-positive and lymph node-positive or tumor size ≥30 mm) were randomized to standard care (chemotherapy for all) or test-directed care using Oncotype DX™. Additional testing was undertaken using alternative tests: MammaPrint™, PAM-50 (Prosigna™), MammaTyper™, IHC4, and IHC4-AQUA™ (NexCourse Breast™). A probabilistic decision model assessed the cost-effectiveness of all tests from a UK perspective. Value of information analysis determined the most efficient publicly funded ongoing trial design in the United Kingdom. There was an 86% probability of molecular testing being cost-effective, with most tests producing cost savings (range -£1892 to £195) and quality-adjusted life-year gains (range 0.17-0.20). There were only small differences in costs and quality-adjusted life-years between tests. Uncertainty was driven by long-term outcomes. Value of information demonstrated value of further research into all tests, with Prosigna currently being the highest priority for further research. Molecular tests are likely to be cost-effective, but an optimal test is yet to be identified. Health economics modeling to inform the design of a randomized controlled trial looking at diagnostic technology has been demonstrated to be feasible as a method for improving research efficiency. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

    Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
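The decision rule that a cost-effectiveness evaluation returns (an optimal intervention per willingness-to-pay interval) can be sketched with net monetary benefit, NMB = WTP × effectiveness − cost. The interventions and numbers below are hypothetical, chosen only to illustrate how the optimum switches at cost-effectiveness thresholds.

```python
# Sketch of the CEA decision rule: for each willingness-to-pay (WTP) value,
# the optimal intervention maximizes net monetary benefit
# NMB = WTP * effectiveness - cost. Interventions and numbers are hypothetical.
interventions = {
    "no treatment": (0.0, 0.0),       # (cost, effectiveness in QALYs)
    "drug A":       (5000.0, 0.6),
    "drug B":       (14000.0, 0.9),
}

def optimal(wtp):
    """Intervention with the highest net monetary benefit at this WTP."""
    return max(interventions,
               key=lambda i: wtp * interventions[i][1] - interventions[i][0])
```

Sweeping WTP upward traces out the intervals the abstract describes: here "no treatment" is optimal below about £8,333/QALY, "drug A" up to £30,000/QALY, and "drug B" above that.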

  8. Very large scale heterogeneous integration (VLSHI) and wafer-level vacuum packaging for infrared bolometer focal plane arrays

    NASA Astrophysics Data System (ADS)

    Forsberg, Fredrik; Roxhed, Niclas; Fischer, Andreas C.; Samel, Björn; Ericsson, Per; Hoivik, Nils; Lapadatu, Adriana; Bring, Martin; Kittilsland, Gjermund; Stemme, Göran; Niklaus, Frank

    2013-09-01

Imaging in the long wavelength infrared (LWIR) range from 8 to 14 μm is an extremely useful tool for non-contact measurement and imaging of temperature in many industrial, automotive and security applications. However, the cost of the infrared (IR) imaging components has to be significantly reduced to make IR imaging a viable technology for many cost-sensitive applications. This paper demonstrates new and improved fabrication and packaging technologies for next-generation IR imaging detectors based on uncooled IR bolometer focal plane arrays. The proposed technologies include very large scale heterogeneous integration for combining high-performance SiGe quantum-well bolometers with electronic integrated read-out circuits and CMOS compatible wafer-level vacuum packaging. The fabrication and characterization of bolometers with a pitch of 25 μm × 25 μm that are arranged on read-out wafers in arrays with 320 × 240 pixels are presented. The bolometers contain a multi-layer quantum well SiGe thermistor with a temperature coefficient of resistance of -3.0%/K. The proposed CMOS compatible wafer-level vacuum packaging technology uses Cu-Sn solid-liquid interdiffusion (SLID) bonding. The presented technologies are suitable for implementation in cost-efficient fabless business models with the potential to bring about the cost reduction needed to enable low-cost IR imaging products for industrial, security and automotive applications.

  9. Are larger dental practices more efficient? An analysis of dental services production.

    PubMed Central

    Lipscomb, J; Douglass, C W

    1986-01-01

    Whether cost-efficiency in dental services production increases with firm size is investigated through application of an activity analysis production function methodology to data from a national survey of dental practices. Under this approach, service delivery in a dental practice is modeled as a linear programming problem that acknowledges distinct input-output relationships for each service. These service-specific relationships are then combined to yield projections of overall dental practice productivity, subject to technical and organizational constraints. The activity analysis reported here represents arguably the most detailed evaluation yet of the relationship between dental practice size and cost-efficiency, controlling for such confounding factors as fee and service-mix differences across firms. We conclude that cost-efficiency does increase with practice size, over the range from solo to four-dentist practices. Largely because of data limitations, we were unable to test satisfactorily for scale economies in practices with five or more dentists. Within their limits, our findings are generally consistent with results from the neoclassical production function literature. From the standpoint of consumer welfare, the critical question raised (but not resolved) here is whether these apparent production efficiencies of group practice are ultimately translated by the market into lower fees, shorter queues, or other nonprice benefits. PMID:3102404
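The activity-analysis approach above models service delivery as a linear program. The toy below is a hypothetical two-service practice (exams and fillings, with assumed hours and fees, none taken from the paper); since a two-variable LP attains its optimum at a vertex of the feasible region, the vertices are simply enumerated.

```python
# Toy activity-analysis LP in the spirit of the paper (hypothetical numbers):
# a practice produces exams (x) and fillings (y) per week, constrained by
# dentist hours and chair hours; revenue is maximized. For two activities
# the LP optimum lies at a vertex, so we enumerate feasible vertices.
from itertools import combinations

# Constraints a*x + b*y <= c.
cons = [(0.25, 0.75, 40.0),   # dentist hours/week
        (0.50, 1.00, 70.0),   # chair hours/week
        (-1.0, 0.0, 0.0),     # x >= 0
        (0.0, -1.0, 0.0)]     # y >= 0
revenue = (30.0, 80.0)        # assumed $/exam, $/filling

def vertices(cons):
    """Yield feasible intersection points of constraint pairs."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield x, y

best = max(vertices(cons), key=lambda p: revenue[0] * p[0] + revenue[1] * p[1])
```

The paper's service-specific input-output relationships play the role of the constraint rows here; scaling the constraint capacities with practice size is what lets the method compare cost-efficiency across firm sizes.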

  10. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
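The random finite difference (RFD) idea used above to capture the stochastic drift can be illustrated in one dimension. The toy mobility below is an assumption for illustration (this is not the paper's rigid multiblob implementation): for mobility M(x), the drift kT dM/dx is estimated without derivatives as E[(M(x + δw) − M(x)) w / δ] with w ~ N(0,1) and small δ.

```python
# Sketch of the random finite difference (RFD) drift estimator in 1-D
# (illustrative toy, not the paper's rigid multiblob code).
import math, random

kT = 1.0
M = lambda x: 1.0 + 0.5 * math.sin(x)   # toy configuration-dependent mobility
dM = lambda x: 0.5 * math.cos(x)        # analytic derivative, for comparison

def rfd_drift(x, delta=1e-4, samples=20000, seed=1):
    """Monte Carlo estimate of kT * dM/dx via random finite differences."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        w = rng.gauss(0.0, 1.0)
        total += (M(x + delta * w) - M(x)) / delta * w
    return kT * total / samples

est = rfd_drift(0.3)
exact = kT * dM(0.3)
```

Only evaluations of M itself are needed, which is the point the abstract makes: no resistance solves and no action of the inverse square root of the mobility.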

  11. The large-scale removal of mammalian invasive alien species in Northern Europe.

    PubMed

    Robertson, Peter A; Adriaens, Tim; Lambin, Xavier; Mill, Aileen; Roy, Sugoto; Shuttleworth, Craig M; Sutton-Croft, Mike

    2017-02-01

Numerous examples exist of successful mammalian invasive alien species (IAS) eradications from small islands (<10 km²), but few from more extensive areas. We review 15 large-scale removals (mean area 2627 km²) from Northern Europe since 1900, including edible dormouse, muskrat, coypu, Himalayan porcupine, Pallas' and grey squirrels and American mink, each primarily based on daily checking of static traps. Objectives included true eradication or complete removal to a buffer zone, as distinct from other programmes that involved local control to limit damage or spread. Twelve eradication/removal programmes (80%) were successful. Cost increased with and was best predicted by area, while the cost per unit area decreased; the number of individual animals removed did not add significantly to the model. Doubling the area controlled reduced cost per unit area by 10%, but there was no evidence that cost effectiveness had increased through time. Compared with small islands, larger-scale programmes followed similar patterns of effort in relation to area. However, they brought challenges when defining boundaries and consequent uncertainties around costs, the definition of their objectives, confirmation of success and different considerations for managing recolonisation. Novel technologies or increased use of volunteers may reduce costs. Rapid response to new incursions is recommended as best practice rather than large-scale control to reduce the environmental, financial and welfare costs. © 2016 Crown copyright. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
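The reported scaling (doubling the controlled area cuts cost per unit area by about 10%) implies a power law, which can be written down directly. The code below is a minimal sketch of that implication, not a re-analysis of the paper's data.

```python
# Sketch of the reported scaling: per-area cost ~ area**b with 2**b = 0.9,
# so b = log2(0.9) ~ -0.152 and total cost grows as area**(1 + b) ~ area**0.85.
import math

b = math.log(0.9, 2)                   # exponent implied by the 10% reduction
per_area = lambda area: area ** b      # relative cost per km^2
total = lambda area: area * per_area(area)   # relative total cost
```

So total cost still rises with area (consistent with "cost increased with area") even as each km² gets cheaper, which is why area, not animal count, was the best cost predictor.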

  12. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    PubMed Central

    Jevtić, Aleksandar; Gutiérrez, Álvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce. PMID:22346677
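The allocation step that the control parameters govern can be sketched as follows. This uses the commonly cited DBA form, in which a robot selects target i with probability proportional to q_i^α / d_i^β; the qualities q, distances d, and parameter values below are hypothetical, and α, β are the parameters the paper tunes with a genetic algorithm.

```python
# Sketch of DBA-style probabilistic target allocation (hypothetical inputs):
# selection probability p_i ~ quality_i**alpha / distance_i**beta.
def allocation_probs(qualities, distances, alpha, beta):
    weights = [q ** alpha / d ** beta for q, d in zip(qualities, distances)]
    s = sum(weights)
    return [w / s for w in weights]

# Three targets: the second is moderately good but very close.
p = allocation_probs([0.5, 0.3, 0.2], [2.0, 1.0, 4.0], alpha=1.0, beta=1.0)
```

Raising β shifts the swarm toward nearby targets (cheaper deployment), while raising α concentrates it on high-quality targets; the GA searches this trade-off for the lowest average travel distance.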

  13. Catalytic, conductive, and transparent platinum nanofiber webs for FTO-free dye-sensitized solar cells.

    PubMed

    Kim, Jongwook; Kang, Jonghyun; Jeong, Uiyoung; Kim, Heesuk; Lee, Hyunjung

    2013-04-24

We report a multifunctional platinum nanofiber (PtNF) web that can act as a catalyst layer in a dye-sensitized solar cell (DSSC) while simultaneously functioning as a transparent counter electrode (CE), i.e., without the presence of an indium-doped tin oxide (ITO) or fluorine-doped tin oxide (FTO) glass. This PtNF web can be easily produced by electrospinning, which is highly cost-effective and suitable for large-area industrial-scale production. Electrospun PtNFs are straight and have a length of a few micrometers, with a common diameter of 40-70 nm. Each nanofiber is composed of compact, crystalline Pt grains that are well-fused and highly interconnected, providing an efficient conductive network for free electron transport and a large surface area for electrocatalytic behavior. A PtNF web serves as the counter electrode in a DSSC and delivers a power conversion efficiency of up to 6.0%, reaching 83% of that of a conventional DSSC using a Pt-coated FTO glass counter electrode. Newly designed DSSCs containing PtNF webs display highly stable photoelectric conversion efficiencies and excellent catalytic, conductive, and transparent properties, as well as long-term stability. Also, while the DSSC function is retained, the fabrication cost is reduced by eliminating the transparent conducting layer on the counter electrode. The presented method of fabricating DSSCs based on a PtNF web can be extended to other electrocatalytic optoelectronic devices that combine superior catalytic activity with high conductivity and transparency.

  14. Too Big, Too Small, or Just Right? Cost-Efficiency of Environmental Inspection Services in Connecticut.

    PubMed

    Cohen, Jeffrey P; Checko, Patricia J

    2017-12-01

    To assess optimal activity size/mix of Connecticut local public health jurisdictions, through estimating economies of scale/scope/specialization for environmental inspections/services. Connecticut's 74 local health jurisdictions (LHJs) must provide environmental health services, but their efficiency or reasons for wide cost variation are unknown. The public health system is decentralized, with variation in organizational structure/size. We develop/compile a longitudinal dataset covering all 74 LHJs, annually from 2005 to 2012. We estimate a public health services/inspections cost function, where inputs are translated into outputs. We consider separate estimates of economies of scale/scope/specialization for four mandated inspection types. We obtain data from Connecticut Department of Public Health databases, reports, and other publicly available sources. There has been no known previous utilization of this combined dataset. On average, regional districts, municipal departments, and part-time LHJs are performing fewer than the efficient number of inspections. The full-time municipal departments and regional districts are more efficient but still not at the minimum efficient scale. The regional districts' elasticities of scale are larger, implying they are more efficient than municipal health departments. Local health jurisdictions may enhance efficiency by increasing inspections and/or sharing some services. © Health Research and Educational Trust.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  16. Assessment of the costs, risks and benefits of selected integrated policy options to adapt to flood and drought in the water and agricultural sectors of the Warta River Basin

    NASA Astrophysics Data System (ADS)

    Sendzimir, Jan; Dubel, Anna; Linnerooth-Bayer, Joanne; Damurski, Jakub; Schroeter, Dagmar

    2014-05-01

Historically, large reservoirs have been the dominant strategy to counter flood and drought risk in Europe. However, a number of smaller-scale approaches have emerged as alternative strategies. To compare the cost effectiveness of reservoirs and these alternatives, we calculated the investment and maintenance costs in €/m³ of water stored or of annual runoff reduced for five different strategies: large reservoirs (€1.68), large on-farm ponds (€5.88), small on-farm ponds (€558.00), shelterbelts (€6.86), and switching to conservation tillage (-€9.20). The most cost effective measure for reducing runoff is switching to conservation tillage practices because this switch reduces machinery and labor costs in addition to reducing water runoff. Although shelterbelts that reduce annual runoff cannot be directly compared to ponds and reservoirs that store water, our estimates show that they likely compare favorably as a natural water retention measure, especially when taking account of their co-benefits in terms of erosion control, biodiversity and pollination. Another useful result is our demonstration of the economies of scale among reservoirs and ponds for storing water. Small ponds are two orders of magnitude more costly to construct and maintain as a flood and drought prevention measure than large reservoirs. Here, again, there are large co-benefits that should be factored into the cost-benefit equation, including especially the value of small ponds in promoting corridors for migration. This analysis shows the importance of carrying out more extensive cost-benefit estimates across on-farm and off-farm measures for tackling drought and flood risk in the context of a changing climate.
While concrete recommendations for supporting water retention measures will depend on a more detailed investigation of their costs and benefits, this research highlights the potential of natural water retention measures as a complement to conventional investments in large reservoirs.
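The "two orders of magnitude" claim follows directly from the per-m³ costs quoted in the abstract; a quick check:

```python
# Quick check of the cost comparison using the EUR/m^3 figures quoted in
# the abstract (stored water, or annual runoff reduced for the last two).
costs = {"large reservoirs": 1.68, "large on-farm ponds": 5.88,
         "small on-farm ponds": 558.00, "shelterbelts": 6.86,
         "conservation tillage": -9.20}

ratio = costs["small on-farm ponds"] / costs["large reservoirs"]  # ~332x
cheapest = min(costs, key=costs.get)  # negative cost = net saving
```

Small ponds come out roughly 330 times costlier per m³ than large reservoirs, and conservation tillage is the only measure with a negative (net-saving) cost, matching the conclusions above.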

  17. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which lead to reducing onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
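The entropy-coding building block named above, Golomb-Rice coding of prediction residuals, can be sketched compactly. This is a simplified illustration with a fixed parameter k and a zigzag mapping of signed residuals to non-negative integers; it is not the adaptive flight implementation described in the paper.

```python
# Simplified Golomb-Rice coding of prediction residuals (fixed k >= 1;
# illustrative, not the paper's adaptive onboard circuitry).
def zigzag(r):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return 2 * r if r >= 0 else -2 * r - 1

def unzigzag(n):
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def rice_encode(n, k):
    """Unary-coded quotient, '0' separator, then k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0%db" % k)

def rice_decode(bits, k):
    q = bits.index("0")
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)
```

Small residuals (the common case after good prediction) get short codes, which is why the compression speed and ratio depend on how well the two-dimensional interpolation predictor decorrelates the image.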

  18. Optimization of Computational Performance and Accuracy in 3-D Transient CFD Model for CFB Hydrodynamics Predictions

    NASA Astrophysics Data System (ADS)

    Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.

    2007-09-01

This work aims to present a pure 3-D CFD model, accurate and efficient, for the simulation of pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters, in order to derive an optimum model setup with respect to computational cost. The need for in-depth examination of hydrodynamics emerges from the trend to scale up CFBCs. This scale up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast, while computational power is limited. However, the optimum efficiency matter has not been investigated thoroughly in the literature, as authors were more concerned with their models' accuracy and validity. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, find the optimum set of these parameters and thus establish this technique as a competitive method for the simulation and design of industrial, large scale beds, where the computational cost is otherwise prohibitive. During the tests that were performed in this work, the influence of the turbulence modeling approach, temporal and spatial grid density, and discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate the adequate time period for the averaging of all instantaneous values. Agreement with the experimental measurements was very good. The basic differences between the predictions that arose from the various model setups were pointed out and analyzed.
The results showed that a model with high-order spatial discretization schemes applied on a coarse grid, combined with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while keeping the computational cost low. Flow patterns inside the bed such as the core-annulus flow and the transportation of clusters were at least qualitatively captured.
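The Fourier step described above, extracting a dominant frequency to size the averaging window, can be sketched as follows. The signal is synthetic and the transform is a naive DFT for self-containment; a real setup would use an FFT library on the monitored flow variable.

```python
# Sketch of the Fourier step: find the dominant frequency of a sampled
# signal so the averaging period can span several of its cycles
# (synthetic signal and naive DFT, for illustration only).
import math

def dominant_frequency(x, dt):
    """Return the frequency (Hz) of the largest non-DC spectral peak."""
    n = len(x)
    mean = sum(x) / n
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):          # skip k = 0 (the DC component)
        re = sum((x[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((x[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k / (n * dt)

dt = 0.01                                # 100 Hz sampling, 2 s record
sig = [math.sin(2 * math.pi * 2.0 * t * dt)
       + 0.3 * math.sin(2 * math.pi * 7.0 * t * dt) for t in range(200)]
f = dominant_frequency(sig, dt)
```

Here the dominant component is at 2 Hz, so an averaging window of, say, 10 or more of its periods would be indicated; the paper's 20 s period plays exactly this role for the CFB's dominant fluctuation.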

  19. Delivering pediatric HIV care in resource-limited settings: cost considerations in an expanded response.

    PubMed

    Tolle, Michael A; Phelps, B Ryan; Desmond, Chris; Sugandhi, Nandita; Omeogu, Chinyere; Jamieson, David; Ahmed, Saeed; Reuben, Elan; Muhe, Lulu; Kellerman, Scott E

    2013-11-01

    If children are to be protected from HIV, the expansion of PMTCT programs must be complemented by increased provision of paediatric treatment. This is expensive, yet there are humanitarian, equity and children's rights arguments to justify the prioritization of treating HIV-infected children. In the context of limited budgets, inefficiencies cost lives, either through lower coverage or less effective services. With the goal of informing the design and expansion of efficient paediatric treatment programs able to utilize to greatest effect the available resources allocated to the treatment of HIV-infected children, this article reviews what is known about cost drivers in paediatric HIV interventions, and makes suggestions for improving efficiency in paediatric HIV programming. High-impact interventions known to deliver disproportional returns on investment are highlighted and targeted for immediate scale-up. Progress will carry a cost - increased funding, as well as additional data on intervention costs and outcomes, will be required if universal access of HIV-infected children to treatment is to be achieved and sustained.

  20. Toward the lowest energy consumption and emission in biofuel production: combination of ideal reactors and robust hosts.

    PubMed

    Xu, Ke; Lv, Bo; Huo, Yi-Xin; Li, Chun

    2018-04-01

Rising feedstock costs, low crude oil prices, and other macroeconomic factors have threatened biofuel fermentation industries. Energy-efficient reactors, which provide a controllable and stable biological environment, are important for the large-scale production of renewable and sustainable biofuels, and their optimization focuses on the reduction of energy consumption and waste gas emission. Bioreactors can be either aerobic or anaerobic, and photobioreactors have been developed for the culture of algae or microalgae. Due to the cost of producing large-volume bioreactors, various modeling strategies have been developed for bioreactor design. The achievement of an ideal biofuel reactor relies not only on breakthroughs in reactor design, but also on the creation of super microbial factories with the highest productivity and metabolic pathway flux. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. LanzaTech- Capturing Carbon. Fueling Growth.

    ScienceCinema

    NONE

    2018-01-16

    LanzaTech will design a gas fermentation system that will significantly improve the rate at which methane gas is delivered to a biocatalyst. Current gas fermentation processes are not cost effective compared to other gas-to-liquid technologies because they are too slow for large-scale production. If successful, LanzaTech's system will process large amounts of methane at a high rate, reducing the energy inputs and costs associated with methane conversion.

  2. Prepreg and Melt Infiltration Technology Developed for Affordable, Robust Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, Mrityunjay; Petko, Jeannie F.

    2004-01-01

    Affordable fiber-reinforced ceramic matrix composites with multifunctional properties are critically needed for high-temperature aerospace and space transportation applications. These materials have various applications in advanced high-efficiency and high-performance engines, airframe and propulsion components for next-generation launch vehicles, and components for land-based systems. A number of these applications require materials with specific functional characteristics: for example, thick component, hybrid layups for environmental durability and stress management, and self-healing and smart composite matrices. At present, with limited success and very high cost, traditional composite fabrication technologies have been utilized to manufacture some large, complex-shape components of these materials. However, many challenges still remain in developing affordable, robust, and flexible manufacturing technologies for large, complex-shape components with multifunctional properties. The prepreg and melt infiltration (PREMI) technology provides an affordable and robust manufacturing route for low-cost, large-scale production of multifunctional ceramic composite components.

  3. A sunny future: expert elicitation of China's solar photovoltaic technologies

    NASA Astrophysics Data System (ADS)

    Lam, Long T.; Branstetter, Lee; Azevedo, Inês L.

    2018-03-01

China has emerged as the global manufacturing center for solar photovoltaic (PV) products. Chinese firms have entered all stages of the supply chain, producing most of the installed solar modules around the world. Meanwhile, production costs are at record lows. The decisions that Chinese solar producers make today will influence the path for the solar industry and its role towards de-carbonization of global energy systems in the years to come. However, to date, there have been no assessments of the future costs and efficiency of solar PV systems produced by the Chinese PV industry. We perform an expert elicitation to assess the technological and non-technological factors that led to the success of China's silicon PV industry as well as likely future costs and performance. Experts evaluated key metrics such as efficiency, costs, and commercial viability of 17 silicon and non-silicon solar PV technologies by 2030. Silicon-based technologies will continue to be the mainstream product for large-scale electricity generation applications in the near future, with module efficiency reaching as high as 23% and production cost as low as $0.24/W. The levelized cost of electricity for solar will be around $34/MWh, allowing solar PV to be competitive with traditional energy resources like coal. The industry's future developments may be affected by overinvestment, overcapacity, and singular short-term focus.
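A levelized cost of electricity (LCOE) in the quoted range can be reproduced with a standard back-of-envelope formula. Every input below (system capex, O&M, discount rate, lifetime, capacity factor) is an illustrative assumption, not a value elicited in the study.

```python
# Back-of-envelope LCOE sketch, consistent with the ~$34/MWh figure above
# under illustrative assumptions (not the paper's elicited inputs).
def lcoe_usd_per_mwh(capex_usd_per_kw, om_usd_per_kw_yr, rate, years, cap_factor):
    # Capital recovery factor annualizes the upfront investment.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_cost = capex_usd_per_kw * crf + om_usd_per_kw_yr   # $/kW-yr
    annual_mwh = cap_factor * 8760 / 1000.0                   # MWh per kW-yr
    return annual_cost / annual_mwh

# Assumed: $700/kW system, $10/kW-yr O&M, 5% discount, 25 yr, 20% cap. factor.
lcoe = lcoe_usd_per_mwh(700.0, 10.0, 0.05, 25, 0.20)
```

With these assumptions the sketch lands near $34/MWh, showing how cheap modules ($0.24/W is only part of system capex) propagate into a coal-competitive LCOE.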

  4. Visual versus mechanised leucocyte differential counts: costing and evaluation of traditional and Hemalog D methods.

    PubMed Central

    Hudson, M J; Green, A E

    1980-01-01

    Visual differential counts were examined for efficiency, cost effectiveness, and staff acceptability within our laboratory. A comparison with the Hemalog D system was attempted. The advantages and disadvantages of each system are enumerated and discussed in the context of a large general hospital. PMID:7440760

  5. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling strategies.

    PubMed

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high-sensitivity genus-specific Schistosoma CAA strip test, which allows detection of single-worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large-scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid, user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single-worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, where the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure of worm burden. Pooling strategies allowing this type of large-scale testing are feasible with the various CAA strip test formats and do not affect sensitivity and specificity. This allows cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally for large-scale surveillance generating hard data on the performance of MDA programs and for strategic planning when moving towards transmission-stop and elimination.
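
    The pool-size trade-off described in this record is the classic Dorfman group-testing calculation; a minimal sketch assuming an ideal test with no sensitivity loss (which the assay's single-worm sensitivity approximates):

```python
def expected_tests_per_person(pool_size, prevalence):
    """Dorfman pooling: one pool test per group, plus individual retests if the pool is positive."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

def best_pool_size(prevalence, max_size=100):
    """Pool size minimizing the expected number of tests per person."""
    return min(range(2, max_size + 1),
               key=lambda n: expected_tests_per_person(n, prevalence))

n = best_pool_size(0.01)  # at 1% prevalence
print(n, round(expected_tests_per_person(n, 0.01), 3))  # prints: 11 0.196
```

As the record notes, lower prevalence permits larger pools and hence fewer tests per person.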

  6. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
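
    The defuzzification step can be illustrated for a triangular fuzzy number. The sketch below uses one common Hurwicz-style form of the expected value operator with an optimistic-pessimistic index λ; the paper's exact operator, which also handles the random component, may differ:

```python
def expected_value(a, b, c, lam):
    """λ-weighted expected value of a triangular fuzzy number (a, b, c).

    lam = 0 is fully pessimistic (weights the lower support),
    lam = 1 is fully optimistic (weights the upper support).
    """
    pessimistic = (a + b) / 2
    optimistic = (b + c) / 2
    return (1 - lam) * pessimistic + lam * optimistic

# A fuzzy activity duration "about 3, between 2 and 5" (hypothetical numbers):
print(expected_value(2, 3, 5, lam=0.5))  # prints 3.25
```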

  7. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  8. The economics of soil C sequestration

    NASA Astrophysics Data System (ADS)

    Alexander, P.; Paustian, K.; Smith, P.; Moran, D.

    2014-12-01

    Carbon is a critical component of soil vitality and of our ability to produce food. Carbon sequestered in soils also provides a further regulating ecosystem service, valued as the avoided damage from global climate change. We consider the demand and supply attributes that underpin and constrain the emergence of a market value for this vital global ecosystem service, markets being what economists regard as the most efficient institutions for allocating scarce resources to the supply and consumption of valuable goods. This paper considers how a potentially large global supply of soil carbon sequestration is reduced by economic and behavioural constraints that impinge on the emergence of markets, and examines alternative public policies that can efficiently transact demand for the service from private and public sector agents. In essence this is a case of significant market failure. In the design of alternative policy options we consider whether soil carbon mitigation is actually cost-effective relative to other measures in agriculture and elsewhere in the economy, and the nature of behavioural incentives that hinder policy options. We suggest that reducing the cost and uncertainties of mitigation through soil-based measures is crucial for improving uptake. Monitoring and auditing processes will also be required to eventually facilitate wide-scale adoption of these measures.

  9. Impact of PACS in hospital management

    NASA Astrophysics Data System (ADS)

    Hur, Gham; Cha, Soon-Joo; Kim, Yong H.; Hwang, Yoon J.; Kim, Soo Y.

    2002-05-01

    Since a low-cost, NT-based, full PACS was successfully implemented in a large-scale hospital at the end of 1999, many hospital administrators have rushed to purchase such systems. It is now a worldwide trend to implement the technology, but Korea has several unique conditions favoring the fast spread of full PACS. Since hospitals in Korea operate inpatient and outpatient clinics in the same building and use the same order communication system (OCS), full integration of PACS with the OCS was relatively easy and highly efficient. The simple governing structures of the hospitals also made the decision-making process short and effective. In addition, the national health insurance reimbursement policy that took effect at the beginning of 2000 has also played a catalytic role in the swift propagation of PACS. The recent appearance of affordable PACS gave hospital administrators the opportunity to learn and understand the role of digital imaging in the areas that are directly related to the efficiency and quality of medical services, as well as cost containment. Furthermore, PACS provided them with a window to the 'all-digital hospital,' which will lead them to realign policies in the management of the hospitals in order to compete successfully in the fast-changing world of health care.

  10. Toward of a highly integrated probe for improving wireless network quality

    NASA Astrophysics Data System (ADS)

    Ding, Fei; Song, Aiguo; Wu, Zhenyang; Pan, Zhiwen; You, Xiaohu

    2016-10-01

    Quality of service and customer perception are the focus of the telecommunications industry. This paper proposes a low-cost approach to the acquisition of terminal data, collected from LTE networks with the application of a soft probe based on the Java language. The soft probe supports fast calls in the form of a referenced library and can be integrated into various Android-based applications to automatically monitor any exception event in the network. Soft probe-based acquisition of terminal data is low-cost and can be applied at large scale. Experiments show that a soft probe can efficiently obtain terminal network data. With this method, the quality of service of LTE networks can be determined from the acquired wireless data. This work contributes to efficient network optimization and the analysis of abnormal network events.

  11. Outreach at Washington State University: a case study in costs and attendance

    NASA Astrophysics Data System (ADS)

    Bernhardt, Elizabeth A.; Bollen, Viktor; Bersano, Thomas M.; Mossman, Sean M.

    2016-09-01

    Making effective and efficient use of outreach resources can be difficult for student groups in smaller rural communities. Washington State University's OSA/SPIE student chapter desires well-attended yet cost-effective ways to educate and inform the public. We designed outreach activities focused on three different funding levels: low upfront cost, moderate continuing costs, and high upfront cost with low continuing costs. By featuring our activities at well-attended events, such as a pre-football game event, or by advertising a headlining activity, such as a laser maze, we take advantage of large crowds to create a relaxed learning atmosphere. Moreover, participants enjoy casual learning while waiting for a main event. Choosing a particular funding level and associating with well-attended events makes outreach easier. While there are still many challenges to outreach, such as motivating volunteers or designing outreach programs, we hope overcoming these two large obstacles will lead to future outreach success.

  12. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    NASA Astrophysics Data System (ADS)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, it represents only a small subset of all indicators relevant for the SDG agenda. Based on this, we argue that the development of policies for regulating externalities of large-scale bioenergy production should rely on broad sustainability assessments to discover potential trade-offs with the SDG agenda before implementation.

  13. Improving the large scale purification of the HIV microbicide, griffithsin.

    PubMed

    Fuqua, Joshua L; Wanga, Valentine; Palmer, Kenneth E

    2015-02-22

    Griffithsin is a broad-spectrum antiviral lectin that inhibits viral entry and maturation processes through binding clusters of oligomannose glycans on viral envelope glycoproteins. An efficient, scalable manufacturing process for griffithsin active pharmaceutical ingredient (API) is essential for particularly cost-sensitive products such as griffithsin-based topical microbicides for HIV-1 prevention in resource-poor settings. Our previously published purification method used ceramic filtration followed by two chromatography steps, resulting in a protein recovery of 30%. Our objective was to develop a scalable purification method for griffithsin expressed in Nicotiana benthamiana plants that would increase yield, reduce production costs, and simplify manufacturing techniques. Considering the future need to transfer griffithsin manufacturing technology to resource-poor areas, we chose to focus on modifying the purification process, paying particular attention to introducing simple, low-cost, and scalable procedures such as the use of temperature, pH, ion concentration, and filtration to enhance product recovery. We achieved >99% pure griffithsin API by generating the initial green juice extract in pH 4 buffer, heating the extract to 55°C, incubating overnight with a bentonite MgCl2 mixture, and performing final purification with Capto™ multimodal chromatography. Griffithsin extracted with this protocol maintains activity comparable to griffithsin purified by the previously published method, and we are able to recover a substantially higher yield: 88 ± 5% of griffithsin from the initial extract. The method was scaled to produce gram quantities of griffithsin with high yields, low endotoxin levels, and low purification costs maintained. The methodology developed to purify griffithsin introduces and develops multiple tools for purification of recombinant proteins from plants at an industrial scale. These tools allow for robust, cost-effective production and purification of griffithsin. The methodology can be readily scaled to the bench top or industry, and process components can be used for purification of additional proteins based on their biophysical characteristics.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manohar, AK; Yang, CG; Malkhandi, S

    Iron-based alkaline rechargeable batteries have the potential of meeting the needs of large-scale electrical energy storage because of their low cost, robustness, and eco-friendliness. However, the widespread commercial deployment of iron-based batteries has been limited by the low charging efficiency and the poor discharge rate capability of the iron electrode. In this study, we have demonstrated iron electrodes containing bismuth oxide and iron sulfide with a charging efficiency of 92% and capable of being discharged at the 3C rate. Such a high value of charging efficiency combined with the ability to discharge at high rates is being reported for the first time. The bismuth oxide additive led to the in situ formation of elemental bismuth and a consequent increase in the overpotential for the hydrogen evolution reaction, leading to an increase in the charging efficiency. We observed that the sulfide ions added to the electrolyte and iron sulfide added to the electrode mitigated electrode passivation and allowed for continuous discharge at high rates. At the 3C discharge rate, a utilization of 0.2 Ah/g was achieved. The performance level of the rechargeable iron electrode demonstrated here is attractive for designing economically viable large-scale energy storage systems based on alkaline nickel-iron and iron-air batteries. (C) 2013 The Electrochemical Society. All rights reserved.

  15. A stochastic thermostat algorithm for coarse-grained thermomechanical modeling of large-scale soft matters: Theory and application to microfilaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au

    As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matters in the modeling of mechanical behaviors. However, the classical thermostat algorithm in the highly coarse-grained molecular dynamics method would underestimate the thermodynamic behaviors of soft matters (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matters.

  16. Scaling up to address data science challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne R.

    Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.
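
    One standard example of the sampling and data-reduction methods mentioned here is reservoir sampling (Algorithm R), which draws a uniform fixed-size sample from a stream too large to hold in memory. A minimal sketch, illustrative rather than drawn from the article:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # item survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

random.seed(0)
print(reservoir_sample(range(1_000_000), 5))
```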

  17. Scaling up to address data science challenges

    DOE PAGES

    Wendelberger, Joanne R.

    2017-04-27

    Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.

  18. Load Balancing Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, Olga Tkachyshyn

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
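
    A minimal example of the kind of fast, local load-balance algorithm contrasted here with more sophisticated ones is the greedy longest-processing-time (LPT) heuristic, which always hands the next-largest task to the least-loaded processor. This sketch is illustrative, not the dissertation's method:

```python
import heapq

def lpt_assign(task_costs, n_workers):
    """Greedy LPT: give each task, largest first, to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]    # (current load, worker id)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):
        load, w = heapq.heappop(heap)              # least-loaded worker
        assignment[w].append(cost)
        heapq.heappush(heap, (load + cost, w))
    return assignment

loads = lpt_assign([7, 5, 4, 3, 2], n_workers=2)
print({w: sum(ts) for w, ts in loads.items()})     # makespan 11, vs. 21 serialized
```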

  19. Vegetation and the importance of insecticide-treated target siting for control of Glossina fuscipes fuscipes.

    PubMed

    Esterhuizen, Johan; Njiru, Basilio; Vale, Glyn A; Lehane, Michael J; Torr, Stephen J

    2011-09-01

    Control of tsetse flies using insecticide-treated targets is often hampered by vegetation re-growth and encroachment, which obscures a target and renders it less effective. This is potentially of particular concern for the newly developed small targets (0.25 m high × 0.5 m wide), which show promise for cost-efficient control of Palpalis group tsetse flies. Consequently, the performance of a small target was investigated for Glossina fuscipes fuscipes in Kenya when the target was obscured following the placement of vegetation to simulate various degrees of natural bush encroachment. Catches decreased significantly only when the target was obscured by more than 80%. Even if a small target is underneath a very low overhanging bush (0.5 m above ground), the numbers of G. f. fuscipes decreased by only about 30% compared to a target in the open. We show that the efficiency of the small targets, even in small (1 m diameter) clearings, is largely uncompromised by vegetation re-growth because G. f. fuscipes readily enter between and under vegetation. The essential characteristic is that there should be some openings between vegetation. This implies that for this important vector of HAT (human African trypanosomiasis), and possibly other Palpalis group flies, a smaller initial clearance zone around targets can be made and a longer interval between site-maintenance visits is possible, both of which will result in cost savings for large-scale operations. We also investigated and discuss other site features, e.g. large solid objects and position relative to the water's edge, in terms of the efficacy of the small targets.

  20. In-depth investigation of spin-on doped solar cells with thermally grown oxide passivation

    NASA Astrophysics Data System (ADS)

    Ahmad, Samir Mahmmod; Cheow, Siu Leong; Ludin, Norasikin A.; Sopian, K.; Zaidi, Saleem H.

    Solar cell industrial manufacturing, based largely on proven semiconductor processing technologies supported by significant advancements in automation, has reached a plateau in terms of cost and efficiency. However, solar cell manufacturing cost ($/watt) is still substantially higher than that of fossil fuels. The route to lowering cost may not lie with continuing automation and economies of scale. Alternate fabrication processes with lower cost and environmental sustainability, coupled with self-reliance, simplicity, and affordability, may lead to price compatibility with carbon-based fuels. In this paper, a custom-designed formulation of phosphoric acid has been investigated, for n-type doping in p-type substrates, as a function of concentration and drive-in temperature. For post-diffusion surface passivation and anti-reflection, thermally grown oxide films with thicknesses of 50-150 nm were grown. These fabrication methods facilitate process simplicity, reduced costs, and environmental sustainability by eliminating poisonous chemicals and toxic gases (POCl3, SiH4, NH3). A simultaneous fire-through contact formation process, based on screen-printed Ag on the front surface and on the back surface through the thermally grown oxide films, was optimized as a function of the peak temperature in a conveyor-belt furnace. The highest-efficiency solar cells fabricated exhibited an efficiency of ∼13%. Analysis of results based on internal quantum efficiency and minority carrier measurements reveals three contributing factors: high front-surface recombination, low minority carrier lifetime, and higher reflection. Solar cell simulations based on PC1D showed that, with improved passivation, lower reflection, and higher lifetimes, efficiency can be enhanced to match commercially produced PECVD SiN-coated solar cells.

  1. Using multi-scale distribution and movement effects along a montane highway to identify optimal crossing locations for a large-bodied mammal community.

    PubMed

    Schuster, Richard; Römer, Heinrich; Germain, Ryan R

    2013-01-01

    Roads are a major cause of habitat fragmentation that can negatively affect many mammal populations. Mitigation measures such as crossing structures are a proposed method to reduce the negative effects of roads on wildlife, but the best methods for determining where such structures should be implemented, and how their effects might differ between species in mammal communities, are largely unknown. We investigated the effects of a major highway through south-eastern British Columbia, Canada on several mammal species to determine how the highway may act as a barrier to animal movement, and how species may differ in their crossing-area preferences. We collected track data for eight mammal species across two winters, along both the highway and pre-marked transects, and used a multi-scale modeling approach to determine the scale at which habitat characteristics best predicted preferred crossing sites for each species. We found evidence for a severe barrier effect on all investigated species. Freely available remotely sensed habitat landscape data were better than more costly, manually digitized microhabitat maps in supporting models that identified preferred crossing sites; however, models using both types of data were better yet. Further, in 6 of 8 cases, models that incorporated multiple spatial scales were better at predicting preferred crossing sites than models utilizing any single scale. While each species differed in terms of the landscape variables associated with preferred/avoided crossing sites, we used a multi-model inference approach to identify locations along the highway where crossing structures may benefit all of the species considered. By specifically incorporating both highway and off-highway data and predictions, we were able to show that landscape context plays an important role in maximizing the efficiency of mitigation measures. Our results further highlight the need for mitigation measures along major highways to improve connectivity between mammal populations, and illustrate how multi-scale data can be used to identify preferred crossing sites for different species within a mammal community.

  2. Efficient Generation of an Array of Single Silicon-Vacancy Defects in Silicon Carbide

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng; Zhou, Yu; Zhang, Xiaoming; Liu, Fucai; Li, Yan; Li, Ke; Liu, Zheng; Wang, Guanzhong; Gao, Weibo

    2017-06-01

    Color centers in silicon carbide have increasingly attracted attention in recent years owing to their excellent properties such as single-photon emission, good photostability, and long spin-coherence time even at room temperature. As compared to diamond, which is widely used for hosting nitrogen-vacancy centers, silicon carbide has an advantage in terms of large-scale, high-quality, and low-cost growth, as well as an advanced fabrication technique in optoelectronics, leading to prospects for large-scale quantum engineering. In this paper, we report an experimental demonstration of the generation of a single-photon-emitter array through ion implantation. VSi defects are generated in predetermined locations with high generation efficiency (approximately 19% ± 4%). The single-emitter probability reaches approximately 34% ± 4% when the ion-implantation dose is properly set. This method serves as a critical step in integrating single VSi defect emitters with photonic structures, which, in turn, can improve the emission and collection efficiency of VSi defects when they are used in a spin photonic quantum network. Moreover, the defects are shallow, generated about 40 nm below the surface, which can serve as a critical resource in quantum-sensing applications.

  3. High-Performance Carbon Dioxide Electrocatalytic Reduction by Easily Fabricated Large-Scale Silver Nanowire Arrays.

    PubMed

    Luan, Chuhao; Shao, Yang; Lu, Qi; Gao, Shenghan; Huang, Kai; Wu, Hui; Yao, Kefu

    2018-05-30

    An efficient and selective catalyst is urgently needed for carbon dioxide electroreduction, and silver is one of the promising candidates with affordable cost. Here we fabricated large-scale, vertically standing Ag nanowire arrays with high crystallinity and electrical conductivity as carbon dioxide electroreduction catalysts by a simple nanomolding method that was usually considered infeasible for metallic crystalline materials. A great enhancement of current density and selectivity for CO at moderate potentials was achieved. The current density for CO (jCO) of the Ag nanowire array with 200 nm diameter was more than 2500 times larger than that of Ag foil at an overpotential of 0.49 V, with an efficiency over 90%. The enhanced performance is attributed to a greatly increased electrochemically active surface area (ECSA) and higher intrinsic activity compared to those of polycrystalline Ag foil. The larger number of low-coordinated sites on the nanowires, which can better stabilize the CO2 intermediate, is responsible for the high intrinsic activity. In addition, the impact of surface morphology, which induces limited mass transport, on the reaction selectivity and efficiency of nanowire arrays with different diameters is also discussed.

  4. Effect of video server topology on contingency capacity requirements

    NASA Astrophysics Data System (ADS)

    Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.

    1996-03-01

    Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
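
    The blocking model borrowed from telephone systems is presumably the Erlang-B formula, which gives the probability that an arriving stream request finds all server resources busy. A sketch using the standard numerically stable recurrence (illustrative; the paper's exact model may differ):

```python
def erlang_b(offered_load, n_servers):
    """Blocking probability for an offered load (in erlangs) on n_servers stream slots.

    Uses the recurrence B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1)).
    """
    b = 1.0
    for n in range(1, n_servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Economies of scale: at the same 80% per-slot utilization,
# larger monolithic servers block far fewer requests than small partitions.
for slots in (10, 100, 1000):
    print(slots, round(erlang_b(0.8 * slots, slots), 4))
```

This is the same effect the paper quantifies when comparing partitioned and monolithic server clusters.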

  5. Cost analysis of cassava cellulose utilization scenarios for ethanol production on flowsheet simulation platform.

    PubMed

    Zhang, Jian; Fang, Zhenhong; Deng, Hongbo; Zhang, Xiaoxi; Bao, Jie

    2013-04-01

    Cassava cellulose accounts for one quarter of cassava residues, and its utilization is important for improving the efficiency and profit of the commercial-scale cassava ethanol industry. In this study, three scenarios of cassava cellulose utilization for ethanol production were experimentally tested under the same conditions and with the same equipment. Based on the experimental results, a rigorous flowsheet simulation model was established on the Aspen Plus platform, and the costs of cellulase enzyme and steam energy in the three cases were calculated. The results show that the simultaneous co-saccharification of cassava starch/cellulose and ethanol fermentation process (Co-SSF) provided a cost-effective option for cassava cellulose utilization for ethanol production, while the utilization of cassava cellulose from cassava ethanol fermentation residues was not economically sound. Compared with the current fuel ethanol selling price, the Co-SSF process may provide an important choice for enhancing cassava ethanol production efficiency and profit at commercial scale. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. High efficiency solar cells for concentrator systems: silicon or multi-junction?

    NASA Astrophysics Data System (ADS)

    Slade, Alexander; Stone, Kenneth W.; Gordon, Robert; Garboushian, Vahan

    2005-08-01

    Amonix has become the first company to begin production of high-concentration silicon solar cells at volumes over 10 MW/year. Higher volumes are available due to the method of manufacture; Amonix solely uses semiconductor foundries for solar cell production. In previous years of system and cell field testing, this method of manufacturing enabled Amonix to maintain a very low overhead while incurring a high cost for the solar cell. However, recent simplifications to the solar cell processing sequence resulted in cost reduction and increased yield. This new process has been tested by producing small quantities in very short time periods, enabling a simulation of high-volume production. Results have included over 90% wafer yield, up to 100% die yield, and world-record performance (η = 27.3%). This reduction in silicon solar cell cost has raised the efficiency required for multi-junction concentrator solar cells to be competitive or advantageous. Concentrator systems are emerging as a low-cost, high-volume option for solar-generated electricity due to the very high utilization of the solar cell, leading to a much lower $/Watt cost for a photovoltaic system. Parallel to this is the onset of alternative solar cell technologies, such as the very high efficiency multi-junction solar cells developed at NREL over the last two decades. The relatively high cost of these types of solar cells has relegated their use to non-terrestrial applications. However, recent advancements in both multi-junction concentrator cell efficiency and stability under high flux densities have made their large-scale terrestrial deployment significantly more viable. This paper presents Amonix's experience and testing results for both high-efficiency silicon rear-junction solar cells and multi-junction solar cells made for concentrated-light operation.

  7. Economically sustainable scaling of photovoltaics to meet climate targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Needleman, David Berney; Poindexter, Jeremy R.; Kurchin, Rachel C.

    To meet climate targets, power generation capacity from photovoltaics (PV) in 2030 will have to be much greater than is predicted from either steady-state growth using today's manufacturing capacity or industry roadmaps. Analysis of whether current technology can scale, in an economically sustainable way, to sufficient levels to meet these targets has not yet been undertaken, nor have tools to perform this analysis been presented. Here, we use bottom-up cost modeling to predict cumulative capacity as a function of technological and economic variables. We find that today's technology falls short in two ways: profits are too small relative to upfront factory costs to grow manufacturing capacity rapidly enough to meet climate targets, and costs are too high to generate enough demand to meet climate targets. We show that decreasing the capital intensity (capex) of PV manufacturing to increase manufacturing capacity and effectively reducing cost (e.g., through higher efficiency) to increase demand are the most effective and least risky ways to address these barriers to scale. We also assess the effects of variations in demand due to hard-to-predict factors, like public policy, on the necessary reductions in cost. Lastly, we review examples of redundant technology pathways for crystalline silicon PV to achieve the necessary innovations in capex, performance, and price.

  8. Economically sustainable scaling of photovoltaics to meet climate targets

    DOE PAGES

    Needleman, David Berney; Poindexter, Jeremy R.; Kurchin, Rachel C.; ...

    2016-04-21

    To meet climate targets, power generation capacity from photovoltaics (PV) in 2030 will have to be much greater than is predicted from either steady-state growth using today's manufacturing capacity or industry roadmaps. Analysis of whether current technology can scale, in an economically sustainable way, to sufficient levels to meet these targets has not yet been undertaken, nor have tools to perform this analysis been presented. Here, we use bottom-up cost modeling to predict cumulative capacity as a function of technological and economic variables. We find that today's technology falls short in two ways: profits are too small relative to upfront factory costs to grow manufacturing capacity rapidly enough to meet climate targets, and costs are too high to generate enough demand to meet climate targets. We show that decreasing the capital intensity (capex) of PV manufacturing to increase manufacturing capacity and effectively reducing cost (e.g., through higher efficiency) to increase demand are the most effective and least risky ways to address these barriers to scale. We also assess the effects of variations in demand due to hard-to-predict factors, like public policy, on the necessary reductions in cost. Lastly, we review examples of redundant technology pathways for crystalline silicon PV to achieve the necessary innovations in capex, performance, and price.

  9. The effects of scale on the costs of targeted HIV prevention interventions among female and male sex workers, men who have sex with men and transgenders in India

    PubMed Central

    Guinness, L; Kumaranayake, L; Reddy, Bhaskar; Govindraj, Y; Vickerman, P; Alary, M

    2010-01-01

    Background The India AIDS Initiative (Avahan) project is involved in rapid scale-up of HIV-prevention interventions in high-risk populations. This study examines the cost variation of 107 non-governmental organisations (NGOs) implementing targeted interventions, over the start-up period (defined as the period from project inception until services to the key population commenced) and the first 2 years of intervention. Methods The Avahan interventions for female and male sex workers and their clients, in 62 districts of four southern states, were costed for the financial years 2004/2005 and 2005/2006 using standard costing techniques. Data sources include financial and economic costs from the lead implementing partners (LPs) and subcontracted local implementing NGOs, collected retrospectively and prospectively from a provider perspective. Ingredients and step-down allocation processes were used. Outcomes were measured using routinely collected project data. The average costs were estimated, and a regression analysis was carried out to explore causes of cost variation. Costs were calculated in US$ 2006. Results The total number of registered people was 134 391 at the end of 2 years, and 124 669 had used STI services during that period. The median average cost of the Avahan programme for this period was $76 per person registered with the project. Sixty-one per cent of the cost variation could be explained by scale (positive association), number of NGOs per district (negative), number of LPs in the state (negative) and project maturity (positive) (p<0.0001). Conclusions During rapid scale-up in the initial phase of the Avahan programme, a significant reduction in average costs was observed. As full scale-up had not yet been achieved, the average cost at scale is yet to be realised and the extent of the impact of scale on costs yet to be captured. Scale effects are important to quantify for planning resource requirements of large-scale interventions. The average cost after 2 years is within the range of global scale-up cost estimates and other studies in India. PMID:20167740

  10. The effects of scale on the costs of targeted HIV prevention interventions among female and male sex workers, men who have sex with men and transgenders in India.

    PubMed

    Chandrashekar, S; Guinness, L; Kumaranayake, L; Reddy, Bhaskar; Govindraj, Y; Vickerman, P; Alary, M

    2010-02-01

    The India AIDS Initiative (Avahan) project is involved in rapid scale-up of HIV-prevention interventions in high-risk populations. This study examines the cost variation of 107 non-governmental organisations (NGOs) implementing targeted interventions, over the start-up period (defined as the period from project inception until services to the key population commenced) and the first 2 years of intervention. The Avahan interventions for female and male sex workers and their clients, in 62 districts of four southern states, were costed for the financial years 2004/2005 and 2005/2006 using standard costing techniques. Data sources include financial and economic costs from the lead implementing partners (LPs) and subcontracted local implementing NGOs, collected retrospectively and prospectively from a provider perspective. Ingredients and step-down allocation processes were used. Outcomes were measured using routinely collected project data. The average costs were estimated, and a regression analysis was carried out to explore causes of cost variation. Costs were calculated in US$ 2006. The total number of registered people was 134,391 at the end of 2 years, and 124,669 had used STI services during that period. The median average cost of the Avahan programme for this period was $76 per person registered with the project. Sixty-one per cent of the cost variation could be explained by scale (positive association), number of NGOs per district (negative), number of LPs in the state (negative) and project maturity (positive) (p<0.0001). During rapid scale-up in the initial phase of the Avahan programme, a significant reduction in average costs was observed. As full scale-up had not yet been achieved, the average cost at scale is yet to be realised and the extent of the impact of scale on costs yet to be captured. Scale effects are important to quantify for planning resource requirements of large-scale interventions. The average cost after 2 years is within the range of global scale-up cost estimates and other studies in India.

  11. Metal-Free Aqueous Flow Battery with Novel Ultrafiltered Lignin as Electrolyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhopadhyay, Alolika; Hamel, Jonathan; Katahira, Rui

    As the number of generation sources from intermittent renewable technologies on the electric grid increases, large-scale energy storage devices are becoming essential to ensure grid stability. Flow batteries offer numerous advantages over conventional sealed batteries for grid storage. In this work, for the first time, we investigated lignin, the second most abundant wood-derived biopolymer, as an anolyte for the aqueous flow battery. Lignosulfonate, a water-soluble derivative of lignin, is environmentally benign, low cost, and abundant, as it is obtained as a byproduct of paper and biofuel manufacturing. The lignosulfonate utilizes the redox chemistry of quinone to store energy and undergoes a reversible redox reaction. Here, we paired lignosulfonate with Br2/Br-, and the full cell runs efficiently with high power density. Also, the large and complex molecular structure of lignin considerably reduces electrolytic crossover, which ensures very high capacity retention. The flow cell was able to achieve current densities of up to 20 mA/cm2 and a charge polarization resistance of 15 ohm cm2. This technology presents a unique opportunity for a low-cost, metal-free flow battery capable of large-scale sustainable energy storage.

  12. Flat-plate solar array project of the US Department of Energy's National Photovoltaics Program: Ten years of progress

    NASA Technical Reports Server (NTRS)

    Christensen, Elmer

    1985-01-01

    The Flat-Plate Solar Array (FSA) Project, a Government-sponsored photovoltaics project, was initiated in January 1975 (previously named the Low-Cost Silicon Solar Array Project) to stimulate the development of PV systems for widespread use. Its goal then was to develop PV modules with 10% efficiency, a 20-year lifetime, and a selling price of $0.50 per peak watt of generating capacity (1975 dollars). It was recognized that cost reduction of PV solar-cell and module manufacturing was the key achievement needed if PV power systems were to be economically competitive for large-scale terrestrial use.

  13. Characterization of the Ecosole HCPV tracker and single module inverter

    NASA Astrophysics Data System (ADS)

    Carpanelli, Maurizio; Borelli, Gianni; Verdilio, Daniele; De Nardis, Davide; Migali, Fabrizio; Cancro, Carmine; Graditi, Giorgio

    2015-09-01

    BECAR, the Beghelli group's R&D company, is leading ECOSOLE (Elevated COncentration SOlar Energy), one of the largest European demonstration projects in solar photovoltaics. ECOSOLE, started in 2012, is focused on the study, design, and realization of a new HCPV generator made of high-efficiency PV modules equipped with SoG (silicone-on-glass) Fresnel lenses and III-V solar cells, together with a low-cost matched solar tracker using a distributed-inverter approach. The project also covers the study and demonstration of new high-throughput methods for industrial large-scale production at very low manufacturing cost. This work describes the characterization of the tracker and the single-module inverter.

  14. Bacteriophage Mediates Efficient Gene Transfer in Combination with Conventional Transfection Reagents

    PubMed Central

    Donnelly, Amanda; Yata, Teerapong; Bentayebi, Kaoutar; Suwan, Keittisak; Hajitou, Amin

    2015-01-01

    The development of commercially available transfection reagents for gene transfer applications has revolutionized the field of molecular biology and scientific research. However, the challenge remains in ensuring that they are efficient, safe, reproducible and cost effective. Bacteriophage (phage)-based viral vectors have the potential to be utilized for general gene transfer applications within research and industry. Yet, they require adaptations in order to enable them to efficiently enter cells and overcome mammalian cellular barriers, as they infect bacteria only; furthermore, limited progress has been made at increasing their efficiency. The production of a novel hybrid nanocomplex system consisting of two different nanomaterial systems, phage vectors and conventional transfection reagents, could overcome these limitations. Here we demonstrate that the combination of cationic lipids, cationic polymers or calcium phosphate with M13 bacteriophage-derived vectors, engineered to carry a mammalian transgene cassette, resulted in increased cellular attachment, entry and improved transgene expression in human cells. Moreover, addition of a targeting ligand into the nanocomplex system, through genetic engineering of the phage capsid further increased gene expression and was effective in a stable cell line generation application. Overall, this new hybrid nanocomplex system (i) provides enhanced phage-mediated gene transfer; (ii) is applicable for laboratory transfection processes and (iii) shows promise within industry for large-scale gene transfer applications. PMID:26670247

  15. Bacteriophage Mediates Efficient Gene Transfer in Combination with Conventional Transfection Reagents.

    PubMed

    Donnelly, Amanda; Yata, Teerapong; Bentayebi, Kaoutar; Suwan, Keittisak; Hajitou, Amin

    2015-12-08

    The development of commercially available transfection reagents for gene transfer applications has revolutionized the field of molecular biology and scientific research. However, the challenge remains in ensuring that they are efficient, safe, reproducible and cost effective. Bacteriophage (phage)-based viral vectors have the potential to be utilized for general gene transfer applications within research and industry. Yet, they require adaptations in order to enable them to efficiently enter cells and overcome mammalian cellular barriers, as they infect bacteria only; furthermore, limited progress has been made at increasing their efficiency. The production of a novel hybrid nanocomplex system consisting of two different nanomaterial systems, phage vectors and conventional transfection reagents, could overcome these limitations. Here we demonstrate that the combination of cationic lipids, cationic polymers or calcium phosphate with M13 bacteriophage-derived vectors, engineered to carry a mammalian transgene cassette, resulted in increased cellular attachment, entry and improved transgene expression in human cells. Moreover, addition of a targeting ligand into the nanocomplex system, through genetic engineering of the phage capsid further increased gene expression and was effective in a stable cell line generation application. Overall, this new hybrid nanocomplex system (i) provides enhanced phage-mediated gene transfer; (ii) is applicable for laboratory transfection processes and (iii) shows promise within industry for large-scale gene transfer applications.

  16. Chemoenzymatic Synthesis, Characterization, and Scale-Up of Milk Thistle Flavonolignan Glucuronides.

    PubMed

    Gufford, Brandon T; Graf, Tyler N; Paguigan, Noemi D; Oberlies, Nicholas H; Paine, Mary F

    2015-11-01

    Plant-based therapeutics, including herbal products, continue to represent a growing facet of the contemporary health care market. Mechanistic descriptions of the pharmacokinetics and pharmacodynamics of constituents composing these products remain nascent, particularly for metabolites produced following herbal product ingestion. Generation and characterization of authentic metabolite standards are essential to improve the quantitative mechanistic understanding of herbal product disposition in both in vitro and in vivo systems. Using the model herbal product, milk thistle, the objective of this work was to biosynthesize multimilligram quantities of glucuronides of select constituents (flavonolignans) to fill multiple knowledge gaps in the understanding of herbal product disposition and action. A partnership between clinical pharmacology and natural products chemistry expertise was leveraged to optimize reaction conditions for efficient glucuronide formation and evaluate alternate enzyme and reagent sources to improve cost effectiveness. Optimized reaction conditions used at least one-fourth the amount of microsomal protein (from bovine liver) and cofactor (UDP glucuronic acid) compared with typical conditions using human-derived subcellular fractions, providing substantial cost savings. Glucuronidation was flavonolignan-dependent. Silybin A, silybin B, isosilybin A, and isosilybin B generated five, four, four, and three monoglucuronides, respectively. Large-scale synthesis (40 mg of starting material) generated three glucuronides of silybin A: silybin A-7-O-β-D-glucuronide (15.7 mg), silybin A-5-O-β-D-glucuronide (1.6 mg), and silybin A-4´´-O-β-D-glucuronide (11.1 mg). This optimized, cost-efficient method lays the foundation for a systematic approach to synthesize and characterize herbal product constituent glucuronides, enabling an improved understanding of mechanisms underlying herbal product disposition and action. 
Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.

  17. Chemoenzymatic Synthesis, Characterization, and Scale-Up of Milk Thistle Flavonolignan Glucuronides

    PubMed Central

    Gufford, Brandon T.; Graf, Tyler N.; Paguigan, Noemi D.; Oberlies, Nicholas H.

    2015-01-01

    Plant-based therapeutics, including herbal products, continue to represent a growing facet of the contemporary health care market. Mechanistic descriptions of the pharmacokinetics and pharmacodynamics of constituents composing these products remain nascent, particularly for metabolites produced following herbal product ingestion. Generation and characterization of authentic metabolite standards are essential to improve the quantitative mechanistic understanding of herbal product disposition in both in vitro and in vivo systems. Using the model herbal product, milk thistle, the objective of this work was to biosynthesize multimilligram quantities of glucuronides of select constituents (flavonolignans) to fill multiple knowledge gaps in the understanding of herbal product disposition and action. A partnership between clinical pharmacology and natural products chemistry expertise was leveraged to optimize reaction conditions for efficient glucuronide formation and evaluate alternate enzyme and reagent sources to improve cost effectiveness. Optimized reaction conditions used at least one-fourth the amount of microsomal protein (from bovine liver) and cofactor (UDP glucuronic acid) compared with typical conditions using human-derived subcellular fractions, providing substantial cost savings. Glucuronidation was flavonolignan-dependent. Silybin A, silybin B, isosilybin A, and isosilybin B generated five, four, four, and three monoglucuronides, respectively. Large-scale synthesis (40 mg of starting material) generated three glucuronides of silybin A: silybin A-7-O-β-d-glucuronide (15.7 mg), silybin A-5-O-β-d-glucuronide (1.6 mg), and silybin A-4´´-O-β-d-glucuronide (11.1 mg). This optimized, cost-efficient method lays the foundation for a systematic approach to synthesize and characterize herbal product constituent glucuronides, enabling an improved understanding of mechanisms underlying herbal product disposition and action. PMID:26316643

  18. Parallel implementation of geometrical shock dynamics for two dimensional converging shock waves

    NASA Astrophysics Data System (ADS)

    Qiu, Shi; Liu, Kuang; Eliasson, Veronica

    2016-10-01

    Geometrical shock dynamics (GSD) theory is an appealing method for predicting shock motion because it is more computationally efficient than solving the traditional Euler equations, especially for converging shock waves. However, for solving and optimizing large-scale configurations, the main bottleneck is computational cost. Among the existing numerical GSD schemes, only one has been implemented on parallel computers, with the purpose of analyzing detonation waves. To extend the computational advantage of GSD theory to more general applications, such as converging shock waves, a numerical implementation using a spatial decomposition method has been coupled with a front-tracking approach on parallel computers. In addition, an efficient tridiagonal system solver for massively parallel computers has been applied to resolve the most expensive function in this implementation, resulting in an efficiency of 0.93 while using 32 HPCC cores. Moreover, symmetric boundary conditions have been developed to further reduce the computational cost, achieving a speedup of 19.26 for a 12-sided polygonal converging shock.
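    The quoted figures follow from the standard definitions of speedup and parallel efficiency (speedup S = T1/Tn, efficiency E = S/n). A minimal sketch, using illustrative timing numbers that are not from the paper:

    ```python
    def speedup(t_serial: float, t_parallel: float) -> float:
        """Speedup S = T1 / Tn (serial time over parallel time)."""
        return t_serial / t_parallel

    def parallel_efficiency(t_serial: float, t_parallel: float, n_cores: int) -> float:
        """Parallel efficiency E = S / n (speedup per core)."""
        return speedup(t_serial, t_parallel) / n_cores

    # An efficiency of 0.93 on 32 cores implies a speedup of 0.93 * 32 = 29.76,
    # i.e., the parallel run is nearly 30x faster than the serial one.
    implied_speedup = 0.93 * 32
    print(implied_speedup)  # 29.76
    ```

    By the same definitions, the reported speedup of 19.26 from symmetric boundary conditions is a reduction of the problem itself (one wedge of the 12-sided polygon) rather than a gain from adding cores.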

  19. Significance, evolution and recent advances in adsorption technology, materials and processes for desalination, water softening and salt removal.

    PubMed

    Alaei Shahmirzadi, Mohammad Amin; Hosseini, Seyed Saeid; Luo, Jianquan; Ortiz, Inmaculada

    2018-06-01

    Desalination and softening of sea, brackish, and ground water are becoming increasingly important solutions to overcome water shortage challenges. Various technologies have been developed for salt removal from water resources, including multi-stage flash, multi-effect distillation, ion exchange, reverse osmosis, nanofiltration, electrodialysis, and adsorption. Recently, removal of solutes by adsorption onto selective adsorbents has shown promising perspectives. Different types of adsorbents, such as zeolites, carbon nanotubes (CNTs), activated carbons, graphenes, magnetic adsorbents, and low-cost adsorbents (natural materials, industrial by-products and wastes, bio-sorbents, and biopolymers), have been synthesized and examined for salt removal from aqueous solutions. It is evident from the literature that existing adsorbents have good potential for desalination and water softening. Nano-adsorbents offer desirable surface area and adsorption capacity, though they are not yet available at economically viable prices and still pose challenges in recovery and reuse. On the other hand, natural and modified adsorbents seem to be efficient alternatives for this application compared with other types of adsorbents, owing to their availability and low cost. Some novel adsorbents are also emerging. Generally, a few issues, such as low selectivity and adsorption capacity, process efficiency, complexity in preparation or synthesis, and problems associated with recovery and reuse, require considerable improvements in research and process development. Moreover, large-scale applications of sorbents and their practical utility need to be evaluated for possible commercialization and scale-up. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Silicon-on ceramic process: Silicon sheet growth and device development for the large-area silicon sheet task of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Grung, B. L.; Heaps, J. D.; Schmit, F. M.; Schuldt, S. B.; Zook, J. D.

    1981-01-01

    The technical feasibility of producing solar-cell-quality sheet silicon to meet the Department of Energy (DOE) 1986 overall price goal of $0.70/watt was investigated. With the silicon-on-ceramic (SOC) approach, a low-cost ceramic substrate is coated with large-grain polycrystalline silicon by unidirectional solidification of molten silicon. This effort was divided into several areas of investigation in order to most efficiently meet the goals of the program. These areas include: (1) dip-coating; (2) continuous coating, designated SCIM coating, an acronym for Silicon Coating by an Inverted Meniscus; (3) material characterization; (4) cell fabrication and evaluation; and (5) theoretical analysis. Both coating approaches were successful in producing thin layers of large-grain, solar-cell-quality silicon. The dip-coating approach was initially investigated and considerable effort was given to this technique. The SCIM technique was then adopted because of its scale-up potential and its capability to produce large areas of SOC more conveniently.

  1. Large-scale cauliflower-shaped hierarchical copper nanostructures for efficient photothermal conversion

    NASA Astrophysics Data System (ADS)

    Fan, Peixun; Wu, Hui; Zhong, Minlin; Zhang, Hongjun; Bai, Benfeng; Jin, Guofan

    2016-07-01

    Efficient solar energy harvesting and photothermal conversion have essential importance for many practical applications. Here, we present a laser-induced cauliflower-shaped hierarchical surface nanostructure on a copper surface, which exhibits extremely high omnidirectional absorption efficiency over a broad electromagnetic spectral range from the UV to the near-infrared region. The measured average hemispherical absorptance is as high as 98% within the wavelength range of 200-800 nm, and the angle dependent specular reflectance stays below 0.1% within the 0-60° incident angle. Such a structured copper surface can exhibit an apparent heating up effect under the sunlight illumination. In the experiment of evaporating water, the structured surface yields an overall photothermal conversion efficiency over 60% under an illuminating solar power density of ~1 kW m-2. The presented technology provides a cost-effective, reliable, and simple way for realizing broadband omnidirectional light absorptive metal surfaces for efficient solar energy harvesting and utilization, which is highly demanded in various light harvesting, anti-reflection, and photothermal conversion applications. Since the structure is directly formed by femtosecond laser writing, it is quite suitable for mass production and can be easily extended to a large surface area. Electronic supplementary information (ESI) available: XRD patterns of the fs laser structured Cu surface as produced and after the photothermal conversion test, directly measured temperature values on Cu surfaces, temperature rise on Cu surfaces at varied solar irradiation angles, comparison of the white light and IR images of the structured Cu surface with the polished Cu surface, temperature rise on the peripheral zones of the blue coating surface. See DOI: 10.1039/c6nr03662g

  2. Possibility of Database Research as a Means of Pharmacovigilance in Japan Based on a Comparison with Sertraline Postmarketing Surveillance.

    PubMed

    Hirano, Yoko; Asami, Yuko; Kuribayashi, Kazuhiko; Kitazaki, Shigeru; Yamamoto, Yuji; Fujimoto, Yoko

    2018-05-01

    Many pharmacoepidemiologic studies using large-scale databases have recently been utilized to evaluate the safety and effectiveness of drugs in Western countries. In Japan, however, conventional methodology has been applied to postmarketing surveillance (PMS) to collect safety and effectiveness information on new drugs to meet regulatory requirements. Conventional PMS entails enormous costs and resources despite being an uncontrolled observational study method. This study is aimed at examining the possibility of database research as a more efficient pharmacovigilance approach by comparing a health care claims database and PMS with regard to the characteristics and safety profiles of sertraline-prescribed patients. The characteristics of sertraline-prescribed patients recorded in a large-scale Japanese health insurance claims database developed by MinaCare Co. Ltd. were scanned and compared with the PMS results. We also explored the possibility of detecting signals indicative of adverse reactions based on the claims database by using sequence symmetry analysis. Diabetes mellitus, hyperlipidemia, and hyperthyroidism served as exploratory events, and their detection criteria for the claims database were reported by the Pharmaceuticals and Medical Devices Agency in Japan. Most of the characteristics of sertraline-prescribed patients in the claims database did not differ markedly from those in the PMS. There was no tendency for higher risks of the exploratory events after exposure to sertraline, and this was consistent with sertraline's known safety profile. Our results support the concept of using database research as a cost-effective pharmacovigilance tool that is free of selection bias. Further investigation using database research is required to confirm our preliminary observations. Copyright © 2018. Published by Elsevier Inc.
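    The sequence symmetry analysis mentioned above compares, across patients, how often the event of interest is first recorded after drug initiation versus before it; a crude sequence ratio well above 1 suggests an association. A minimal sketch under simplified assumptions (hypothetical dates, and no null-effect adjustment for background prescription trends, which a real analysis would include):

    ```python
    from datetime import date

    def crude_sequence_ratio(pairs):
        """pairs: (first_drug_date, first_event_date) per patient.
        Ratio of patients whose event first appears after drug initiation
        to those whose event first appears before it."""
        after = sum(1 for drug, event in pairs if event > drug)
        before = sum(1 for drug, event in pairs if event < drug)
        return after / before if before else float("inf")

    # Hypothetical cohort of 6 patients: the event follows the drug in 4,
    # precedes it in 2, so the crude sequence ratio is 4 / 2 = 2.0.
    patients = [
        (date(2017, 1, 10), date(2017, 3, 1)),
        (date(2017, 2, 5), date(2017, 2, 20)),
        (date(2017, 1, 1), date(2016, 12, 1)),
        (date(2017, 4, 2), date(2017, 6, 30)),
        (date(2017, 5, 5), date(2017, 1, 9)),
        (date(2017, 3, 3), date(2017, 7, 7)),
    ]
    print(crude_sequence_ratio(patients))  # 2.0
    ```

    In the study above, a ratio near 1 for the exploratory events (diabetes mellitus, hyperlipidemia, hyperthyroidism) is what "no tendency for higher risks" corresponds to.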

  3. Space Weather Research at the National Science Foundation

    NASA Astrophysics Data System (ADS)

    Moretto, T.

    2015-12-01

    There is growing recognition that the space environment can have substantial, deleterious impacts on society. Consequently, research enabling specification and forecasting of hazardous space effects has become of great importance and urgency. This research requires studying the entire Sun-Earth system to understand the coupling of regions all the way from the source of disturbances in the solar atmosphere to the Earth's upper atmosphere. The traditional, region-based structure of research programs in Solar and Space physics is ill suited to fully support the change in research directions that the problem of space weather dictates. On the observational side, dense, distributed networks of observations are required to capture the full large-scale dynamics of the space environment. However, the cost of implementing these is typically prohibitive, especially for measurements in space. Thus, by necessity, the implementation of such new capabilities needs to build on creative and unconventional solutions. A particularly powerful idea is the utilization of new developments in data engineering and informatics research (big data). These new technologies make it possible to build systems that can collect and process huge amounts of noisy and inaccurate data and extract from them useful information. The shift in emphasis towards system-level science for geospace also necessitates the development of large-scale and multi-scale models. The development of large-scale models capable of capturing the global dynamics of the Earth's space environment requires investment in research team efforts that go beyond what can typically be funded under the traditional grants programs. This calls for effective interdisciplinary collaboration and efficient leveraging of resources both nationally and internationally. This presentation will provide an overview of current and planned initiatives, programs, and activities at the National Science Foundation pertaining to space weather research.

  4. Promising and Reversible Electrolyte with Thermal Switching Behavior for Safer Electrochemical Storage Devices.

    PubMed

    Shi, Yunhui; Zhang, Qian; Zhang, Yan; Jia, Limin; Xu, Xinhua

    2018-02-28

    A major stumbling block in large-scale adoption of high-energy-density electrochemical devices has been safety issues. Methods to control thermal runaway are limited to providing one-time thermal protection. Herein, we developed a simple and reversible thermoresponsive electrolyte system that efficiently shuts down current flow in response to temperature changes. The thermal management is ascribed to the thermally activated sol-gel transition of methyl cellulose solution, associated with the concentration of ions that can either move freely between isolated chains or be restricted by entangled molecular chains. We studied the effect of cellulose concentration, substituent types, and operating temperature on the electrochemical performance, demonstrating a capacity loss of approximately 90% of the initial value. Moreover, this is a cost-effective approach that has the potential for use in practical electrochemical storage devices.

  5. Fungal-assisted algal flocculation: application in wastewater treatment and biofuel production.

    PubMed

    Muradov, Nazim; Taha, Mohamed; Miranda, Ana F; Wrede, Digby; Kadali, Krishna; Gujar, Amit; Stevenson, Trevor; Ball, Andrew S; Mouradov, Aidyn

    2015-01-01

    The microalgal-based industries are facing a number of important challenges that in turn affect their economic viability. Arguably the most important of these are associated with the high costs of harvesting and dewatering of the microalgal cells, the costs and sustainability of nutrient supplies, and costly methods for large-scale oil extraction. Existing harvesting technologies, which can account for up to 50% of the total cost, are not economically feasible because they either require too much energy or rely on the addition of chemicals. Fungal-assisted flocculation is currently receiving increased attention because of its high harvesting efficiency. Moreover, some fungal and microalgal strains are well known for their ability to treat wastewater, generating biomass which represents a renewable and sustainable feedstock for bioenergy production. We screened 33 fungal strains, isolated from compost, straws and soil, for their lipid content and flocculation efficiencies against representatives of microalgae commercially used for biodiesel production, namely the heterotrophic freshwater microalga Chlorella protothecoides and the marine microalga Tetraselmis suecica. Lipid levels and composition were analyzed in fungal-algal pellets grown on media containing alternative carbon, nitrogen and phosphorus sources from wheat straw and swine wastewater, respectively. The biomass of fungal-algal pellets grown on swine wastewater was used as feedstock for the production of value-added chemicals, biogas, bio-solids and liquid petrochemicals through pyrolysis. Co-cultivation of microalgae and filamentous fungus increased total biomass production, lipid yield and wastewater bioremediation efficiency.
    Fungal-assisted microalgal flocculation shows significant potential for solving the major challenges facing the commercialization of microalgal biotechnology, namely (i) the efficient and cost-effective harvesting of freshwater and seawater algal strains; (ii) enhancement of total oil production and optimization of its composition; and (iii) nutrient supply through recovery of the primary nutrients (nitrogen and phosphates) and microelements from wastewater. The biomass generated was thermochemically converted into biogas, bio-solids and a range of liquid petrochemicals including straight-chain C12 to C21 alkanes which can be directly used as a glycerine-free component of biodiesel. Pyrolysis represents an efficient alternative strategy for biofuel production from species with tough cell walls such as fungi and fungal-algal pellets.

  6. Cost-effectiveness of HIV prevention for high-risk groups at scale: an economic evaluation of the Avahan programme in south India.

    PubMed

    Vassall, Anna; Pickles, Michael; Chandrashekar, Sudhashree; Boily, Marie-Claude; Shetty, Govindraj; Guinness, Lorna; Lowndes, Catherine M; Bradley, Janet; Moses, Stephen; Alary, Michel; Vickerman, Peter

    2014-09-01

    Avahan is a large-scale, HIV preventive intervention, targeting high-risk populations in south India. We assessed the cost-effectiveness of Avahan to inform global and national funding institutions who are considering investing in worldwide HIV prevention in concentrated epidemics. We estimated cost effectiveness from a programme perspective in 22 districts in four high-prevalence states. We used the UNAIDS Costing Guidelines for HIV Prevention Strategies as the basis for our costing method, and calculated effect estimates using a dynamic transmission model of HIV and sexually transmitted disease transmission that was parameterised and fitted to locally observed behavioural and prevalence trends. We calculated incremental cost-effective ratios (ICERs), comparing the incremental cost of Avahan per disability-adjusted life-year (DALY) averted versus a no-Avahan counterfactual scenario. We also estimated incremental cost per HIV infection averted and incremental cost per person reached. Avahan reached roughly 150 000 high-risk individuals between 2004 and 2008 in the 22 districts studied, at a mean cost per person reached of US$327 during the 4 years. This reach resulted in an estimated 61 000 HIV infections averted, with roughly 11 000 HIV infections averted in the general population, at a mean incremental cost per HIV infection averted of $785 (SD 166). We estimate that roughly 1 million DALYs were averted across the 22 districts, at a mean incremental cost per DALY averted of $46 (SD 10). Future antiretroviral treatment (ART) cost savings during the lifetime of the cohort exposed to HIV prevention were estimated to be more than $77 million (compared with the slightly more than $50 million spent on Avahan in the 22 districts during the 4 years of the study). 
    This study provides evidence that the investment in targeted HIV prevention programmes in south India has been cost effective, and is likely to be cost saving if a commitment is made to provide ART to all who can benefit from it. Policy makers should consider funding and sustaining large-scale targeted HIV prevention programmes in India and beyond. Bill & Melinda Gates Foundation. Copyright © 2014 Vassall et al. Open Access article distributed under the terms of CC BY-NC-ND.
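The headline metric in the abstract, the incremental cost-effectiveness ratio (ICER), reduces to a simple quotient. The sketch below uses illustrative round numbers (not the study's model outputs, which come from a fitted dynamic transmission model):

```python
# ICER sketch: incremental cost divided by incremental effect, e.g. dollars
# per DALY averted, relative to a counterfactual. Numbers are illustrative
# only; the Avahan estimates were produced by a dynamic transmission model.

def icer(cost_intervention, cost_counterfactual,
         effect_intervention, effect_counterfactual):
    """Incremental cost-effectiveness ratio: $ per unit of effect gained."""
    d_cost = cost_intervention - cost_counterfactual
    d_effect = effect_intervention - effect_counterfactual
    return d_cost / d_effect

# Hypothetical: programme costs $50M more than doing nothing and averts
# 1.0 million DALYs relative to the no-programme counterfactual.
print(icer(50_000_000, 0, 1_000_000, 0))  # 50.0 dollars per DALY averted
```

The same quotient with HIV infections averted in the denominator yields the cost per infection averted reported in the abstract.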

  7. Spatially explicit forecasts of large wildland fire probability and suppression costs for California

    Treesearch

    Haiganoush Preisler; Anthony L. Westerling; Krista M. Gebert; Francisco Munoz-Arriola; Thomas P. Holmes

    2011-01-01

    In the last decade, increases in fire activity and suppression expenditures have caused budgetary problems for federal land management agencies. Spatial forecasts of upcoming fire activity and costs have the potential to help reduce expenditures, and increase the efficiency of suppression efforts, by enabling them to focus resources where they have the greatest effect...

  8. Comprehensive Evaluation of Biological Growth Control by Chlorine-Based Biocides in Power Plant Cooling Systems Using Tertiary Effluent

    PubMed Central

    Chien, Shih-Hsiang; Dzombak, David A.; Vidic, Radisav D.

    2013-01-01

    Recent studies have shown that treated municipal wastewater can be a reliable cooling water alternative to fresh water. However, elevated nutrient concentration and microbial population in wastewater lead to aggressive biological proliferation in the cooling system. Three chlorine-based biocides were evaluated for the control of biological growth in cooling systems using tertiary treated wastewater as makeup, based on their biocidal efficiency and cost-effectiveness. Optimal chemical regimens for achieving successful biological growth control were elucidated based on batch-, bench-, and pilot-scale experiments. Biocide usage and biological activity in planktonic and sessile phases were carefully monitored to understand biological growth potential and biocidal efficiency of the three disinfectants in this particular environment. Water parameters, such as temperature, cycles of concentration, and ammonia concentration in recirculating water, critically affected the biocide performance in recirculating cooling systems. Bench-scale recirculating tests were shown to adequately predict the biocide residual required for a pilot-scale cooling system. Optimal residuals needed for proper biological growth control were 1, 2–3, and 0.5–1 mg/L as Cl2 for NaOCl, preformed NH2Cl, and ClO2, respectively. Pilot-scale tests also revealed that Legionella pneumophila was absent from these cooling systems when using the disinfectants evaluated in this study. Cost analysis showed that NaOCl is the most cost-effective for controlling biological growth in power plant recirculating cooling systems using tertiary-treated wastewater as makeup. PMID:23781129

  9. Comprehensive Evaluation of Biological Growth Control by Chlorine-Based Biocides in Power Plant Cooling Systems Using Tertiary Effluent.

    PubMed

    Chien, Shih-Hsiang; Dzombak, David A; Vidic, Radisav D

    2013-06-01

    Recent studies have shown that treated municipal wastewater can be a reliable cooling water alternative to fresh water. However, elevated nutrient concentration and microbial population in wastewater lead to aggressive biological proliferation in the cooling system. Three chlorine-based biocides were evaluated for the control of biological growth in cooling systems using tertiary treated wastewater as makeup, based on their biocidal efficiency and cost-effectiveness. Optimal chemical regimens for achieving successful biological growth control were elucidated based on batch-, bench-, and pilot-scale experiments. Biocide usage and biological activity in planktonic and sessile phases were carefully monitored to understand biological growth potential and biocidal efficiency of the three disinfectants in this particular environment. Water parameters, such as temperature, cycles of concentration, and ammonia concentration in recirculating water, critically affected the biocide performance in recirculating cooling systems. Bench-scale recirculating tests were shown to adequately predict the biocide residual required for a pilot-scale cooling system. Optimal residuals needed for proper biological growth control were 1, 2-3, and 0.5-1 mg/L as Cl2 for NaOCl, preformed NH2Cl, and ClO2, respectively. Pilot-scale tests also revealed that Legionella pneumophila was absent from these cooling systems when using the disinfectants evaluated in this study. Cost analysis showed that NaOCl is the most cost-effective for controlling biological growth in power plant recirculating cooling systems using tertiary-treated wastewater as makeup.

  10. Large-scale hybrid poplar production economics: 1995 Alexandria, Minnesota establishment cost and management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Downing, M.; Langseth, D.; Stoffel, R.

    1996-12-31

    The purpose of this project was to track and monitor the costs of planting, maintaining, and monitoring large-scale commercial plantings of hybrid poplar in Minnesota. These cost data assist potential growers and purchasers of this resource in determining how supply and demand may be secured through developing markets.

  11. Does integration of HIV and SRH services achieve economies of scale and scope in practice? A cost function analysis of the Integra Initiative.

    PubMed

    Obure, Carol Dayo; Guinness, Lorna; Sweeney, Sedona; Initiative, Integra; Vassall, Anna

    2016-03-01

    Policy-makers have long argued about the potential efficiency gains and cost savings from integrating HIV and sexual reproductive health (SRH) services, particularly in resource-constrained settings with generalised HIV epidemics. However, until now, little empirical evidence exists on whether the hypothesised efficiency gains associated with such integration can be achieved in practice. We estimated a quadratic cost function using data obtained from 40 health facilities, over a 2-year-period, in Kenya and Swaziland. The quadratic specification enables us to determine the existence of economies of scale and scope. The empirical results reveal that at the current output levels, only HIV counselling and testing services are characterised by service-specific economies of scale. However, no overall economies of scale exist as all outputs are increased. The results also indicate cost complementarities between cervical cancer screening and HIV care; post-natal care and HIV care and family planning and sexually transmitted infection treatment combinations only. The results from this analysis reveal that contrary to expectation, efficiency gains from the integration of HIV and SRH services, if any, are likely to be modest. Efficiency gains are likely to be most achievable in settings that are currently delivering HIV and SRH services at a low scale with high levels of fixed costs. The presence of cost complementarities for only three service combinations implies that careful consideration of setting-specific clinical practices and the extent to which they can be combined should be made when deciding which services to integrate. NCT01694862. Published by the BMJ Publishing Group Limited.
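Given any estimated multi-output cost function like the quadratic one described above, the economies-of-scope test is mechanical: compare the cost of producing two services jointly against producing them separately. The sketch below uses a toy quadratic form with made-up coefficients (not the Integra estimates); a negative interaction term is what creates the cost complementarities the abstract reports.

```python
# Economies-of-scope check on a toy two-output quadratic cost function.
# Coefficients are illustrative, not the Integra Initiative's estimates.
# A negative interaction coefficient b12 models cost complementarity.

def cost(q1, q2, a1=10.0, a2=8.0, b11=0.02, b22=0.03, b12=-0.04, fixed=100.0):
    """C(q1, q2): fixed cost plus linear, quadratic, and interaction terms."""
    return fixed + a1*q1 + a2*q2 + b11*q1*q1 + b22*q2*q2 + b12*q1*q2

def scope_economies(q1, q2):
    """Positive value -> joint production is cheaper than separate production."""
    joint = cost(q1, q2)
    separate = cost(q1, 0) + cost(0, q2)  # two stand-alone facilities
    return (separate - joint) / joint

print(round(scope_economies(100, 100), 3))  # 0.25 -> 25% cheaper jointly
```

With the interaction term set to zero, the measure would reflect only the duplicated fixed cost, matching the abstract's point that gains concentrate where fixed costs are high.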

  12. Does integration of HIV and SRH services achieve economies of scale and scope in practice? A cost function analysis of the Integra Initiative

    PubMed Central

    Obure, Carol Dayo; Guinness, Lorna; Sweeney, Sedona; Initiative, Integra; Vassall, Anna

    2016-01-01

    Objective Policy-makers have long argued about the potential efficiency gains and cost savings from integrating HIV and sexual reproductive health (SRH) services, particularly in resource-constrained settings with generalised HIV epidemics. However, until now, little empirical evidence exists on whether the hypothesised efficiency gains associated with such integration can be achieved in practice. Methods We estimated a quadratic cost function using data obtained from 40 health facilities, over a 2-year-period, in Kenya and Swaziland. The quadratic specification enables us to determine the existence of economies of scale and scope. Findings The empirical results reveal that at the current output levels, only HIV counselling and testing services are characterised by service-specific economies of scale. However, no overall economies of scale exist as all outputs are increased. The results also indicate cost complementarities between cervical cancer screening and HIV care; post-natal care and HIV care and family planning and sexually transmitted infection treatment combinations only. Conclusions The results from this analysis reveal that contrary to expectation, efficiency gains from the integration of HIV and SRH services, if any, are likely to be modest. Efficiency gains are likely to be most achievable in settings that are currently delivering HIV and SRH services at a low scale with high levels of fixed costs. The presence of cost complementarities for only three service combinations implies that careful consideration of setting-specific clinical practices and the extent to which they can be combined should be made when deciding which services to integrate. Trial registration number NCT01694862. PMID:26438349

  13. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  14. Polyaluminium chloride as an alternative to alum for the direct filtration of drinking water.

    PubMed

    Zarchi, Idit; Friedler, Eran; Rebhun, Menahem

    2013-01-01

    The efficiency of various polyaluminium chloride coagulants (PACls) was compared to that of aluminium sulfate (alum) in the coagulation-flocculation process preceding direct filtration in drinking water treatment. The comparative study consisted of two separate yet complementary series of experiments: the first series included short (5-7 h) and long (24 h) filter runs conducted at a pilot filtration plant equipped with large filter columns that simulated full-scale filters. Partially treated surface water from the Sea of Galilee, characterized by very low turbidity (~1 NTU), was used. In the second series of experiments, speciation of aluminium in situ was investigated using the ferron assay method. Results from the pilot-scale study indicate that most PACls were as efficient as, or more efficient than, alum as coagulants for direct filtration of surface water, without requiring acid addition for pH adjustment and subsequent base addition for re-stabilizing the water. Consequently, cost analysis of the chemicals needed for the process showed that treatment with PACl would be significantly less costly than treatment with alum. The aluminium speciation experiments revealed that the performance of the coagulant is more influenced by the species present during the coagulation process than by those present in the original reagents.

  15. Mixing Carrots and Sticks to Conserve Forests in the Brazilian Amazon: A Spatial Probabilistic Modeling Approach

    PubMed Central

    Börner, Jan; Marinho, Eduardo; Wunder, Sven

    2015-01-01

    Annual forest loss in the Brazilian Amazon had in 2012 declined to less than 5,000 sqkm, from over 27,000 in 2004. Mounting empirical evidence suggests that changes in Brazilian law enforcement strategy and the related governance system may account for a large share of the overall success in curbing deforestation rates. At the same time, Brazil is experimenting with alternative approaches to compensate farmers for conservation actions through economic incentives, such as payments for environmental services, at various administrative levels. We develop a spatially explicit simulation model for deforestation decisions in response to policy incentives and disincentives. The model builds on elements of optimal enforcement theory and introduces the notion of imperfect payment contract enforcement in the context of avoided deforestation. We implement the simulations using official deforestation statistics and data collected from field-based forest law enforcement operations in the Amazon region. We show that a large-scale integration of payments with the existing regulatory enforcement strategy involves a tradeoff between the cost-effectiveness of forest conservation and landholder incomes. Introducing payments as a complementary policy measure increases policy implementation cost, reduces income losses for those hit hardest by law enforcement, and can provide additional income to some land users. The magnitude of the tradeoff varies in space, depending on deforestation patterns, conservation opportunity and enforcement costs. Enforcement effectiveness becomes a key determinant of efficiency in the overall policy mix. PMID:25650966
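The policy mix described in the record above (enforcement disincentives plus conservation payments, with imperfect contract enforcement) can be caricatured as a single decision rule. The sketch below is a toy model with hypothetical parameters, not the authors' spatially explicit simulation: a landholder clears a parcel only if expected deforestation profit, net of the expected fine, exceeds the conservation payment actually expected to be received.

```python
# Toy decision rule for the enforcement-plus-payments policy mix.
# All parameter values are hypothetical; the paper's model is spatially
# explicit and fitted to official deforestation and enforcement data.

def deforests(profit, fine, detection_prob, payment, contract_enforcement=1.0):
    """True if the landholder chooses to clear the parcel.

    contract_enforcement < 1 models imperfect payment-contract enforcement:
    the conservation payment only deters with that probability.
    """
    expected_net_profit = profit - detection_prob * fine
    expected_payment = contract_enforcement * payment
    return expected_net_profit > expected_payment

print(deforests(profit=100, fine=300, detection_prob=0.3, payment=30))  # False
print(deforests(profit=100, fine=300, detection_prob=0.1, payment=30))  # True
```

Lowering `detection_prob` or `contract_enforcement` flips the decision toward clearing, which is the sense in which enforcement effectiveness becomes the key determinant of efficiency in the overall policy mix.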

  16. Mixing carrots and sticks to conserve forests in the Brazilian Amazon: a spatial probabilistic modeling approach.

    PubMed

    Börner, Jan; Marinho, Eduardo; Wunder, Sven

    2015-01-01

    Annual forest loss in the Brazilian Amazon had in 2012 declined to less than 5,000 sqkm, from over 27,000 in 2004. Mounting empirical evidence suggests that changes in Brazilian law enforcement strategy and the related governance system may account for a large share of the overall success in curbing deforestation rates. At the same time, Brazil is experimenting with alternative approaches to compensate farmers for conservation actions through economic incentives, such as payments for environmental services, at various administrative levels. We develop a spatially explicit simulation model for deforestation decisions in response to policy incentives and disincentives. The model builds on elements of optimal enforcement theory and introduces the notion of imperfect payment contract enforcement in the context of avoided deforestation. We implement the simulations using official deforestation statistics and data collected from field-based forest law enforcement operations in the Amazon region. We show that a large-scale integration of payments with the existing regulatory enforcement strategy involves a tradeoff between the cost-effectiveness of forest conservation and landholder incomes. Introducing payments as a complementary policy measure increases policy implementation cost, reduces income losses for those hit hardest by law enforcement, and can provide additional income to some land users. The magnitude of the tradeoff varies in space, depending on deforestation patterns, conservation opportunity and enforcement costs. Enforcement effectiveness becomes a key determinant of efficiency in the overall policy mix.

  17. Fiber supercapacitors utilizing pen ink for flexible/wearable energy storage.

    PubMed

    Fu, Yongping; Cai, Xin; Wu, Hongwei; Lv, Zhibin; Hou, Shaocong; Peng, Ming; Yu, Xiao; Zou, Dechun

    2012-11-08

    A novel type of flexible fiber supercapacitor for wearable use, composed of two fiber electrodes, a helical spacer wire, and an electrolyte, is demonstrated. In the carbon-based fiber supercapacitor (FSC), which has high capacitance performance, commercial pen ink is directly utilized as the electrochemical material. FSCs have potential benefits in the pursuit of low-cost, large-scale, and efficient flexible/wearable energy storage systems. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A numerical analysis of a high-temperature helium reactor power plant for co-production of hydrogen and electricity

    NASA Astrophysics Data System (ADS)

    Dudek, M.; Podsadna, J.; Jaszczur, M.

    2016-09-01

    In the present work, the feasibility of using a high temperature gas cooled nuclear reactor (HTR) for electricity generation and hydrogen production is analysed. The HTR is combined with a steam and a gas turbine, as well as with a system for heat delivery for medium temperature hydrogen production. Industrial-scale hydrogen production using the copper-chlorine (Cu-Cl) thermochemical cycle is considered and compared with high temperature electrolysis. The presented cycle shows a very promising route for continuous, efficient, large-scale and environmentally benign hydrogen production without CO2 emissions. The results show that the integration of a high temperature helium reactor with a combined cycle for electric power generation and hydrogen production may reach very high efficiency and could possibly lead to a significant decrease of hydrogen production costs.

  19. Direct transfer of wafer-scale graphene films

    NASA Astrophysics Data System (ADS)

    Kim, Maria; Shah, Ali; Li, Changfeng; Mustonen, Petri; Susoma, Jannatul; Manoocheri, Farshid; Riikonen, Juha; Lipsanen, Harri

    2017-09-01

    Flexible electronics serve as the ubiquitous platform for the next-generation life science, environmental monitoring, display, and energy conversion applications. Outstanding multi-functional mechanical, thermal, electrical, and chemical properties of graphene combined with transparency and flexibility solidifies it as ideal for these applications. Although chemical vapor deposition (CVD) enables cost-effective fabrication of high-quality large-area graphene films, one critical bottleneck is an efficient and reproducible transfer of graphene to flexible substrates. We explore and describe a direct transfer method of 6-inch monolayer CVD graphene onto transparent and flexible substrate based on direct vapor phase deposition of conformal parylene on as-grown graphene/copper (Cu) film. The method is straightforward, scalable, cost-effective and reproducible. The transferred film showed high uniformity, lack of mechanical defects and sheet resistance for doped graphene as low as 18 Ω/sq and 96.5% transparency at 550 nm while withstanding high strain. To underline that the introduced technique is capable of delivering graphene films for next-generation flexible applications we demonstrate a wearable capacitive controller, a heater, and a self-powered triboelectric sensor.

  20. Production cost structure in US outpatient physical therapy health care.

    PubMed

    Lubiani, Gregory G; Okunade, Albert A

    2013-02-01

    This paper investigates the technology cost structure in US physical therapy care. We exploit formal economic theories and rich national data on providers to tease out implications for operational cost efficiencies. The 2008-2009 dataset, comprising over 19 000 bi-weekly, site-specific physical therapy center observations across 28 US states and Occupational Employment Statistics data (Bureau of Labor Statistics), includes measures of output, three labor types (clinical, support, and administrative), and facilities (capital). We discuss findings from the iterative seemingly unrelated regression estimation system model. The generalized translog cost estimates indicate a well-behaved underlying technology structure. We also find the following: (i) factor demands are downwardly sloped; (ii) pair-wise factor relationships largely reflect substitutions; (iii) factor demand for physical therapists is more inelastic compared with that for administrative staff; and (iv) diminishing scale economies exist at the 25%, 50%, and 75% output (patient visits) levels. Our findings advance the timely economic understanding of operations in an increasingly important segment of the medical care sector that has until now (because of data paucity) been missing from healthcare efficiency analysis. Our work further provides baseline estimates for comparing operational efficiencies in physical therapy care after implementation of the 2010 US healthcare reforms. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In this context, the registration problem is formulated using a discrete Markov random field objective function. First, to reduce the dimensionality of the problem, we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed as a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, plus a smoothness term that penalizes local deviations of the deformation field according to a neighborhood system on the grid. The search space is then quantized, resulting in a fully discrete model. In order to account for large deformations and produce results at a high resolution level, a multi-scale incremental approach is considered, in which the optimal solution is iteratively updated through successive morphings of the source towards the target image. Efficient linear programming based on primal-dual principles is used to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potential of our approach.

  2. The cost of a large-scale hollow fibre MBR.

    PubMed

    Verrecht, Bart; Maere, Thomas; Nopens, Ingmar; Brepols, Christoph; Judd, Simon

    2010-10-01

    A cost sensitivity analysis was carried out for a full-scale hollow fibre membrane bioreactor to quantify the effect of design choices and operational parameters on cost. Different options were subjected to a long term dynamic influent profile and evaluated using ASM1 for effluent quality, aeration requirements and sludge production. The results were used to calculate a net present value (NPV), incorporating both capital expenditure (capex), based on costs obtained from equipment manufacturers and full-scale plants, and operating expenditure (opex), accounting for energy demand, sludge production and chemical cleaning costs. Results show that the amount of contingency built in to cope with changes in feedwater flow has a large impact on NPV. Deviation from a constant daily flow increases NPV as mean plant utilisation decreases. Conversely, adding a buffer tank reduces NPV, since less membrane surface is required when average plant utilisation increases. Membrane cost and lifetime is decisive in determining NPV: an increased membrane replacement interval from 5 to 10 years reduces NPV by 19%. Operation at higher SRT increases the NPV, since the reduced costs for sludge treatment are offset by correspondingly higher aeration costs at higher MLSS levels, though the analysis is very sensitive to sludge treatment costs. A higher sustainable flux demands greater membrane aeration, but the subsequent opex increase is offset by the reduced membrane area and the corresponding lower capex. Copyright © 2010 Elsevier Ltd. All rights reserved.
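The NPV comparison driving the analysis above is straightforward to sketch. The figures below are hypothetical round numbers, not the paper's (whose opex comes from ASM1 simulations of energy, sludge, and cleaning costs): capex is paid up front, opex recurs annually and is discounted, and the design option with the lower NPV is preferred.

```python
# Minimal NPV sketch for comparing MBR design options. All cost figures and
# the discount rate are hypothetical; the paper's opex terms come from ASM1
# simulations (aeration energy, sludge production, chemical cleaning).

def npv(capex, annual_opex, years, discount_rate):
    """Present cost: capex now plus opex each year, discounted."""
    return capex + sum(annual_opex / (1 + discount_rate) ** t
                       for t in range(1, years + 1))

# Option A: full membrane area sized for peak flow, no buffer tank.
# Option B: buffer tank smooths the flow, so less membrane area (lower capex)
# at slightly higher opex.
a = npv(capex=5_000_000, annual_opex=400_000, years=20, discount_rate=0.05)
b = npv(capex=4_200_000, annual_opex=430_000, years=20, discount_rate=0.05)
print(f"A: {a:,.0f}  B: {b:,.0f}  prefer {'B' if b < a else 'A'}")
```

With these assumed numbers the buffer-tank option wins, mirroring the abstract's finding that increasing average plant utilisation reduces NPV; doubling the assumed membrane replacement interval would enter the same way, as a smaller recurring cost.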

  3. Transitioning glass-ceramic scintillators for diagnostic x-ray imaging from the laboratory to commercial scale

    NASA Astrophysics Data System (ADS)

    Beckert, M. Brooke; Gallego, Sabrina; Elder, Eric; Nadler, Jason

    2016-10-01

    This study sought to mitigate risk in transitioning newly developed glass-ceramic scintillator technology from a laboratory concept to a commercial product by identifying the most significant hurdles to increased scale. These included selection of cost-effective raw material sources, investigation of the process parameters with the most significant impact on performance, and synthesis steps that could see the greatest benefit from participation of an industry partner specializing in glass or optical component manufacturing. Efforts focused on enhancing the performance of glass-ceramic nanocomposite scintillators developed specifically for medical imaging via composition and process modifications that ensured efficient capture of incident X-ray energy and emission of scintillation light. The use of cost-effective raw materials and existing manufacturing methods demonstrated proof-of-concept for economically viable alternatives to existing benchmark materials, as well as possible disruptive applications afforded by novel geometries and comparatively lower cost per volume. The authors now seek the expertise of industry to effectively navigate the transition from laboratory demonstrations to pilot-scale production and testing, to convince industry of the viability and usefulness of composite-based scintillators.

  4. Dip coating process: Silicon sheet growth development for the large-area silicon sheet task of the low-cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Heaps, J. D.; Maciolek, R. B.; Zook, J. D.; Harrison, W. B.; Scott, M. W.; Hendrickson, G.; Wolner, H. A.; Nelson, L. D.; Schuller, T. L.; Peterson, A. A.

    1976-01-01

    The technical and economic feasibility of producing solar cell quality sheet silicon by dip-coating one surface of carbonized ceramic substrates with a thin layer of large grain polycrystalline silicon was investigated. The dip-coating methods studied were directed toward a minimum cost process with the ultimate objective of producing solar cells with a conversion efficiency of 10% or greater. The technique shows excellent promise for low-cost, labor-saving scale-up and would provide an end product of sheet silicon with a rigid and strong supportive backing. An experimental dip-coating facility was designed and constructed, and several substrates were successfully dip-coated with areas as large as 25 sq cm and thicknesses of 12 micron to 250 micron. There appears to be no serious limitation on the area of a substrate that could be coated. Of the various substrate materials dip-coated, mullite appears to best satisfy the requirements of the program. An inexpensive process was developed for producing mullite in the desired geometry.

  5. CSP ELEMENTS: High-Temperature Thermochemical Storage with Redox-Stable Perovskites for Concentrating Solar Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Gregory S; Braun, Robert J; Ma, Zhiwen

    This project was motivated by the potential of reducible perovskite oxides for high-temperature, thermochemical energy storage (TCES) to provide dispatchable renewable heat for concentrating solar power (CSP) plants. This project sought to identify and characterize perovskites from earth-abundant cations with high reducibility below 1000 °C for coupling TCES of solar energy to super-critical CO2 (s-CO2) plants that operate above temperature limits (< 600 °C) of current molten-salt storage. Specific TCES > 750 kJ/kg for storage cycles between 500 and 900 °C was targeted with a system cost goal of $15/kWhth. To realize feasibility of TCES systems based on reducible perovskites, our team focused on designing and testing a lab-scale concentrating solar receiver, wherein perovskite particles capture solar energy by fast O2 release and sensible heating at a thermal efficiency of 90% and wall temperatures below 1100 °C. System-level models of the receiver and reoxidation reactor coupled to validated thermochemical materials models can assess approaches to scale up a full TCES system based on reduction/oxidation cycles of perovskite oxides at large scales. After characterizing many Ca-based perovskites for TCES, our team identified strontium-doped calcium manganite Ca1-xSrxMnO3-δ (with x ≤ 0.1) as a composition with adequate stability and specific TCES capacity (> 750 kJ/kg for Ca0.95Sr0.05MnO3-δ) for cycling between air at 500 °C and low-PO2 (10^-4 bar) N2 at 900 °C. Substantial kinetic tests demonstrated that residence times of several minutes in low-PO2 gas were needed for these materials to reach the specific TCES goals with particles of reasonable size for large-scale transport (diameter dp > 200 μm). On the other hand, fast reoxidation kinetics in air enables subsequent rapid heat release in a fluidized reoxidation reactor/heat recovery unit for driving s-CO2 power plants.
Validated material thermochemistry coupled to radiation and convective particle-gas transport models facilitated full TCES system analysis for CSP, and results showed that receiver efficiencies approaching 85% were feasible with wall-to-particle heat transfer coefficients observed in laboratory experiments. Coupling these reactive particle-gas transport models to external SolTrace and CFD models drove design of a reactive-particle receiver with indirect heating through flux spreading. A lab-scale receiver using Ca0.9Sr0.1MnO3-δ was demonstrated at NREL’s High Flux Solar Furnace with particle temperatures reaching 900 °C while wall temperatures remained below 1100 °C, achieving approximately 200 kJ/kg of chemical energy storage. These first demonstrations of on-sun perovskite reduction and the robust modeling tools from this program provide a basis for going forward with improved receiver designs to increase heat fluxes and solar-energy capture efficiencies. Measurements and modeling tools from this project provide the foundations for advancing TCES for CSP and other applications using reducible perovskite oxides from low-cost, earth-abundant elements. A perovskite composition has been identified that has the thermodynamic potential to meet the targeted TCES capacity of 750 kJ/kg over a range of temperatures amenable for integration with s-CO2 cycles. Further research needs to explore ways of accelerating effective particle kinetics through variations in composition and/or reactor/receiver design. Initial demonstrations of on-sun particle reduction for TCES show a need for testing at larger scales with reduced heat losses and improved particle-wall heat transfer. The gained insight into particle-gas transport and reactor design can launch future development of cost-effective, large-scale particle-based TCES as a technology for enabling increased renewable energy penetration.
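
    The storage-density and cost targets quoted above imply a simple back-of-envelope bound: at 750 kJ/kg, one kWhth of storage requires 3600/750 = 4.8 kg of particles. The sketch below makes that arithmetic explicit; treating particle material as the only system cost is a simplifying assumption made here for illustration, not a claim from the report.

```python
# Back-of-envelope bound implied by the report's stated targets.
# Assumption (for illustration only): particles dominate system cost.

SPECIFIC_TCES_KJ_PER_KG = 750.0       # targeted storage density
KJ_PER_KWH = 3600.0
COST_GOAL_USD_PER_KWHTH = 15.0        # targeted system cost

# particle mass needed to store one kWh of thermal energy
mass_per_kwhth_kg = KJ_PER_KWH / SPECIFIC_TCES_KJ_PER_KG

# if particles were the only cost, their price could be at most:
max_particle_usd_per_kg = COST_GOAL_USD_PER_KWHTH / mass_per_kwhth_kg
```

    The result (4.8 kg/kWhth and roughly $3/kg) shows why low-cost, earth-abundant cations are central to the project's cost goal.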

  6. Advanced avionics concepts: Autonomous spacecraft control

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A large increase in space operations activities is expected because of Space Station Freedom (SSF), long-range Lunar base missions, and Mars exploration. Space operations will also increase as a result of space commercialization (especially the increase in satellite networks). It is anticipated that the level of satellite servicing operations will grow tenfold from the current level within the next 20 years. This growth can be sustained only if the cost effectiveness of space operations is improved. Cost effectiveness here means achieving operational efficiency while maintaining mission effectiveness. A concept for advanced avionics, autonomous spacecraft control, is presented that will enable the desired growth, as well as maintain the cost effectiveness (operational efficiency) of satellite servicing operations. The concept of advanced avionics that allows autonomous spacecraft control is described along with a brief description of each component. Some of the benefits of autonomous operations are also described. A technology utilization breakdown is provided in terms of applications.

  7. Light extraction from organic light-emitting diodes for lighting applications by sand-blasting substrates.

    PubMed

    Chen, Shuming; Kwok, Hoi Sing

    2010-01-04

    Light extraction from organic light-emitting diodes (OLEDs) by scattering the light is one of the effective methods for large-area lighting applications. In this paper, we present a very simple and cost-effective method to roughen the substrates and hence scatter the light. By simply sand-blasting the edges and back-side surface of the glass substrates, a 20% improvement in forward efficiency has been demonstrated. Moreover, due to the scattering effect, a constant color over all viewing angles and a uniform light pattern with Lambertian distribution have been obtained. This simple and cost-effective method may be suitable for mass production of large-area OLEDs for lighting applications.

  8. Economical and technical efficiencies evaluation of full scale piggery wastewater treatment BNR plants.

    PubMed

    Oa, S W; Choi, E; Kim, S W; Kwon, K H; Min, K S

    2009-01-01

    This study presents a method for evaluating the economic efficiency of piggery waste treatment plants based on nitrogen removal kinetics; five full-scale plants were monitored intensively for one year under steady-state conditions. The performance data from the surveyed plants were recalculated using a first-order kinetic equation instead of the Monod equation, and the nitrogen removal kinetics were related to COD/TKN ratios. Two plants adopting opposite pre-treatment strategies, 'excess phase separation' and 'minimum phase separation', were compared through life cycle cost (LCC) assessment. Although the two plants follow opposite strategies, they showed similar nitrogen removal efficiencies and similar overall operational and construction costs. The proportions of the constituent cost elements differed, however: electrical and construction costs were inversely related to chemical and operational costs, respectively.
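
    The first-order recalculation mentioned above can be sketched in a few lines. The rate constant and retention time below are invented, and the ideal plug-flow form C = C0·exp(-k·t) is chosen only as the simplest first-order illustration; the study's actual reactor configurations may differ.

```python
import math

# Minimal first-order kinetics sketch for nitrogen removal.
# All parameter values are illustrative, not from the study.

def effluent_tkn_plug_flow(influent_tkn, k_per_day, hrt_days):
    """First-order decay in an ideal plug-flow reactor: C = C0 * exp(-k*t)."""
    return influent_tkn * math.exp(-k_per_day * hrt_days)

def removal_efficiency(influent_tkn, effluent_tkn):
    """Fraction of influent TKN removed."""
    return 1.0 - effluent_tkn / influent_tkn
```

    For example, with an assumed influent TKN of 1000 mg/L, k = 0.5 per day, and a 5-day retention time, the model predicts roughly 92% nitrogen removal; fitting k across plants is what allows a kinetics-based economic comparison.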

  9. Frequency Control of Single Quantum Emitters in Integrated Photonic Circuits

    NASA Astrophysics Data System (ADS)

    Schmidgall, Emma R.; Chakravarthi, Srivatsa; Gould, Michael; Christen, Ian R.; Hestroffer, Karine; Hatami, Fariba; Fu, Kai-Mei C.

    2018-02-01

    Generating entangled graph states of qubits requires high entanglement rates, with efficient detection of multiple indistinguishable photons from separate qubits. Integrating defect-based qubits into photonic devices results in an enhanced photon collection efficiency; however, this typically comes at the cost of reduced defect emission energy homogeneity. Here, we demonstrate that the reduction in defect homogeneity in an integrated device can be partially offset by electric field tuning. Using photonic device-coupled implanted nitrogen vacancy (NV) centers in a GaP-on-diamond platform, we demonstrate large field-dependent tuning ranges and partial stabilization of defect emission energies. These results address some of the challenges of chip-scale entanglement generation.

  10. Frequency Control of Single Quantum Emitters in Integrated Photonic Circuits.

    PubMed

    Schmidgall, Emma R; Chakravarthi, Srivatsa; Gould, Michael; Christen, Ian R; Hestroffer, Karine; Hatami, Fariba; Fu, Kai-Mei C

    2018-02-14

    Generating entangled graph states of qubits requires high entanglement rates with efficient detection of multiple indistinguishable photons from separate qubits. Integrating defect-based qubits into photonic devices results in an enhanced photon collection efficiency; however, this typically comes at the cost of reduced defect emission energy homogeneity. Here, we demonstrate that the reduction in defect homogeneity in an integrated device can be partially offset by electric field tuning. Using photonic device-coupled implanted nitrogen vacancy (NV) centers in a GaP-on-diamond platform, we demonstrate large field-dependent tuning ranges and partial stabilization of defect emission energies. These results address some of the challenges of chip-scale entanglement generation.

  11. Statistical media design for efficient polyhydroxyalkanoate production in Pseudomonas sp. MNNG-S.

    PubMed

    Saranya, V; Rajeswari, V; Abirami, P; Poornimakkani, K; Suguna, P; Shenbagarathai, R

    2016-07-03

    Polyhydroxyalkanoate (PHA) is a promising polymer for various biomedical applications. There is a pressing need to improve the production rate to enable such end uses. When cost-effective production was carried out with cheaper agricultural residues like molasses, traces of toxins were incorporated into the polymer, which made it unfit for biomedical applications. On the other hand, chemically defined media are increasingly popular for the production of compounds with biomedical applications. However, these media do not exhibit favorable characteristics, such as efficient utilization at large scale, compared to complex media. This article aims to determine the specific nutritional requirements of Pseudomonas sp. MNNG-S for efficient production of polyhydroxyalkanoate. Response surface methodology (RSM) was used in this study to statistically design a medium for PHA production based on the interactive effects of five significant variables (sucrose; potassium dihydrogen phosphate; ammonium sulfate; magnesium sulfate; trace elements). The interactive effects of sucrose with ammonium sulfate, ammonium sulfate with combined potassium phosphate, and trace elements with magnesium sulfate were found to be significant (p < .001). The optimization approach adopted in this study increased PHA production more than fourfold (from 0.85 g L⁻¹ to 4.56 g L⁻¹).

  12. Advanced Wear-resistant Nanocomposites for Increased Energy Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, B. A.; Harringa, J. L.; Russel, A. M.

    This report summarizes the work performed by an Ames-led project team under a 4-year DOE-ITP sponsored project titled, 'Advanced Wear-resistant Nanocomposites for Increased Energy Efficiency.' This report serves as the project deliverable for CPS agreement number 15015. The purpose of this project was to develop and commercialize a family of lightweight, bulk composite materials that are highly resistant to degradation by erosive and abrasive wear. These materials, based on AlMgB{sub 14}, are projected to save over 30 TBtu of energy per year when fully implemented in industrial applications, with the associated environmental benefits of eliminating the burning of 1.5 M tons/yr of coal and averting the release of 4.2 M tons/yr of CO{sub 2} into the air. This program targeted the mining, drilling, machining, and dry erosion applications as key platforms for initial commercialization, which include some of the most severe wear conditions in industry. Production-scale manufacturing of this technology has begun through a start-up company, NewTech Ceramics (NTC). This project included providing technical support to NTC in order to facilitate cost-effective mass production of the wear-resistant boride components. Resolution of issues related to processing scale-up, reduction in energy intensity during processing, and improving the quality and performance of the composites, without adding to the cost of processing, were among the primary technical focus areas of this program. Compositional refinements were also investigated in order to achieve the maximum wear resistance. In addition, synthesis of large-scale, single-phase AlMgB{sub 14} powder was conducted for use as PVD sputtering targets for nanocoating applications.

  13. Amphibian and reptile road-kills on tertiary roads in relation to landscape structure: using a citizen science approach with open-access land cover data.

    PubMed

    Heigl, Florian; Horvath, Kathrin; Laaha, Gregor; Zaller, Johann G

    2017-06-26

    Amphibians and reptiles are among the most endangered vertebrate species worldwide. However, little is known about how they are affected by road-kills on tertiary roads and whether the surrounding landscape structure can explain road-kill patterns. The aim of our study was to examine the applicability of open-access remote sensing data for a large-scale citizen science approach to describe spatial patterns of road-killed amphibians and reptiles on tertiary roads. Using a citizen science app, we monitored road-kills of amphibians and reptiles along 97.5 km of tertiary roads covering agricultural, municipal and interurban roads as well as cycling paths in eastern Austria over two seasons. The surrounding landscape was assessed using open-access land cover classes for the region (Coordination of Information on the Environment, CORINE). Hotspot analysis was performed using kernel density estimation (KDE+). Relations between land cover classes and amphibian and reptile road-kills were analysed with conditional probabilities and general linear models (GLM). We also estimated the potential cost-efficiency of a large-scale citizen science monitoring project. We recorded 180 amphibian and 72 reptile road-kills comprising eight species, mainly occurring on agricultural roads. KDE+ analyses revealed a significant clustering of road-killed amphibians and reptiles, which is important information for authorities aiming to mitigate road-kills. Overall, hotspots of amphibian and reptile road-kills were next to the land cover classes arable land, suburban areas and vineyards. Conditional probabilities and GLMs identified road-kills especially next to preferred habitats of green toad, common toad and grass snake, the most frequently found road-killed species. A citizen science approach appeared to be more cost-efficient than monitoring by professional researchers only when more than 400 km of road are monitored. Our findings showed that freely available remote sensing data in combination with a citizen science approach provide a cost-efficient method for identifying and monitoring road-kill hotspots of amphibians and reptiles on a larger scale.
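
    The hotspot-detection step can be illustrated with a bare-bones one-dimensional kernel density estimate of road-kill positions along a road. Real KDE+ additionally tests clusters for statistical significance; this sketch only locates the density peak, and all positions, bandwidths, and names are invented.

```python
import math

# Illustrative 1-D Gaussian KDE over road-kill positions (in km along a
# road). All data and parameter values are made up for this sketch.

def kde(positions, x, bandwidth_km=0.5):
    """Gaussian kernel density estimate at location x."""
    n = len(positions)
    return sum(
        math.exp(-0.5 * ((x - p) / bandwidth_km) ** 2)
        for p in positions
    ) / (n * bandwidth_km * math.sqrt(2 * math.pi))

def hotspot(positions, road_length_km, step_km=0.1):
    """Grid-scan the road and return the location of maximum density."""
    grid = [i * step_km for i in range(int(road_length_km / step_km) + 1)]
    return max(grid, key=lambda x: kde(positions, x))
```

    For example, records clustered around km 3 of a 10 km road produce a density peak near km 3, which is the kind of cluster a mitigation authority would act on.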

  14. Experimental investigation of precision grinding oriented to achieve high process efficiency for large and middle-scale optic

    NASA Astrophysics Data System (ADS)

    Li, Ping; Jin, Tan; Guo, Zongfu; Lu, Ange; Qu, Meina

    2016-10-01

    High efficiency machining of large precision optical surfaces is a challenging task for researchers and engineers worldwide. Higher form accuracy and lower subsurface damage help to significantly reduce the cycle time for the following polishing process, save production cost, and provide a strong enabling technology to support large telescope and laser energy fusion projects. In this paper, employing an Infeed Grinding (IG) mode with a rotary table and a cup wheel, a multi-stage grinding process chain, as well as precision compensation technology, a Φ300 mm diameter plano mirror is ground by the Schneider Surfacing Center SCG 600, which delivers a new level of quality and accuracy when grinding such large flats. Results show a PV form error of Pt < 2 μm, surface roughness Ra < 30 nm and Rz < 180 nm, subsurface damage < 20 μm, and material removal rates of up to 383.2 mm³/s.

  15. Staged, High-Pressure Oxy-Combustion Technology: Development and Scale-Up

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Axelbaum, Richard; Kumfer, Benjamin; Gopan, Akshay

    The immediate need for a high efficiency, low cost carbon capture process has prompted the recent development of pressurized oxy-combustion. With a greater combustion pressure the dew point of the flue gas is increased, allowing for effective integration of the latent heat of flue gas moisture into the Rankine cycle. This increases the net plant efficiency and reduces costs. A novel, transformational process, named Staged, Pressurized Oxy-Combustion (SPOC), achieves additional step changes in efficiency and cost reduction by significantly reducing the recycle of flue gas. The research and development activities conducted under Phases I and II of this project (FE0009702) include: SPOC power plant cost and performance modeling, CFD-assisted design of pressurized SPOC boilers, theoretical analysis of radiant heat transfer and ash deposition, boiler materials corrosion testing, construction of a 100 kWth POC test facility, and experimental testing. The results of this project have advanced the technology readiness level (TRL) of the SPOC technology from 1 to 5.

  16. Source-gated transistors for order-of-magnitude performance improvements in thin-film digital circuits

    NASA Astrophysics Data System (ADS)

    Sporea, R. A.; Trainor, M. J.; Young, N. D.; Shannon, J. M.; Silva, S. R. P.

    2014-03-01

    Ultra-large-scale integrated (ULSI) circuits have benefited from successive refinements in device architecture for enormous improvements in speed, power efficiency and areal density. In large-area electronics (LAE), however, the basic building-block, the thin-film field-effect transistor (TFT), has largely remained static. Now, a device concept with fundamentally different operation, the source-gated transistor (SGT), opens the possibility of unprecedented functionality in future low-cost LAE. With its simple structure and operational characteristics of low saturation voltage, stability under electrical stress and large intrinsic gain, the SGT is ideally suited for LAE analog applications. Here, we show using measurements on polysilicon devices that these characteristics lead to substantial improvements in gain, noise margin, power-delay product and overall circuit robustness in digital SGT-based designs. These findings have far-reaching consequences, as LAE will form the technological basis for a variety of future developments in the biomedical, civil engineering, remote sensing and artificial skin areas, as well as wearable and ubiquitous computing, or lightweight applications for space exploration.

  17. Source-gated transistors for order-of-magnitude performance improvements in thin-film digital circuits

    PubMed Central

    Sporea, R. A.; Trainor, M. J.; Young, N. D.; Shannon, J. M.; Silva, S. R. P.

    2014-01-01

    Ultra-large-scale integrated (ULSI) circuits have benefited from successive refinements in device architecture for enormous improvements in speed, power efficiency and areal density. In large-area electronics (LAE), however, the basic building-block, the thin-film field-effect transistor (TFT), has largely remained static. Now, a device concept with fundamentally different operation, the source-gated transistor (SGT), opens the possibility of unprecedented functionality in future low-cost LAE. With its simple structure and operational characteristics of low saturation voltage, stability under electrical stress and large intrinsic gain, the SGT is ideally suited for LAE analog applications. Here, we show using measurements on polysilicon devices that these characteristics lead to substantial improvements in gain, noise margin, power-delay product and overall circuit robustness in digital SGT-based designs. These findings have far-reaching consequences, as LAE will form the technological basis for a variety of future developments in the biomedical, civil engineering, remote sensing and artificial skin areas, as well as wearable and ubiquitous computing, or lightweight applications for space exploration. PMID:24599023

  18. The relative efficiency of modular and non-modular networks of different size

    PubMed Central

    Tosh, Colin R.; McNally, Luke

    2015-01-01

    Most biological networks are modular, but previous work with small model networks has indicated that modularity does not necessarily lead to increased functional efficiency. Most biological networks are large, however, and here we examine the relative functional efficiency of modular and non-modular neural networks at a range of sizes. We conduct a detailed analysis of efficiency in networks of two size classes, ‘small’ and ‘large’, and a less detailed analysis across a range of network sizes. The former analysis reveals that while the modular network is less efficient than one of the two non-modular networks considered when networks are small, it is usually equally or more efficient than both non-modular networks when networks are large. The latter analysis shows that in networks of small to intermediate size, modular networks are much more efficient than non-modular networks of the same (low) connective density. If connective density must be kept low to reduce energy needs, for example, this could promote modularity. We have shown how relative functionality/performance scales with network size, but the precise nature of the evolutionary relationship between network size and prevalence of modularity will depend on the costs of connectivity. PMID:25631996

  19. Facile Synthesis of Monodisperse Gold Nanocrystals Using Virola oleifera

    NASA Astrophysics Data System (ADS)

    Milaneze, Bárbara A.; Oliveira, Jairo P.; Augusto, Ingrid; Keijok, Wanderson J.; Côrrea, Andressa S.; Ferreira, Débora M.; Nunes, Otalíbio C.; Gonçalves, Rita de Cássia R.; Kitagawa, Rodrigo R.; Celante, Vinícius G.; da Silva, André Romero; Pereira, Ana Claudia H.; Endringer, Denise C.; Schuenck, Ricardo P.; Guimarães, Marco C. C.

    2016-10-01

    The development of new routes and strategies for nanotechnology applications that only employ green synthesis has inspired investigators to devise natural systems. Among these systems, the synthesis of gold nanoparticles using plant extracts has been actively developed as an alternative, efficient, cost-effective, and environmentally safe method for producing nanoparticles, and this approach is also suitable for large-scale synthesis. This study reports reproducible and completely natural gold nanocrystals that were synthesized using Virola oleifera extract. V. oleifera resin is rich in epicatechin, ferulic acid, gallic acid, and flavonoids (i.e., quercetin and eriodictyol). These gold nanoparticles play three roles. First, these nanoparticles exhibit remarkable stability based on their zeta potential. Second, these nanoparticles are functionalized with flavonoids, and third, an efficient, economical, and environmentally friendly mechanism can be employed to produce green nanoparticles with organic compounds on the surface. Our model is capable of reducing the resin of V. oleifera, which creates stability and opens a new avenue for biological applications. This method does not require painstaking conditions or hazardous agents and is a rapid, efficient, and green approach for the fabrication of monodisperse gold nanoparticles.

  20. Characterization of the ETEL D784UKFLB 11 in. photomultiplier tube

    NASA Astrophysics Data System (ADS)

    Barros, N.; Kaptanoglu, T.; Kimelman, B.; Klein, J. R.; Moore, E.; Nguyen, J.; Stavreva, K.; Svoboda, R.

    2017-04-01

    Water Cherenkov and scintillator detectors are a critical tool for neutrino physics. Their large size, low threshold, and low operational cost make them excellent detectors for long baseline neutrino oscillations, proton decay, supernova and solar neutrinos, double beta decay, and ultra-high energy astrophysical neutrinos. Proposals for a new generation of large detectors rely on the availability of large format, fast, cost-effective photomultiplier tubes. The Electron Tubes Enterprises, Ltd (ETEL) D784KFLB 11 in. Photomultiplier Tube has been developed for large neutrino detectors. We have measured the timing characteristics, relative efficiency, and magnetic field sensitivity of the first fifteen prototypes.

  1. Expression of recombinant staphylokinase in the methylotrophic yeast Hansenula polymorpha

    PubMed Central

    2012-01-01

    Background Currently, the two most commonly used fibrinolytic agents in thrombolytic therapy are recombinant tissue plasminogen activator (rt-PA) and streptokinase (SK). Whereas SK has the advantage of substantially lower costs when compared to other agents, it is less effective than either rt-PA or related variants, has significant allergenic potential, lacks fibrin selectivity and causes transient hypotensive effects in high dosing schedules. Therefore, development of an alternative fibrinolytic agent having superior efficacy to SK, approaching that of rt-PA, together with a similar or enhanced safety profile and advantageous cost-benefit ratio, would be of substantial importance. Pre-clinical data suggest that the novel fibrinolytic recombinant staphylokinase (rSAK), or related rSAK variants, could be candidates for such development. However, since an efficient expression system for rSAK is still lacking, it has not yet been fully developed or evaluated for clinical purposes. This study’s goal was development of an efficient fermentation process for the production of a modified, non-glycosylated, biologically active rSAK, namely rSAK-2, using the well-established single-cell yeast Hansenula polymorpha expression system. Results The development of an efficient large-scale (80 L) Hansenula polymorpha fermentation process of short duration for rSAK-2 production is described. It evolved from an initial 1 mL HTP methodology by successive scale-up over almost 5 orders of magnitude and improvement steps, including the optimization of critical process parameters (e.g. temperature, pH, feeding strategy, medium composition, etc.). Potential glycosylation of rSAK-2 was successfully suppressed through amino acid substitution within its only N-acetyl glycosylation motif. Expression at high yields (≥ 1 g rSAK-2/L cell culture broth) of biologically active rSAK-2 of expected molecular weight was achieved.
    Conclusion The optimized production process described for rSAK-2 in Hansenula polymorpha provides an excellent, economically superior manufacturing platform for a promising therapeutic fibrinolytic agent. PMID:23253823

  2. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  3. A phasing and imputation method for pedigreed populations that results in a single-stage genomic evaluation

    PubMed Central

    2012-01-01

    Background Efficient, robust, and accurate genotype imputation algorithms make large-scale application of genomic selection cost effective. An algorithm that imputes alleles or allele probabilities for all animals in the pedigree and for all genotyped single nucleotide polymorphisms (SNP) provides a framework to combine all pedigree, genomic, and phenotypic information into a single-stage genomic evaluation. Methods An algorithm was developed for imputation of genotypes in pedigreed populations that allows imputation for completely ungenotyped animals and for low-density genotyped animals, accommodates a wide variety of pedigree structures for genotyped animals, imputes unmapped SNP, and works for large datasets. The method involves simple phasing rules, long-range phasing and haplotype library imputation and segregation analysis. Results Imputation accuracy was high and computational cost was feasible for datasets with pedigrees of up to 25 000 animals. The resulting single-stage genomic evaluation increased the accuracy of estimated genomic breeding values compared to a scenario in which phenotypes on relatives that were not genotyped were ignored. Conclusions The developed imputation algorithm and software and the resulting single-stage genomic evaluation method provide powerful new ways to exploit imputation and to obtain more accurate genetic evaluations. PMID:22462519
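
    One of the "simple phasing rules" mentioned above can be made concrete: a homozygous parent transmits a known allele, so an ungenotyped offspring of two homozygous parents can be imputed exactly. The 0/1/2 genotype coding and the function names below are invented for illustration and are not from the paper's software.

```python
# Toy illustration of one simple pedigree-imputation rule.
# Genotypes are coded 0/1/2 = count of the alternate allele.

def transmitted_allele(parent_genotype):
    """Return the allele a parent must transmit, or None if ambiguous."""
    if parent_genotype == 0:
        return 0
    if parent_genotype == 2:
        return 1
    return None  # heterozygous parent: transmission is ambiguous

def impute_offspring(sire_genotype, dam_genotype):
    """Impute an offspring genotype when both transmissions are certain."""
    a = transmitted_allele(sire_genotype)
    b = transmitted_allele(dam_genotype)
    if a is None or b is None:
        return None  # would fall through to long-range phasing, etc.
    return a + b
```

    The paper's full algorithm layers long-range phasing, haplotype library imputation, and segregation analysis on top of deterministic rules like this one to handle the ambiguous cases.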

  4. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data.

    PubMed

    Li, Wenyuan; Gong, Ke; Li, Qingjiao; Alber, Frank; Zhou, Xianghong Jasmine

    2015-03-15

    Genome-wide proximity ligation assays, e.g. Hi-C and its variant TCC, have recently become important tools to study spatial genome organization. Removing biases from chromatin contact matrices generated by such techniques is a critical preprocessing step of subsequent analyses. The continuing decline of sequencing costs has led to an ever-improving resolution of Hi-C data, resulting in very large matrices of chromatin contacts. Such large matrices, however, pose a great challenge to the memory usage and speed of normalization. Therefore, there is an urgent need for fast and memory-efficient methods for normalization of Hi-C data. We developed Hi-Corrector, an easy-to-use, open source implementation of the Hi-C data normalization algorithm. Its salient features are (i) scalability: the software is capable of normalizing Hi-C data of any size in reasonable time; (ii) memory efficiency: the sequential version can run on any single computer with very limited memory, no matter how little; (iii) speed: the parallel version can run very fast on multiple computing nodes with limited local memory. The sequential version is implemented in ANSI C and can be easily compiled on any system; the parallel version is implemented in ANSI C with the MPI library (a standardized and portable parallel environment designed for solving large-scale scientific problems). The package is freely available at http://zhoulab.usc.edu/Hi-Corrector/. © The Author 2014. Published by Oxford University Press.
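The kind of matrix-balancing normalization Hi-Corrector implements can be sketched in a few lines (an illustrative ICE-style iteration in Python/NumPy, not Hi-Corrector's ANSI C/MPI code): each bin's coverage bias is estimated and divided out until all rows have roughly equal totals.

```python
import numpy as np

def iterative_correction(contacts, iterations=50):
    """Balance a symmetric contact matrix so every bin has roughly equal
    total coverage; returns the corrected matrix and the per-bin biases."""
    m = np.asarray(contacts, dtype=float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(iterations):
        coverage = m.sum(axis=1)
        coverage /= coverage[coverage > 0].mean()  # keep overall scale stable
        coverage[coverage == 0] = 1.0              # leave empty bins untouched
        m /= coverage[:, None]                     # divide rows by bias
        m /= coverage[None, :]                     # divide columns by bias
        bias *= coverage
    return m, bias
```

Because each update only needs row sums and a row-wise division, the computation can be applied one block of rows at a time, which is the kind of structure that makes limited-memory sequential and MPI-parallel implementations feasible.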

  5. NOVEL OXIDANT FOR ELEMENTAL MERCURY CONTROL FROM FLUE GAS

    EPA Science Inventory

    The primary objective of this study is to develop and test advanced noncarbonaceous solid sorbent materials suitable for removing the elemental form of mercury from power plant emissions. An efficient and cost-effective novel Hg(0) oxidant was evaluated in a lab-scale fixed-bed ...

  6. Three essays in transportation energy and environmental policy

    NASA Astrophysics Data System (ADS)

    Hajiamiri, Sara

    Concerns about climate change, dependence on oil, and unstable gasoline prices have led to significant efforts by policymakers to cut greenhouse gas (GHG) emissions and oil consumption. The transportation sector is one of the principal emitters of CO2 in the US. It accounts for two-thirds of total U.S. oil consumption and is almost entirely dependent on oil. Within the transportation sector, the light-duty vehicle (LDV) fleet is the main culprit. It is responsible for more than 65 percent of the oil used and for more than 60 percent of total GHG emissions. If a significant fraction of the LDV fleet is gradually replaced by more fuel-efficient technologies, meaningful reductions in GHG emissions and oil consumption will be achieved. This dissertation investigates the potential benefits and impacts of deploying more fuel-efficient vehicles in the LDV fleet. Findings can inform decisions surrounding the development and deployment of the next generation of LDVs. The first essay uses data on 2003 and 2006 model gasoline-powered passenger cars, light trucks and sport utility vehicles to investigate the implicit private cost of improving vehicle fuel efficiencies through reducing other desired attributes such as weight (that is valued for its perceived effect on personal safety) and horsepower. Breakeven gasoline prices that would justify the estimated implicit costs were also calculated. It is found that to justify higher fuel efficiency standards from a consumer perspective, either the external benefits need to be very large or technological advances will need to greatly reduce fuel efficiency costs. The second essay estimates the private benefits and societal impacts of electric vehicles. The findings from the analysis contribute to policy deliberations on how to incentivize the purchase and production of these vehicles.
A spreadsheet model was developed to estimate the private benefits and societal impacts of purchasing and utilizing three electric vehicle technologies instead of a similar-sized conventional gasoline-powered vehicle (CV). The electric vehicle technologies considered are gasoline-powered hybrid and plug-in hybrid electric vehicles and battery electric vehicles. It is found that the private benefits are positive, but smaller than the expected short-term cost premiums on these technologies, which suggests the need for government support if a large-scale adoption of electric vehicles is desired. Also, it is found that the net present values of the societal benefits that are not internalized by the vehicle purchaser are not likely to exceed $1,700. This estimate accounts for changes in GHG emissions, criteria air pollutants, gasoline consumption and the driver's contribution to congestion. The third essay explores the implications of a large-scale adoption of electric vehicles on transportation finance. While fuel efficiency improvements are desirable with respect to goals for achieving energy security and environmental improvement, they have adverse implications for the current system of transportation finance. Reductions in gasoline consumption relative to the amount of driving that takes place would result in a decline in fuel tax revenues that are needed to fund planning, construction, maintenance, and operation of highways and public transit systems. In this paper the forgone fuel tax revenue that results when an electric vehicle replaces a similar-sized CV is estimated. It is found that under several vehicle electrification scenarios, the combined federal and state trust funds could decline by as much as 5 percent by 2020 and as much as 12.5 percent by 2030. Alternative fee systems that tie more directly to transportation system use rather than to fuel consumption could reconcile energy security, environmental, and transportation finance goals.
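The breakeven-price idea in the first essay can be made concrete with a back-of-the-envelope sketch (hypothetical parameter names and values throughout; this is not the dissertation's model): the breakeven gasoline price is the price at which discounted lifetime fuel savings just offset the up-front cost of the efficiency improvement.

```python
def breakeven_gas_price(cost_premium, miles_per_year, mpg_base, mpg_new,
                        years, discount_rate):
    """Gasoline price ($/gal) at which the discounted fuel savings of the
    more efficient vehicle equal its up-front cost premium."""
    # Gallons saved per year from the mpg improvement
    gallons_saved = miles_per_year * (1.0 / mpg_base - 1.0 / mpg_new)
    # Present-value factor for a constant annual saving over the vehicle life
    annuity = sum((1.0 + discount_rate) ** -t for t in range(1, years + 1))
    return cost_premium / (gallons_saved * annuity)

# e.g. a $2,000 premium, 12,000 mi/yr, 25 -> 30 mpg, 10 years, no discounting
# breaks even at $2.50/gal; discounting future savings raises the required price.
```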

  7. Body frame close coupling wave packet approach to gas phase atom-rigid rotor inelastic collisions

    NASA Technical Reports Server (NTRS)

    Sun, Y.; Judson, R. S.; Kouri, D. J.

    1989-01-01

    The close coupling wave packet (CCWP) method is formulated in a body-fixed representation for atom-rigid rotor inelastic scattering. For J greater than j-max (where J is the total angular momentum and j is the rotational quantum number), the computational cost of propagating the coupled channel wave packets in the body frame is shown to scale approximately as N^(3/2), where N is the total number of channels. For large numbers of channels, this will be much more efficient than the space frame CCWP method previously developed, which scales approximately as N^2 under the same conditions.
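The quoted scalings imply a body-frame advantage that grows as the square root of the channel count; a one-line sketch (illustrative only) makes the ratio explicit.

```python
def space_to_body_cost_ratio(n_channels):
    """Ratio of space-frame (~N^2) to body-frame (~N^(3/2)) propagation
    cost for N coupled channels; equals sqrt(N)."""
    return n_channels ** 2 / n_channels ** 1.5

# e.g. with 10,000 channels the body-frame propagation is roughly 100x cheaper
```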

  8. Technical- and environmental-efficiency analysis of irrigated cotton-cropping systems in Punjab, Pakistan using data envelopment analysis.

    PubMed

    Ullah, Asmat; Perret, Sylvain R

    2014-08-01

    Cotton cropping in Pakistan uses substantial quantities of resources and adversely affects the environment with pollutants from the inputs, particularly pesticides. A question remains regarding to what extent such environmental impacts can be reduced without compromising farmers' income. This paper investigates the environmental, technical, and economic performances of selected irrigated cotton-cropping systems in Punjab to quantify the sustainability of cotton farming and reveal options for improvement. Using mostly primary data, our study quantifies the technical, cost, and environmental efficiencies of different farm sizes. A set of indicators has been computed to reflect these three domains of efficiency using the data envelopment analysis technique. The results indicate that farmers are broadly environmentally inefficient, which primarily results from technical inefficiency. Based on an improved input mix, the average potential environmental impact reduction for small, medium, and large farms is 9, 13, and 11 %, respectively, without compromising the economic return. Moreover, the differences in technical, cost, and environmental efficiencies between small and medium farms and between small and large farms were statistically significant. The second-stage regression analysis identifies that farm size significantly affects the efficiencies, that exposure to extension and training has positive effects, and that sowing methods significantly affect the technical and environmental efficiencies. Paradoxically, the formal education level is found to affect the efficiencies negatively. This paper discusses policy interventions that can improve technical efficiency to ultimately increase environmental efficiency and reduce farmers' operating costs.
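The data envelopment analysis technique used here can be sketched as an input-oriented CCR model solved with a generic LP solver (a minimal illustration of the standard formulation, not the authors' model; SciPy is assumed available): efficiency is the largest radial input contraction theta that a convex combination of peer farms could match while producing at least the same outputs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, dmu):
    """Input-oriented CCR efficiency of one decision-making unit (DMU).
    inputs: (m, n) array, outputs: (s, n) array, n = number of DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [theta, lam_1, ..., lam_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_in = np.hstack([-X[:, [dmu]], X])          # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, dmu]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun
```

`dea_efficiency` returns 1.0 for units on the efficient frontier and a value below 1.0 for inefficient units, indicating how far inputs could be contracted.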

  9. Micro-fabrication method of graphite mesa microdevices based on optical lithography technology

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wen, Donghui; Zhu, Huamin; Zhang, Xiaorui; Yang, Xing; Shi, Yunsheng; Zheng, Tianxiang

    2017-12-01

    Graphite mesa microdevices have incommensurate contact nanometer interfaces, superlubricity, high-speed self-retraction, and other characteristics, which give them potential applications in high-performance oscillators and micro-scale switches, memory devices, and gyroscopes. However, the current method of fabricating graphite mesa microdevices is mainly based on high-cost, low-efficiency electron beam lithography. In this paper, the processing technologies of graphite mesa microdevices with various shapes and sizes were investigated using a low-cost micro-fabrication method based mainly on optical lithography. The characterization results showed that optical lithography could realize large-area patterning on the graphite surface, and that graphite mesa microdevices with a regular shape, neat arrangement, and high verticality could be fabricated in large batches. The experiments and analyses showed that graphite mesa microdevices fabricated through optical lithography have essentially the same self-retracting characteristics as those fabricated through electron beam lithography, and that the maximum size of graphite mesa microdevices exhibiting the self-retracting phenomenon can reach 10 µm × 10 µm. Therefore, the proposed method can realize high-efficiency, low-cost processing of graphite mesa microdevices, which is significant for their batch fabrication and application.

  10. High-performance flat-panel solar thermoelectric generators with high thermal concentration.

    PubMed

    Kraemer, Daniel; Poudel, Bed; Feng, Hsien-Ping; Caylor, J Christopher; Yu, Bo; Yan, Xiao; Ma, Yi; Wang, Xiaowei; Wang, Dezhi; Muto, Andrew; McEnaney, Kenneth; Chiesa, Matteo; Ren, Zhifeng; Chen, Gang

    2011-05-01

    The conversion of sunlight into electricity has been dominated by photovoltaic and solar thermal power generation. Photovoltaic cells are deployed widely, mostly as flat panels, whereas solar thermal electricity generation relying on optical concentrators and mechanical heat engines is only seen in large-scale power plants. Here we demonstrate a promising flat-panel solar thermal to electric power conversion technology based on the Seebeck effect and high thermal concentration, thus enabling wider applications. The developed solar thermoelectric generators (STEGs) achieved a peak efficiency of 4.6% under AM1.5G (1 kW m(-2)) conditions. The efficiency is 7-8 times higher than the previously reported best value for a flat-panel STEG, and is enabled by the use of high-performance nanostructured thermoelectric materials and spectrally-selective solar absorbers in an innovative design that exploits high thermal concentration in an evacuated environment. Our work opens up a promising new approach which has the potential to achieve cost-effective conversion of solar energy into electricity. © 2011 Macmillan Publishers Limited. All rights reserved

  11. Technical difficulties and solutions of direct transesterification process of microbial oil for biodiesel synthesis.

    PubMed

    Yousuf, Abu; Khan, Maksudur Rahman; Islam, M Amirul; Wahid, Zularisam Ab; Pirozzi, Domenico

    2017-01-01

    Microbial oils are considered as alternatives to vegetable oils or animal fats as biodiesel feedstock. Microalgae and oleaginous yeasts are the main candidates among microbial oil producers. However, biodiesel synthesis from these sources is associated with high cost and process complexity. The traditional transesterification method includes several steps such as biomass drying, cell disruption, oil extraction and solvent recovery. Therefore, direct transesterification, or in situ transesterification, which combines all the steps in a single reactor, has been suggested to make the process cost effective. Nevertheless, the process is not yet applicable to large-scale biodiesel production because of several difficulties: the high water content of the biomass slows the reaction rate, and the difficulty of cell disruption lowers the efficiency of oil extraction. Additionally, it requires high heating energy in the solvent extraction and recovery stage. To resolve these difficulties, this review suggests the application of antimicrobial peptides and high electric fields to foster microbial cell wall disruption.

  12. Proteomic analyses bring new insights into the effect of a dark stress on lipid biosynthesis in Phaeodactylum tricornutum

    PubMed Central

    Bai, Xiaocui; Song, Hao; Lavoie, Michel; Zhu, Kun; Su, Yiyuan; Ye, Hanqi; Chen, Si; Fu, Zhengwei; Qian, Haifeng

    2016-01-01

    Microalgae biosynthesize high amounts of lipids and show high potential for renewable biodiesel production. However, the production cost of microalgae-derived biodiesel hampers large-scale biodiesel commercialization, and new strategies for increasing the efficiency of lipid production from algae are urgently needed. Here we subjected the marine alga Phaeodactylum tricornutum to a 4-day dark stress, a condition that increased total lipid cell quotas 2.3-fold, and studied the cellular mechanisms leading to lipid accumulation using a combination of physiological, proteomic (iTRAQ) and genomic (qRT-PCR) approaches. Our results show that the expression of proteins in the biochemical pathways of glycolysis and fatty acid synthesis was induced in the dark, potentially using excess carbon and nitrogen produced from protein breakdown. Treatment of algae in the dark, which increased algal lipid cell quotas at low cost, combined with optimal growth treatment could help optimize biodiesel production. PMID:27147218

  13. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of highperformance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance.
To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.

  14. Dye Sensitized Solar Cells for Economically Viable Photovoltaic Systems.

    PubMed

    Jung, Hyun Suk; Lee, Jung-Kun

    2013-05-16

    TiO2 nanoparticle-based dye sensitized solar cells (DSSCs) have attracted a significant level of scientific and technological interest for their potential as economically viable photovoltaic devices. While DSSCs have multiple benefits such as material abundance, a short energy payback period, constant power output, and compatibility with flexible applications, there are still several challenges that hold back large-scale commercialization. Critical factors determining the future of DSSCs involve energy conversion efficiency, long-term stability, and production cost. Continuous advancement of their long-term stability suggests that state-of-the-art DSSCs will operate for over 20 years without a significant decrease in performance. Nevertheless, key questions remain in regards to energy conversion efficiency improvements and material cost reduction. In this Perspective, the present state of the field and the ongoing efforts to address the requirements of DSSCs are summarized with views on the future of DSSCs.

  15. A Rotifer-Based Technique to Rear Zebrafish Larvae in Small Academic Settings.

    PubMed

    Allen, Raymond L; Wallace, Robert L; Sisson, Barbara E

    2016-08-01

    Raising zebrafish from larvae to juveniles can be laborious, requiring frequent water exchanges and continuous culturing of live feed. This task becomes even more difficult for small institutions that do not have access to the necessary funding, equipment, or personnel to maintain large-scale systems usually employed in zebrafish husbandry. To open this opportunity to smaller institutions, a cost-efficient protocol was developed to culture Nannochloropsis to feed the halophilic, planktonic rotifer Brachionus plicatilis; the rotifers were then used to raise larval zebrafish to juveniles. By using these methods, small institutions can easily raise zebrafish embryos in a cost-efficient manner without the need to establish an extensive fish-raising facility. In addition, culturing rotifers provides a micrometazoan that serves as a model organism for teaching and undergraduate research studies for a variety of topics, including aging, toxicology, and predator-prey dynamics.

  16. Convergent evolution in locomotory patterns of flying and swimming animals.

    PubMed

    Gleiss, Adrian C; Jorgensen, Salvador J; Liebsch, Nikolai; Sala, Juan E; Norman, Brad; Hays, Graeme C; Quintana, Flavio; Grundy, Edward; Campagna, Claudio; Trites, Andrew W; Block, Barbara A; Wilson, Rory P

    2011-06-14

    Locomotion is one of the major energetic costs faced by animals and various strategies have evolved to reduce its cost. Birds use interspersed periods of flapping and gliding to reduce the mechanical requirements of level flight while undergoing cyclical changes in flight altitude, known as undulating flight. Here we equipped free-ranging marine vertebrates with accelerometers and demonstrate that gait patterns resembling undulating flight occur in four marine vertebrate species comprising sharks and pinnipeds. Both sharks and pinnipeds display intermittent gliding interspersed with powered locomotion. We suggest that the convergent use of similar gait patterns by distinct groups of animals points to universal physical and physiological principles that operate beyond taxonomic limits and shape common solutions to increase energetic efficiency. Energetically expensive large-scale migrations performed by many vertebrates provide common selection pressure for efficient locomotion, with potential for the convergence of locomotory strategies by a wide variety of species.

  17. Lead (Pb) Hohlraum: Target for Inertial Fusion Energy

    PubMed Central

    Ross, J. S.; Amendt, P.; Atherton, L. J.; Dunne, M.; Glenzer, S. H.; Lindl, J. D.; Meeker, D.; Moses, E. I.; Nikroo, A.; Wallace, R.

    2013-01-01

    Recent progress towards demonstrating inertial confinement fusion (ICF) ignition at the National Ignition Facility (NIF) has sparked wide interest in Laser Inertial Fusion Energy (LIFE) for carbon-free large-scale power generation. A LIFE-based fleet of power plants promises clean energy generation with no greenhouse gas emissions and a virtually limitless, widely available thermonuclear fuel source. For the LIFE concept to be viable, target costs must be minimized while the target material efficiency or x-ray albedo is optimized. Current ICF targets on the NIF utilize a gold or depleted uranium cylindrical radiation cavity (hohlraum) with a plastic capsule at the center that contains the deuterium and tritium fuel. Here we show a direct comparison of gold and lead hohlraums in efficiently ablating deuterium-filled plastic capsules with soft x rays. We report on lead hohlraum performance that is indistinguishable from gold, yet costing only a small fraction. PMID:23486285

  19. AgBr/diatomite for the efficient visible-light-driven photocatalytic degradation of Rhodamine B

    NASA Astrophysics Data System (ADS)

    Fang, Jing; Zhao, Huamei; Liu, Qinglei; Zhang, Wang; Gu, Jiajun; Su, Yishi; Abbas, Waseem; Su, Huilan; You, Zhengwei; Zhang, Di

    2018-03-01

    The treatment of organic pollution via photocatalysis has been investigated for a few decades. However, earth-abundant, cheap, stable, and efficient substrates are still to be developed. Here, we prepare an efficient visible-light-driven photocatalyst via the deposition of Ag nanoparticles (< 60 nm) on diatomite and the conversion of Ag to AgBr nanoparticles (< 600 nm). Experimental results show that 95% of Rhodamine B could be removed within 20 min, and the degradation rate constant (κ) is 0.11 min^-1 under 100 mW/cm2 light intensity. For comparison, AgBr/SiO2 (κ = 0.04 min^-1) and commercial AgBr nanoparticles (κ = 0.05 min^-1) were measured as well. The experimental results reveal that diatomite acted as more than a substrate: besides benefiting the dispersion of the AgBr nanoparticles, it also cooperated in harvesting visible light and adsorbing dye molecules, leading to the efficient visible-light-driven photocatalytic performance of AgBr/diatomite. Considering the low cost (10 per ton) and large-scale availability of diatomite, our study opens the possibility of preparing other types of diatomite-based photocatalytic composites with low cost but excellent photocatalytic performance.
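The reported rate constants describe pseudo-first-order kinetics, so the catalysts can be compared through their half-lives (a simple arithmetic sketch using the paper's κ values):

```python
import math

# Pseudo-first-order rate constants (min^-1) reported for Rhodamine B
# degradation under 100 mW/cm2 illumination.
RATE_CONSTANTS = {"AgBr/diatomite": 0.11, "AgBr/SiO2": 0.04, "AgBr": 0.05}

def half_life(k):
    """Half-life in minutes of a first-order decay, t_1/2 = ln(2) / k."""
    return math.log(2.0) / k
```

The diatomite composite halves the dye concentration roughly every 6.3 min, versus about 17.3 min for AgBr/SiO2 and 13.9 min for the commercial nanoparticles, a 2-3x speedup consistent with the reported κ ratios.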

  20. Stochastic injection-strategy optimization for the preliminary assessment of candidate geological storage sites

    NASA Astrophysics Data System (ADS)

    Cody, Brent M.; Baù, Domenico; González-Nicolás, Ana

    2015-09-01

    Geological carbon sequestration (GCS) has been identified as having the potential to reduce increasing atmospheric concentrations of carbon dioxide (CO2). However, a global impact will only be achieved if GCS is cost-effectively and safely implemented on a massive scale. This work presents a computationally efficient methodology for identifying optimal injection strategies at candidate GCS sites having uncertainty associated with caprock permeability, effective compressibility, and aquifer permeability. A multi-objective evolutionary optimization algorithm is used to heuristically determine non-dominated solutions between the following two competing objectives: (1) maximize mass of CO2 sequestered and (2) minimize project cost. A semi-analytical algorithm is used to estimate CO2 leakage mass rather than a numerical model, enabling the study of GCS sites having vastly different domain characteristics. The stochastic optimization framework presented herein is applied to a feasibility study of GCS in a brine aquifer in the Michigan Basin (MB), USA. Eight optimization test cases are performed to investigate the impact of decision-maker (DM) preferences on Pareto-optimal objective-function values and carbon-injection strategies. This analysis shows that the feasibility of GCS at the MB test site is highly dependent upon the DM's risk-aversion preference and degree of uncertainty associated with caprock integrity. Finally, large gains in computational efficiency achieved using parallel processing and archiving are discussed.
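The non-domination test at the heart of such a multi-objective evolutionary search can be sketched directly (an O(n^2) illustration, not the authors' algorithm): a candidate survives only if no other solution stores at least as much CO2 at no greater cost, with at least one strict improvement.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (mass_stored, cost) pairs,
    maximizing stored mass and minimizing project cost."""
    def dominates(a, b):
        # a dominates b: no worse in both objectives, strictly better in one
        return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

# e.g. pareto_front([(10, 5), (8, 3), (10, 6), (7, 7)]) keeps (10, 5) and (8, 3)
```

An evolutionary algorithm repeats this filter each generation over mutated injection strategies, so the surviving set traces out the trade-off curve between stored mass and project cost.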

Top