A simple biosynthetic pathway for large product generation from small substrate amounts
NASA Astrophysics Data System (ADS)
Djordjevic, Marko; Djordjevic, Magdalena
2012-10-01
The recently emerged discipline of synthetic biology aims to construct new biosynthetic pathways with useful biological functions. A major application of these pathways is generating a large amount of a desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced to produce a large number of product molecules while keeping the substrate amount at low levels. Surprisingly, we show that large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as a three-orders-of-magnitude increase in the product amount for biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways.
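As a rough illustration of the induced substrate-to-product kinetics described in this abstract, the Python sketch below integrates a toy two-species mass-action model; the rate constants and the model structure are illustrative assumptions, not the authors' model.

```python
# Toy mass-action model of an induced pathway: substrate S flows in at an
# induced rate k_in, is converted to product P at rate k_conv*S, and is
# removed by fast non-specific degradation d_s*S. All values are assumed.
import numpy as np
from scipy.integrate import odeint

def pathway(y, t, k_in, k_conv, d_s):
    s, p = y
    ds = k_in - (k_conv + d_s) * s   # influx minus conversion and degradation
    dp = k_conv * s                  # product accumulates and is stable
    return [ds, dp]

t = np.linspace(0.0, 100.0, 1001)
sol = odeint(pathway, [0.0, 0.0], t, args=(10.0, 0.5, 5.0))
# Fast degradation caps the substrate at k_in/(k_conv + d_s), while the
# product keeps accumulating under sustained induction.
print(f"substrate ~ {sol[-1, 0]:.2f}, product ~ {sol[-1, 1]:.1f}")
```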
Querying Large Biological Network Datasets
ERIC Educational Resources Information Center
Gulsoy, Gunhan
2013-01-01
New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…
Transient Stability of the US Western Interconnection with High Wind and Solar Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Kara; Miller, Nicholas W.; Shao, Miaolei
The addition of large amounts of wind and solar generation to bulk power systems that are traditionally subject to operating constraints set by transient limitations is the subject of considerable concern in the industry. The US Western Interconnection (WI) is expected to experience substantial additional growth in both wind and solar generation. These plants will, to some extent, displace large central station thermal generation, both coal and gas-fired, which have traditionally helped maintain stability. This paper reports the results of a study that investigated the transient stability of the WI with high penetrations of wind and solar generation. The main goals of this work were to (1) create a realistic, baseline model of the WI, (2) test selected transient stability events, (3) investigate the impact of large amounts of wind and solar generation, and (4) examine means to improve performance.
Rungarunlert, Sasitorn; Ferreira, Joao N; Dinnyes, Andras
2016-01-01
Generation of cardiomyocytes from pluripotent stem cells (PSCs) is a common and valuable approach to produce large amounts of cells for various applications, including assays and models for drug development, cell-based therapies, and tissue engineering. All these applications would benefit from a reliable bioreactor-based methodology to consistently generate homogeneous PSC-derived embryoid bodies (EBs) at a large scale, which can further undergo cardiomyogenic differentiation. The goal of this chapter is to describe a scalable method to consistently generate large amounts of homogeneous and synchronized EBs from PSCs. This method utilizes a slow-turning lateral vessel bioreactor to direct EB formation and subsequent cardiomyogenic lineage differentiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Nicholas W.; Shao, Miaolei; Pajic, Slobodan
The addition of large amounts of wind and solar generation to bulk power systems that are traditionally subject to operating constraints set by transient stability and frequency response limitations is the subject of considerable concern in the industry. The US Western Interconnection (WI) is expected to experience substantial additional growth in both wind and solar generation. These plants will, to some extent, displace large central station thermal generation, both coal and gas-fired, which have traditionally helped maintain stability. This paper reports the results of a study that investigated the transient stability and frequency response of the WI with high penetrations of wind and solar generation. The main goals of this work were to (1) create a realistic, baseline model of the WI, (2) test selected transient stability and frequency events, (3) investigate the impact of large amounts of wind and solar generation, and (4) examine means to improve performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Kara; Miller, Nicholas W.; Shao, Miaolei
Adding large amounts of wind and solar generation to bulk power systems that are traditionally subject to operating constraints set by transient stability and frequency response limitations is the subject of considerable concern in the industry. The US Western Interconnection (WI) is expected to experience substantial additional growth in both wind and solar generation. These plants will, to some extent, displace large central station thermal generation, both coal and gas-fired, which have traditionally helped maintain stability. Our paper reports the results of a study that investigated the transient stability and frequency response of the WI with high penetrations of wind and solar generation. The main goals of this work were to (1) create a realistic, baseline model of the WI, (2) test selected transient stability and frequency events, (3) investigate the impact of large amounts of wind and solar generation, and (4) examine means to improve performance.
Stock flow diagram analysis on solid waste management in Malaysia
NASA Astrophysics Data System (ADS)
Zulkipli, Faridah; Nopiah, Zulkifli Mohd; Basri, Noor Ezlin Ahmad; Kie, Cheng Jack
2016-10-01
Effective solid waste management is of major importance to societies. The large amount of solid waste generated by our daily activities poses risks to our communities, driven by rapid population growth and economic development. Moreover, solid waste management is inherently large in scale, diverse, and subject to uncertainty, and it must serve stakeholders with differing objectives. In this paper, we propose a system dynamics simulation by developing a stock flow diagram to illustrate the solid waste generation process and the waste recycling process. The analysis highlights the impact of population growth on the amount of solid waste generated and the amount of recycled waste. The results show that an increase in population, together with an increase in the amount of recycled waste, will decrease the amount of waste generated. This supports the government's aim of minimizing the amount of waste to be disposed of by the year 2020.
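A stock flow structure of this kind is compact enough to sketch directly; the following discrete-time Python loop couples a population stock to waste-generation and recycling flows. All rates and initial values are placeholders, not the study's calibrated data.

```python
# Minimal stock-flow sketch: population drives a waste-generation flow; a
# recycling flow diverts part of it; the rest accumulates in a disposal
# stock. Every number below is an assumed placeholder, not calibrated data.
population = 30e6            # people
waste_per_capita = 1.2e-3    # tonnes of waste per person per day (assumed)
recycle_rate = 0.10          # fraction of generated waste recycled (assumed)
disposed_stock = 0.0         # cumulative disposed waste (tonnes)

for year in range(2016, 2021):
    generated = population * waste_per_capita * 365   # annual generation flow
    recycled = generated * recycle_rate               # recycling flow
    disposed_stock += generated - recycled            # stock accumulation
    population *= 1.015                               # assumed annual growth
    recycle_rate = min(recycle_rate + 0.02, 1.0)      # assumed recycling uptake
    print(f"{year}: generated {generated/1e6:.1f} Mt, "
          f"disposed stock {disposed_stock/1e6:.1f} Mt")
```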
Taking Energy to the Physics Classroom from the Large Hadron Collider at CERN
ERIC Educational Resources Information Center
Cid, Xabier; Cid, Ramon
2009-01-01
In 2008, the greatest experiment in history began. When in full operation, the Large Hadron Collider (LHC) at CERN will generate the greatest amount of information ever produced in an experiment. It will also reveal some of the most fundamental secrets of nature. Despite the enormous amount of information available on this…
Plasma issues associated with the use of electrodynamic tethers
NASA Technical Reports Server (NTRS)
Hastings, D. E.
1986-01-01
The use of an electrodynamic tether to generate power or thrust on the space station raises important plasma issues associated with the current flow. In addition to the issue of current closure through the space station, high power tethers (equal to or greater than tens of kilowatts) require the use of plasma contactors to enhance the current flow. These contactors will generate large amounts of electrostatic turbulence in the vicinity of the space station, because they work best when a large amount of current-driven turbulence is excited. Current work is reviewed and future directions suggested.
ERIC Educational Resources Information Center
Lissaman, P. B. S.
1979-01-01
Detailed are the history, development, and future objectives of the Coriolis program, a project designed to place large turbine units in the Florida Current that would generate large amounts of electric power. (BT)
Wang, Guo-Cang; Sun, Min-Zhuo; Gao, Shu-Fang; Tang, Li
2018-04-26
This organic-rich shale was analyzed to determine the type, origin, maturity and depositional environment of the organic matter and to evaluate the hydrocarbon generation potential of the shale. This study is based on geochemical (total carbon content, Rock-Eval pyrolysis and the molecular composition of hydrocarbons) and whole-rock petrographic (maceral composition) analyses. The petrographic analyses show that the shale penetrated by the Chaiye 2 well contains large amounts of vitrinite and sapropelinite and that the organic matter within these rocks is type III and highly mature. The geochemical analyses show that these rocks are characterized by high total organic carbon contents and that the organic matter is derived from a mix of terrestrial and marine sources and is highly mature. These geochemical characteristics are consistent with the results of the petrographic analyses. The large amounts of organic matter in the Carboniferous shale succession penetrated by the Chaiye 2 well may be due to good preservation under hypersaline lacustrine and anoxic marine conditions. Consequently, the studied shale possesses very good hydrocarbon generation potential because of the presence of large amounts of highly mature type III organic matter.
Profiling Oman education data using data mining approach
NASA Astrophysics Data System (ADS)
Alawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd
2017-10-01
Nowadays, the large amount of data generated by application services in different learning fields has led to new challenges in the education field. An education portal is an important system that can lead to better development of the education field. This research paper presents innovative data mining techniques to understand and summarize the information in Oman's education data generated from the Ministry of Education Oman "Educational Portal". This research embarks on profiling students in the Oman student database. The study utilized the k-means clustering technique to determine the students' profiles. A total of 42,484 student records from the Sultanate of Oman were extracted for this study. The findings show the practicality of the clustering technique for investigating student profiles, allowing for a better understanding of students' behavior and their academic performance. The Oman Education Portal contains large amounts of user activity and interaction data. Analysis of these large data sets can help educators improve student performance and recognize students who need additional attention.
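For concreteness, a k-means profiling pass of the kind described might look as follows with scikit-learn; the synthetic features, feature names, and cluster count are assumptions for illustration, not the study's actual schema.

```python
# Illustrative k-means student profiling on synthetic records; the three
# feature columns and the choice of 4 clusters are assumed, not the study's.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: average grade, attendance rate, portal logins per week (synthetic)
students = rng.normal(loc=[70, 0.85, 5], scale=[12, 0.1, 2], size=(1000, 3))

X = StandardScaler().fit_transform(students)   # put features on one scale
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for label in range(4):
    profile = students[model.labels_ == label].mean(axis=0)
    print(f"cluster {label}: grade={profile[0]:.1f}, "
          f"attendance={profile[1]:.2f}, logins/week={profile[2]:.1f}")
```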
Stager, Ron; Chambers, Douglas; Wiatzka, Gerd; Dupre, Monica; Callough, Micah; Benson, John; Santiago, Erwin; van Veen, Walter
2017-04-01
The Port Hope Area Initiative is a project mandated and funded by the Government of Canada to remediate properties with legacy low-level radioactive waste contamination in the Town of Port Hope, Ontario. The management and use of large amounts of data from surveys of some 4800 properties is a significant task critical to the success of the project. A large amount of information is generated through the surveys, including scheduling individual field visits to the properties, capture of field data, laboratory sample tracking, QA/QC, property report generation and project management reporting. Web-mapping tools were used to track and display temporal progress of various tasks and facilitated consideration of spatial associations of contamination levels. The IM system facilitated the management and integrity of the large amounts of information collected, evaluation of spatial associations, automated report reproduction and consistent application and traceable execution for this project.
Miller, Michael W; Spear, Linda P
2006-09-01
Alcohol exposure largely affects 3 populations: fetuses, adolescents, and adults. These 3 developmental stages are inextricably intertwined such that elevated alcohol exposure at any time increases the probability of exposure at the others. This circular interdependency is called the alcoholism generator. Furthermore, exposure to large amounts of alcohol at these 3 times can cause cognitive dysfunction, largely through mechanisms of alcohol-induced perturbations in neurogenesis and synaptogenesis. Breaking this cycle is key to reducing problem alcohol drinking and the associated sequelae.
Dairy manure biochar as a phosphorus fertilizer
USDA-ARS?s Scientific Manuscript database
Future manure management practices will need to remove large amounts of organic waste as well as harness energy to generate value-added products. Manures can be processed using thermochemical conversion technologies to generate a solid product called biochar. Dairy manure biochars contain sufficient...
A Cost Benefit Analysis of Emerging LED Water Purification Systems in Expeditionary Environments
2017-03-23
the initial contingency response phase, ROWPUs are powered by large generators which require relatively large amounts of fossil fuels. The amount of...they attract and cling together forming a larger particle (Chem Treat, 2016). Flocculation is the addition of a polymer to water that clumps...smaller particles together to form larger particles. The idea for both methods is that larger particles will either settle out of or be removed from the
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
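On item (4): Specht's estimators belong to the Parzen-window family of kernel density estimators, which can be sketched in a few lines; the Gaussian kernel, bandwidth, and test data below are illustrative assumptions.

```python
# Minimal Parzen-window (Gaussian kernel) density estimator, the family to
# which Specht's estimators belong; the bandwidth h is an assumed value.
import numpy as np

def kde(x, samples, h=0.3):
    """Average of Gaussian kernels centred on each sample point."""
    z = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=10_000)  # a large amount of data
grid = np.linspace(-3, 3, 7)
print(np.round(kde(grid, samples), 3))       # approximates the N(0,1) density
```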
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Koopmans, M.P.; Rijpstra, W.I.C.; De Leeuw, J. W.; Lewan, M.D.; Damste, J.S.S.
1998-01-01
An immature (Ro = 0.39%), S-rich (Sorg/C = 0.07), organic matter-rich (19.6 wt.% TOC) limestone from the Ghareb Formation (Upper Cretaceous) in Jordan was artificially matured by hydrous pyrolysis (200, 220, ..., 300 °C; 72 h) to study the effect of progressive diagenesis and early catagenesis on the amounts and distributions of hydrocarbons, organic sulfur compounds and S-rich geomacromolecules. The use of internal standards allowed the determination of absolute amounts. With increasing thermal maturation, large amounts of alkanes and alkylthiophenes with predominantly linear carbon skeletons are generated from the kerogen. The alkylthiophene isomer distributions do not change significantly with increasing thermal maturation, indicating the applicability of alkylthiophenes as biomarkers at relatively high levels of thermal maturity. For a given carbon skeleton, the saturated hydrocarbons, alkylthiophenes and alkylbenzo[b]thiophenes are stable forms at relatively high temperatures, whereas the alkylsulfides are not stable. The large amount of alkylthiophenes produced relative to the alkanes may be explained by the large number of monosulfide links per carbon skeleton. These results are in good agreement with those obtained previously for an artificial maturation series of an immature S-rich sample from the Gessoso-solfifera Formation.
A novel methodology to estimate the evolution of construction waste in construction sites.
Katz, Amnon; Baum, Hadassa
2011-02-01
This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed in order to provide a model that predicts the flow of construction waste. The amount of waste and its constituents produced on 10 relatively large construction sites (7,000-32,000 m^2 of built area) was monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m^3 per 1 m^2 of floor area. A good correlation was found between the model predictions and actual data from the field survey.
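An exponential accumulation curve of this kind can be written down directly. The sketch below normalises the curve so that it reaches the study's figure of 0.2 m^3 per m^2 at project completion; the functional form and the rate constant k are illustrative assumptions, not the paper's fitted model.

```python
# Exponential waste-accumulation sketch, normalised so the cumulative total
# reaches `total` when the project is 100% complete. k = 3.0 is assumed.
import numpy as np

def cumulative_waste(progress, total, k=3.0):
    """Cumulative waste after a fraction `progress` of the project."""
    return total * np.expm1(k * progress) / np.expm1(k)

floor_area = 10_000                  # m^2 of built area (example size)
total = 0.2 * floor_area             # 0.2 m^3 of waste per m^2, from the study
for p in (0.25, 0.5, 0.75, 1.0):
    print(f"{p:>4.0%} complete: {cumulative_waste(p, total):8.0f} m^3")
```

Running this shows the behaviour the abstract describes: small volumes early in the project and rapidly increasing volumes towards completion.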
Experimental quantum computing without entanglement.
Lanyon, B P; Barbieri, M; Almeida, M P; White, A G
2008-11-14
Deterministic quantum computation with one pure qubit (DQC1) is an efficient model of computation that uses highly mixed states. Unlike pure-state models, its power is not derived from the generation of a large amount of entanglement. Instead it has been proposed that other nonclassical correlations are responsible for the computational speedup, and that these can be captured by the quantum discord. In this Letter we implement DQC1 in an all-optical architecture, and experimentally observe the generated correlations. We find no entanglement, but large amounts of quantum discord-except in three cases where an efficient classical simulation is always possible. Our results show that even fully separable, highly mixed, states can contain intrinsically quantum mechanical correlations and that these could offer a valuable resource for quantum information technologies.
Koopmans, M.P.; Rijpstra, W.I.C.; Klapwijk, M.M.; De Leeuw, J. W.; Lewan, M.D.; Sinninghe, Damste J.S.
1999-01-01
A thermal and chemical degradation approach was followed to determine the precursors of pristane (Pr) and phytane (Ph) in samples from the Gessoso-solfifera, Ghareb and Green River Formations. Hydrous pyrolysis of these samples yields large amounts of Pr and Ph carbon skeletons, indicating that their precursors are predominantly sequestered in high-molecular-weight fractions. However, chemical degradation of the polar fraction and the kerogen of the unheated samples generally does not release large amounts of Pr and Ph. Additional information on the precursors of Pr and Ph is obtained from flash pyrolysis analyses of kerogens and residues after hydrous pyrolysis and after chemical degradation. Multiple precursors for Pr and Ph are recognised in these three samples. The main increase of the Pr/Ph ratio with increasing maturation temperature, which is associated with strongly increasing amounts of Pr and Ph, is probably due to the higher amount of precursors of Pr compared to Ph, and not to the different timing of generation of Pr and Ph.
An Internet of Things platform architecture for supporting ambient assisted living environments.
Tsirmpas, Charalampos; Kouris, Ioannis; Anastasiou, Athanasios; Giokas, Kostas; Iliopoulou, Dimitra; Koutsouris, Dimitris
2017-01-01
Internet of Things (IoT) is the logical further development of today's Internet, enabling a huge number of devices to communicate, compute, sense and act. IoT sensors placed in Ambient Assisted Living (AAL) environments enable context awareness and allow the support of the elderly in their daily routines, ultimately allowing an independent and safe lifestyle. The vast amount of data that is generated and exchanged between the IoT nodes requires innovative context modeling approaches that go beyond currently used models. The current paper presents and evaluates an open, interoperable platform architecture that utilizes the technical characteristics of IoT and handles the large amount of generated data, as a solution to the technical requirements of AAL applications.
Development of device producing electrolyzed water for home care
NASA Astrophysics Data System (ADS)
Umimoto, K.; Nagata, S.; Yanagida, J.
2013-06-01
When water containing ionic substances is electrolyzed, electrolyzed water with strong bactericidal ability due to the available chlorine (AC) is generated on the anode side. Slightly acidic to neutral electrolyzed water (pH 6.5 to 7.5) is at physiological pH and is suitable for biological applications. To produce slightly acidic to neutral electrolyzed water simply, a vertical-type electrolytic tank with an asymmetric structure was made. As a result, a small amount of strongly alkaline water was generated in the upper cathodic small chamber, and a large amount of weakly acidic water was generated in the lower anodic large chamber. The pH and AC concentration in the solution mixed from both electrolyzed waters were 6.3 and 39.5 ppm, respectively. This solution was slightly acidic to neutral electrolyzed water and had strong bactericidal activity. This device is useful for producing slightly acidic to neutral electrolyzed water as a disinfectant for use in home care, considering economic and environmental factors, since it returns to ordinary water after use.
New Earth-abundant Materials for Large-scale Solar Fuels Generation.
Prabhakar, Rajiv Ramanujam; Cui, Wei; Tilley, S David
2018-05-30
The solar resource is immense, but the power density of light striking the Earth's surface is relatively dilute, necessitating large area solar conversion devices in order to harvest substantial amounts of power for renewable energy applications. In addition, energy storage is a key challenge for intermittent renewable resources such as solar and wind, which adds significant cost to these energies. As the majority of humanity's present-day energy consumption is based on fuels, an ideal solution is to generate renewable fuels from abundant resources such as sunlight and water. In this account, we detail our recent work towards generating highly efficient and stable Earth-abundant semiconducting materials for solar water splitting to generate renewable hydrogen fuel.
Pollution Prevention Guideline for Academic Laboratories.
ERIC Educational Resources Information Center
Li, Edwin; Barnett, Stanley M.; Ray, Barbara
2003-01-01
Explains how to manage waste after a classroom laboratory experiment which generally has the potential to generate large amounts of waste. Focuses on pollution prevention and the selection processes to eliminate or minimize waste. (YDS)
The NASA Lewis large wind turbine program
NASA Technical Reports Server (NTRS)
Thomas, R. L.; Baldwin, D. H.
1981-01-01
The program is directed toward development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generation systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Advances are made by gaining a better understanding of the system design drivers, improvements in the analytical design tools, verification of design methods with operating field data, and the incorporation of new technology and innovative designs. An overview of the program activities is presented and includes results from the first and second generation field machines (Mod-OA, -1, and -2), the design phase of the third generation wind turbine (Mod-5) and the advanced technology projects. Also included is the status of the Department of Interior WTS-4 machine.
Seo, Seongwon; Hwang, Yongwoo
1999-08-01
Construction and demolition (C&D) debris is generated at the sites of various construction activities. However, the amount of debris is usually so large that it is necessary to estimate it as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process comprised the following steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of future C&D debris production. This method was also applied to the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of the debris were concrete and brick.
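The core of such an estimate is multiplying projected floor areas by per-area debris intensities; a minimal sketch under assumed placeholder numbers (not the Seoul figures) follows.

```python
# Hedged sketch of the estimation pipeline: projected floor areas times
# per-area debris intensities. All numbers are placeholders, not the
# study's Seoul data.
intensity = {            # tonnes of debris per m^2 of floor area (assumed)
    "construction": 0.05,
    "demolition": 1.0,
}
floor_area = {           # projected m^2 of floor area in the target year (assumed)
    "construction": 8e6,
    "demolition": 20e6,
}

total = sum(intensity[k] * floor_area[k] for k in intensity)
share = {k: intensity[k] * floor_area[k] / total for k in intensity}
print(f"projected C&D debris: {total/1e6:.1f} million tonnes")
print(f"demolition share: {share['demolition']:.0%}")
```

With these placeholder inputs, demolition dominates the total, which is consistent with the 98% demolition share reported in the abstract.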
Detection of tiny amounts of fissile materials in large-sized containers with radioactive waste
NASA Astrophysics Data System (ADS)
Batyaev, V. F.; Skliarov, S. V.
2018-01-01
The paper is devoted to non-destructive control of tiny amounts of fissile materials in large-sized containers filled with radioactive waste (RAW). The aim of this work is to model an active neutron interrogation facility for detection of fissile materials inside NZK type containers with RAW and determine the minimal detectable mass of U-235 as a function of various parameters: matrix type, nonuniformity of container filling, neutron generator parameters (flux, pulse frequency, pulse duration), and measurement time. As a result, the dependence of minimal detectable mass on fissile material location inside the container is shown. Nonuniformity of the thermal neutron flux inside a container is the main reason for the space-heterogeneity of the minimal detectable mass inside a large-sized container. Our experiments with tiny amounts of uranium-235 (<1 g) confirm the detection of fissile materials in NZK containers by using the active neutron interrogation technique.
Automating the Generation of the Cassini Tour Atlas Database
NASA Technical Reports Server (NTRS)
Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.
2010-01-01
The Tour Atlas is a large database of geometrical tables, plots, and graphics used by Cassini science planning engineers and scientists primarily for science observation planning. Over time, as the contents of the Tour Atlas grew, the amount of time it took to recreate the Tour Atlas similarly grew--to the point that it took one person a week of effort. When Cassini tour designers estimated that they were going to create approximately 30 candidate Extended Mission trajectories--which needed to be analyzed for science return in a short amount of time--it became a necessity to automate. We report on the automation methodology that reduced the amount of time it took one person to (re)generate a Tour Atlas from a week to, literally, one UNIX command.
Multiresource inventories incorporating GIS, GPS, and database management systems
Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du
2000-01-01
Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...
A DATABASE FOR TRACKING TOXICOGENOMIC SAMPLES AND PROCEDURES
Reproductive toxicogenomic studies generate large amounts of toxicological and genomic data. On the toxicology side, a substantial quantity of data accumulates from conventional endpoints such as histology, reproductive physiology and biochemistry. The largest source of genomics...
Toxic fluoride gas emissions from lithium-ion battery fires.
Larsson, Fredrik; Andersson, Petra; Blomqvist, Per; Mellander, Bengt-Erik
2017-08-30
Lithium-ion battery fires generate intense heat and considerable amounts of gas and smoke. Although the emission of toxic gases can be a larger threat than the heat, the knowledge of such emissions is limited. This paper presents quantitative measurements of heat release and fluoride gas emissions during battery fires for seven different types of commercial lithium-ion batteries. The results have been validated using two independent measurement techniques and show that large amounts of hydrogen fluoride (HF) may be generated, ranging between 20 and 200 mg/Wh of nominal battery energy capacity. In addition, 15-22 mg/Wh of another potentially toxic gas, phosphoryl fluoride (POF3), was measured in some of the fire tests. Gas emissions when using water mist as an extinguishing agent were also investigated. Fluoride gas emission can pose a serious toxic threat, and the results are crucial findings for risk assessment and management, especially for large Li-ion battery packs.
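To put the measured range in perspective, a quick scaling computation: the 20-200 mg/Wh range is the paper's result, while the 50 kWh pack size is an assumed example.

```python
# Back-of-the-envelope HF estimate using the paper's measured range of
# 20-200 mg of HF per Wh of nominal capacity. The pack size is assumed.
def hf_emission_grams(capacity_wh, mg_per_wh):
    return capacity_wh * mg_per_wh / 1000.0

pack_wh = 50_000  # e.g., a ~50 kWh automotive-scale pack (illustrative)
low, high = hf_emission_grams(pack_wh, 20), hf_emission_grams(pack_wh, 200)
print(f"potential HF release: {low/1000:.0f}-{high/1000:.0f} kg")
```

For this assumed pack, the range works out to roughly 1-10 kg of HF, which illustrates why the authors flag large packs as a particular risk-management concern.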
Generation and Recovery of Solid Wood Waste in the U.S.
Bob Falk; David McKeever
2012-01-01
North America has a vast system of hardwood and softwood forests, and the wood harvested from this resource is widely used in many applications. These include lumber and other building materials, furniture, crating, containers, pallets and other consumer goods. This wide array of wood products generates not only a large amount of industrial wood by-product during the...
Designing a low-cost pollution prevention plan to pay off at the University of Houston.
Bialowas, Yurika Diaz; Sullivan, Emmett C; Schneller, Robert D
2006-09-01
The University of Houston is located just south of downtown Houston, TX. Many different chemical substances are used in scientific research and teaching activities throughout the campus. These activities generate a significant amount of waste materials that must be discarded as regulated hazardous waste per U.S. Environmental Protection Agency (EPA) rules. The Texas Commission on Environmental Quality (TCEQ) is the state regulatory agency that has enforcement authority for EPA hazardous waste rules in Texas. Currently, the University is classified as a large quantity generator and generates >1000 kg per month of hazardous waste. In addition, the University has experienced a major surge in research activities during the past several years, and overall the quantity of the hazardous waste generated has increased. The TCEQ requires large quantity generators to prepare a 5-yr Pollution Prevention (P2) Plan, which describes efforts to eliminate or minimize the amount of hazardous waste generated. This paper addresses the design and development of a low-cost P2 plan with minimal implementation obstacles and strong payoff potentials for the University. The projects identified can be implemented with existing University staff resources. This benefits the University by enhancing its environmental compliance efforts, and the disposal cost savings can be used for other purposes. Other educational institutions may benefit by undertaking a similar process.
Development of large, horizontal-axis wind turbines
NASA Technical Reports Server (NTRS)
Baldwin, D. H.; Kennard, J.
1985-01-01
A program to develop large, horizontal-axis wind turbines is discussed. The program is directed toward developing the technology for safe, reliable, environmentally acceptable large wind turbines that can generate a significant amount of electricity at costs competitive with those of conventional electricity-generating systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Several ongoing projects in large-wind-turbine development are directed toward meeting the technology requirements for utility applications. The machines based on first-generation technology (Mod-OA and Mod-1) successfully completed their planned periods of experimental operation in June, 1982. The second-generation machines (Mod-2) are in operation at selected utility sites. A third-generation machine (Mod-5) is under contract. Erection and initial operation of the Mod-5 in Hawaii should take place in 1986. Each successive generation of technology increased reliability and energy capture while reducing the cost of electricity. These advances are being made by gaining a better understanding of the system-design drivers, improving the analytical design tools, verifying design methods with operating field data, and incorporating new technology and innovative designs. Information is given on the results from the first- and second-generation machines (Mod-OA, -1, and -2), the status of the Department of Interior WTS-4 machine, and the status of the third-generation wind turbine (Mod-5).
NASA Technical Reports Server (NTRS)
Williamson, P. R.; Banks, P. M.
1976-01-01
The objectives of the Tethered Balloon Current Generator experiment are to: (1) generate relatively large regions of thermalized, field-aligned currents, (2) produce controlled-amplitude Alfven waves, (3) study current-driven electrostatic plasma instabilities, and (4) generate substantial amounts of power or propulsion through the MHD interaction. A large balloon (a diameter of about 30 m) will be deployed with a conducting surface above the space shuttle at a distance of about 10 km. For a generally eastward directed orbit at an altitude near 400 km, the balloon, connected to the shuttle by a conducting wire, will be positive with respect to the shuttle, enabling it to collect electrons. At the same time, the shuttle will collect positive ions and, upon command, emit an electron beam to vary current flow in the system.
Inventories of woody residues and solid wood waste in the United States, 2002
David B. McKeever
2004-01-01
Large amounts of woody residues and wood waste are generated annually in the United States. In 2002, an estimated 240 million metric tons was generated during the extraction of timber from the Nation's forests, from forestry cultural operations, in the conversion of forest land to nonforest uses, in the initial processing of roundwood timber into usable products, in...
ENCAPSULATING WASTE DISPOSAL METHODS - PHASE I
Totally Connected Healthcare with TV White Spaces.
Katzis, Konstantinos; Jones, Richard W; Despotou, Georgios
2017-01-01
Recent technological advances in electronics, wireless communications and low-cost medical sensors have generated a plethora of Wearable Medical Devices (WMDs), which are capable of generating considerable amounts of new, unstructured real-time data. This contribution outlines how these data can be propagated to a healthcare system through the internet, using long-distance Radio Access Networks (RANs), and proposes a novel communication system architecture employing White Space Devices (WSDs) to provide seamless connectivity to its users. Initial findings indicate that the proposed communication system can facilitate broadband services over a large geographical area, taking advantage of the freely available TV White Spaces (TVWS).
Automatic generation of reports at the TELECOM SCC
NASA Astrophysics Data System (ADS)
Beltan, Thierry; Jalbaud, Myriam; Fronton, Jean Francois
In-orbit satellite follow-up produces a number of reports on a regular basis (daily, weekly, quarterly, annually). Most of these documents reuse the information of former issues with the increments of the last period of time. They are made up of text, tables, graphs or pictures. The system presented here is the SGMT (Systeme de Gestion de la Memoire Technique), which means Technical Memory Management System. It provides the system operators with tools to generate the greatest part of these reports as automatically as possible. It gives easy access to the reports, and the large amount of available memory enables the user to consult data on the complete lifetime of a satellite family.
Tools to Manage and Access the NOMAD Data
NASA Astrophysics Data System (ADS)
Trompet, L.; Vandaele, A. C.; Thomas, I. R.
2018-04-01
The NOMAD instrument on-board the ExoMars spacecraft will generate a large amount of data of the atmosphere of Mars. The Planetary Aeronomy Division at IASB is willing to make their tools and these data available to the whole planetary science community.
New insights into liquid chromatography for more eco-friendly analysis of pharmaceuticals.
Shaaban, Heba
2016-10-01
Greening the analytical methods used for the analysis of pharmaceuticals has been receiving great interest, aimed at eliminating or minimizing the amount of organic solvents consumed daily worldwide without loss in chromatographic performance. Traditional analytical LC techniques employed in pharmaceutical analysis consume tremendous amounts of hazardous solvents and consequently generate large amounts of waste. The monetary and ecological impact of using large amounts of solvents and of waste disposal motivated the analytical community to search for alternatives to replace polluting analytical methodologies with clean ones. In this context, implementing the principles of green analytical chemistry (GAC) in analytical laboratories is highly desired. This review gives a comprehensive overview of different green LC pathways for implementing GAC principles in analytical laboratories and focuses on evaluating the greenness of LC analytical procedures. This review presents green LC approaches for eco-friendly analysis of pharmaceuticals in industrial, biological, and environmental matrices.
Environmental status of livestock and poultry sectors in China under current transformation stage.
Qian, Yi; Song, Kaihui; Hu, Tao; Ying, Tianyu
2018-05-01
Intensive animal husbandry has aroused great environmental concern in many developed countries, while some developing countries are still undergoing environmental pollution from the livestock and poultry sectors. Driven by large demand, China has experienced a remarkable increase in dairy and meat production, especially in the transformation stage from conventional household breeding to large-scale industrial breeding. At the same time, a large amount of manure from the livestock and poultry sector is released into waterbodies and soil, causing eutrophication and soil degradation. This condition is reinforced in large-scale cultivation, where the amount of manure exceeds the soil nutrient capacity if not treated or utilized properly. Our research aims to analyze whether the transformation of raising scale is beneficial to the environment, and presents the latest status of the livestock and poultry sectors in China. The estimation of the pollutants generated and discharged from the livestock and poultry sector in China will facilitate the legislation of manure management. This paper analyzes the pollutants generated from the manure of the five principal commercial animals in different farming practices. The results show that fattening pigs contribute almost half of the pollutants released from manure. Moreover, beef cattle exert the largest environmental impact per unit of production, about 2-3 times that of pork and 5-20 times that of chicken. Animals raised in large-scale feedlots generate fewer pollutants than those raised in households. The shift towards industrial production of livestock and poultry is easier to manage from the environmental perspective, but adequate large-scale cultivation is encouraged. Regulation control, manure treatment and financial subsidies for manure treatment and utilization are recommended to achieve ecological agriculture in China.
Enabling Graph Appliance for Genome Assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun
2015-01-01
In recent years, there has been huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be huge, ranging from hundreds to billions of nucleotides, each varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real world ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.
USE OF ALTERED MICROORGANISMS FOR FIELD BIODEGRADATION OF HAZARDOUS MATERIALS
The large amount of hazardous waste generated and disposed of has given rise to environmental conditions requiring remedial treatment. The use of landfills has traditionally been a cost-effective means to dispose of waste. However, increased costs of transportation and decreasing n...
Composing Data Parallel Code for a SPARQL Graph Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste
Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
An SQL query generator for CLIPS
NASA Technical Reports Server (NTRS)
Snyder, James; Chirica, Laurian
1990-01-01
As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical and tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry-standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. Queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs, and submits for execution, an SQL query to the ORACLE RDBMS. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, several parallel or distributed expert systems access a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
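A toy version of such query generation (a declarative spec turned into SQL text) can be sketched as follows; the spec format and names are invented for this illustration and are far simpler than the CLIPS user function described.

```python
# Toy query generator: a declarative (table, columns, conditions) spec is
# turned into parameterised SQL text. The spec format is invented here and
# is not the CLIPS-like syntax used by the actual system.
def build_query(table, columns, conditions):
    where = " AND ".join(f"{col} {op} :{col}" for col, op, _ in conditions)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    return sql + (f" WHERE {where}" if where else "")

spec = ("buildings", ["name", "floor_area"], [("floor_area", ">", 1000)])
print(build_query(*spec))
# -> SELECT name, floor_area FROM buildings WHERE floor_area > :floor_area
```

In the real system the generated SQL is submitted to the ORACLE RDBMS and the result rows are asserted back into CLIPS as facts; the sketch stops at text generation.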
Large, horizontal-axis wind turbines
NASA Technical Reports Server (NTRS)
Linscott, B. S.; Perkins, P.; Dennett, J. T.
1984-01-01
Development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generating systems is presented. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. There are several ongoing large wind system development projects and applied research efforts directed toward meeting the technology requirements for utility applications. Detailed information on these projects is provided. The Mod-O research facility and current applied research efforts in aerodynamics, structural dynamics and aeroelasticity, composite and hybrid composite materials, and multiple system interaction are described. A chronology of component research and technology development for large, horizontal-axis wind turbines is presented. Wind characteristics, wind turbine economics, and the impact of wind turbines on the environment are reported. The need for continued wind turbine research and technology development is explored. Over 40 references are cited and a bibliography is included.
Privacy Preserving PCA on Distributed Bioinformatics Datasets
ERIC Educational Resources Information Center
Li, Xin
2011-01-01
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
ERIC Educational Resources Information Center
Demski, Jennifer
2013-01-01
The University of San Diego (USD) and Point Loma Nazarene University (PLNU) are licensing the sun. Both California schools are generating solar power on campus without having to sink large amounts of capital into equipment and installation. By negotiating power purchasing agreements (PPAs) with Amsolar and Perpetual Energy Systems, respectively,…
DOT National Transportation Integrated Search
2013-03-01
Coal power plants generate approximately 50% of the electricity in the United States. As a result, large amounts of coal combustion byproducts, especially fly ash, are produced annually. Only 40% of the fly ash (mainly C and F-type classificati...
Minimising generation of acid whey during Greek yoghurt manufacturing.
Uduwerella, Gangani; Chandrapala, Jayani; Vasiljevic, Todor
2017-08-01
Greek yoghurt, a popular dairy product, generates large amounts of acid whey as a by-product during manufacturing. Post-processing treatment of this stream presents one of the main concerns for the industry. The objective of this study was to manipulate the initial milk total solids content (15, 20 or 23 g/100 g) by addition of milk protein concentrate, thus reducing whey expulsion. Such an adjustment was investigated from the technological standpoint, including starter culture performance and the chemical and physical properties of the manufactured Greek yoghurt and the generated acid whey. A comparison was made to commercially available products. Increasing the protein content in regular yoghurt reduced the amount of acid whey during whey draining. This protein fortification also enhanced the Lb. bulgaricus growth rate and proteolytic activity. The best structural properties, including higher gel strength and lower syneresis, were observed in the Greek yoghurt produced with 20 g/100 g initial milk total solids compared to manufactured or commercially available products, while acid whey generation was lowered due to a lower drainage requirement.
Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana
2016-01-01
With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and Kernel-Adatron, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
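The classical Kernel-Adatron procedure that underlies this work is compact enough to sketch; below is a minimal Python version with an RBF kernel and a toy dataset, both assumptions for illustration, not the paper's evolutionary variant.

```python
# Minimal classical Kernel-Adatron trainer with an RBF kernel. The kernel
# width, learning rate, and toy dataset are assumed for illustration.
import numpy as np

def kernel_adatron(X, y, gamma=0.5, eta=0.1, epochs=100):
    # Precompute the RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    K = np.exp(-gamma * np.sum((X[:, None] - X[None, :])**2, axis=2))
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        margins = y * (K @ (alpha * y))   # y_i * f(x_i) for every point
        # Adatron update: grow alpha where the margin is below 1, clip at 0
        alpha = np.maximum(0.0, alpha + eta * (1.0 - margins))
    return alpha

# Tiny linearly separable toy problem, labels in {-1, +1}.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
print("support coefficients:", np.round(kernel_adatron(X, y), 3))
```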
pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data
NASA Astrophysics Data System (ADS)
Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.
The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data being generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines, such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is Unix-based open-source software (BSD licensed) written in Python.
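The PCA idea behind this kind of trajectory compression can be illustrated in plain NumPy (this is not the pyPcazip API): frames are projected onto a few principal components and reconstructed from them.

```python
# PCA compression sketch for a fake trajectory: project frames onto the top
# k principal components and reconstruct. Plain NumPy, not the pyPcazip API.
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_coords = 500, 300            # e.g., 100 atoms x 3 coordinates
traj = rng.normal(size=(n_frames, n_coords)).cumsum(axis=0)  # fake trajectory

mean = traj.mean(axis=0)
centered = traj - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10                                    # keep only k principal components
scores = centered @ Vt[:k].T              # compressed representation
recon = scores @ Vt[:k] + mean            # lossy reconstruction
ratio = traj.size / (scores.size + Vt[:k].size + mean.size)
print(f"compression ~{ratio:.1f}x, "
      f"RMS error {np.sqrt(((traj - recon)**2).mean()):.3f}")
```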
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized, given a total amount of work available, as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
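The effect of intergenerational correlation on tally statistics can be illustrated with a toy model in which generation-mean tallies follow an AR(1) process (an assumed stand-in for the fission-site correlation discussed above): the true variance of the run mean greatly exceeds the naive uncorrelated estimate.

```python
# Toy illustration of why intergenerational correlation matters: with an
# AR(1) correlation (rho) between generation tallies, the true variance of
# the run mean far exceeds the naive uncorrelated sigma^2/N estimate.
import numpy as np

rng = np.random.default_rng(3)
rho, n_gens, n_runs = 0.9, 1000, 500

def run_mean():
    x, total = 0.0, 0.0
    for _ in range(n_gens):
        x = rho * x + rng.normal() * np.sqrt(1 - rho**2)  # unit-variance AR(1)
        total += x
    return total / n_gens

means = np.array([run_mean() for _ in range(n_runs)])
print(f"observed var of mean : {means.var():.5f}")
print(f"naive sigma^2/N      : {1.0 / n_gens:.5f}")
# For AR(1) the inflation factor is (1 + rho)/(1 - rho), i.e. ~19x here.
```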
Geo-reCAPTCHA: Crowdsourcing large amounts of geographic information from earth observation data
NASA Astrophysics Data System (ADS)
Hillen, Florian; Höfle, Bernhard
2015-08-01
The reCAPTCHA concept provides a large amount of valuable information for various applications. First, it provides security, e.g., for a form on a website, by means of a test that only a human could solve. Second, the effort of the user for this test is used to generate additional information, e.g., digitization of books or identification of house numbers. In this work, we present a concept for adapting the reCAPTCHA idea to create user-generated geographic information from earth observation data, and the requirements during the conception and implementation are depicted in detail. Furthermore, the essential parts of a Geo-reCAPTCHA system are described and afterwards transferred to a prototype implementation. An empirical user study is conducted to investigate the Geo-reCAPTCHA approach, assessing the time and quality of the resulting geographic information. Our results show that a Geo-reCAPTCHA can be solved by the users of our study on building digitization in a short amount of time (19.2 s on average) with an overall average digitization accuracy of 82.2%. In conclusion, Geo-reCAPTCHA has the potential to be a reasonable alternative to the typical reCAPTCHA, and to become a new data-rich channel of crowdsourced geographic information.
An extremely simple green approach is described that generates bulk quantities of nanofibers of the electronic polymer polyaniline in fully reduced state (leucoemeraldine form) in one step without using any reducing agent, surfactants, and/or large amounts of insoluble templates....
Jensen et al. investigated aspects of the normal reproductive biology of the fathead minnow (FHM, P. promelas), and subsequent studies have generated a large amount of additional reproductive data for endpoints such as plasma steroid hormone and vitellogenin concentrations, spa...
Converting waste gases from pulp mills into value-added chemicals
Engineering, Miami University, 64 J Engineering Building, Oxford, OH, 45056 The pulp and paper industry generates large amounts of HAPs, VOCs and total reduced sulfur compounds (TRSs) from various sources. As the industry is moving to a sustainable future, the U.S. EPA and Mia...
Stakeholder engagement and feedback efforts to increase use of the iCSS ToxCast Dashboard (SETAC)
In the era of ‘Big Data’ research, many government agencies are engaged in generating and making public large amounts of data that underlie both research and regulatory decisions. Public access increases the ‘democratization’ of science by enhancing transparency and access. Howev...
Separation methods and chemical and nutritional characteristics of tomato pomace
USDA-ARS?s Scientific Manuscript database
Tomato processing generates a large amount of pomace as a low value by-product primarily used as livestock feed or disposed. The objectives of this research were to investigate the chemical and nutritional characteristics and determine effective separation methods of peel and seed of commercial toma...
Major soybean maturity gene haplotypes revealed by SNPViz analysis of 72 sequenced soybean genomes
USDA-ARS?s Scientific Manuscript database
In this Genomics Era, vast amounts of next generation sequencing data have become publicly-available for multiple genomes across hundreds of species. Analysis of these large-scale datasets can become cumbersome, especially when comparing nucleotide polymorphisms across many samples within a dataset...
Knockout of an outer membrane protein operon of anaplasma marginale by transposon mutagenesis
USDA-ARS?s Scientific Manuscript database
Large amounts of data generated by genomics, transcriptomics and proteomics technologies have increased our understanding of the biology of Anaplasma marginale. However, these data have also led to new assumptions that require testing, ideally through classic genetic mutation. One example is the def...
Self-Assembled Nano-energetic Gas Generators based on Bi2O3
NASA Astrophysics Data System (ADS)
Hobosyan, Mkhitar; Trevino, Tyler; Martirosyan, Karen
2012-10-01
Nanoenergetic Gas-Generators are formulations that rapidly release a large amount of gaseous products and generate a fast-moving thermal wave. They are mainly based on thermite systems, which are pyrotechnic mixtures of metal powders (fuel: Al, Mg, etc.) and metal oxides (oxidizer: Bi2O3, Fe2O3, WO3, MoO3, etc.) that can undergo an exothermic oxidation-reduction reaction referred to as a thermite reaction. A thermite reaction releases a large amount of energy and can rapidly generate extremely high temperatures. The intimate contact between the fuel and oxidizer can be enhanced by the use of nano- instead of micro-particles. The contact area between oxidizer and metal particles depends on the method of mixture preparation. In this work we utilize self-assembly processes, which use electrostatic forces to produce ordered and self-organized binary systems. In this process the intimate contact is significantly enhanced, giving the ability to build an energetic material at the molecular level, which is crucial for the pressure discharge efficiency of nano-thermites. DTA-TGA, zeta-size analysis and FTIR techniques were used to characterize the Bi2O3 particles. The self-assembly of aluminum and Bi2O3 was conducted in a sonic bath with appropriate solvents and linkers. The resultant thermite pressure discharge values were tested in a modified Parr reactor. In general, the self-assembled thermites give much higher pressure discharge values than thermites prepared with the conventional roll-mixing technique.
Gigwa-Genotype investigator for genome-wide analyses.
Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre
2016-06-06
Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
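Gigwa's genotype-pattern filtering maps naturally onto MongoDB queries. The fragment below is a hedged pymongo illustration of that style of query; the collection and field names (variants, ann.effect, genotypes) are invented for the example and do not reflect Gigwa's actual schema.

```python
# Illustrative MongoDB genotype filtering in the spirit of Gigwa;
# database, collection and field names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["genotyping"]

# Find SNPs on chromosome 1 annotated as missense where sample S1 is
# homozygous alternate and sample S2 carries at least one reference allele.
cursor = db.variants.find({
    "chrom": "1",
    "type": "SNP",
    "ann.effect": "missense_variant",
    "genotypes.S1": "1/1",
    "genotypes.S2": {"$in": ["0/0", "0/1"]},
})
for variant in cursor.limit(10):
    print(variant["chrom"], variant["pos"], variant["ref"], variant["alt"])
```

Pushing both variant-feature and genotype-pattern predicates into a single indexed query is what gives this design its scalability on large genotyping datasets.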
Fast calculation method for computer-generated cylindrical holograms.
Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi
2008-07-01
Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms; there are few reports of computer-generated cylindrical holograms. The scarcity of computer-generated cylindrical holograms is due to the limited spatial resolution of output devices: to fulfill the sampling theorem, we have to make a large hologram or use a small object. In addition, when calculating such a large fringe, the amount of computation increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast fringe calculation. We then print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.
Chun, Young Nam; Jeong, Byeo Ri
2017-07-28
Microwave drying-pyrolysis or drying-gasification characteristics were examined to convert sewage sludge into energy and resources. The gasification was carried out with carbon dioxide as a gasifying agent. The results were compared with those of a conventional heating-type electric furnace to compare the product characteristics of both. Through pyrolysis or gasification, gas, tar, and char were generated as products. The produced gas was the largest product fraction of each process, followed by the sludge char and the tar. During the pyrolysis process, the main components of the produced gas were hydrogen and carbon monoxide, with a small amount of hydrocarbons such as methane and ethylene. In the gasification process, however, the amount of carbon monoxide was greater than the amount of hydrogen. In microwave gasification, a large amount of heavy tar was produced. Benzene was the most abundant component of the light tar generated by either pyrolysis or gasification. Ammonia and hydrogen cyanide, which are precursors of NOx, were also generated. In the microwave heating method, the sludge char produced by pyrolysis and gasification had pores in the mesopore range. These results indicate that the gas obtained from the microwave pyrolysis or gasification of wet sewage sludge can be used as an alternative fuel, but the tar and NOx precursors in the produced gas must be treated. Sludge char can be used as a biomass solid fuel or, if necessary, as a tar removal adsorbent.
Lightweight Integrated Solar Array (LISA): Providing Higher Power to Small Spacecraft
NASA Technical Reports Server (NTRS)
Johnson, Les; Carr, John; Fabisinski, Leo; Lockett, Tiffany Russell
2015-01-01
Affordable and convenient access to electrical power is essential for all spacecraft and is a critical design driver for the next generation of smallsats, including CubeSats, which are currently extremely power limited. The Lightweight Integrated Solar Array (LISA), a concept designed, prototyped, and tested at the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama provides an affordable, lightweight, scalable, and easily manufactured approach for power generation in space. This flexible technology has many wide-ranging applications from serving small satellites to providing abundant power to large spacecraft in GEO and beyond. By using very thin, ultraflexible solar arrays adhered to an inflatable or deployable structure, a large area (and thus large amount of power) can be folded and packaged into a relatively small volume.
NASA Astrophysics Data System (ADS)
Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko
In Japan, from the viewpoints of global warming countermeasures and energy security, there are high expectations for establishing a smart grid: a power system into which a large amount of generation from renewable energy sources, such as wind power and photovoltaics, can be integrated. Measures for power system stability and reliability are necessary because large-scale integration of these renewable energy sources causes problems in power systems, e.g. frequency fluctuation and distribution voltage rise, and the Battery Energy Storage System (BESS) is one effective solution to these problems. Due to the high cost of BESS, our research group has studied the application of controllable loads, such as Heat Pump Water Heaters (HPWHs) and Electric Vehicles (EVs), to power system control in order to reduce the required BESS capacity. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations conducted on a power system model with a large integration of wind power and photovoltaic generation.
Energy Storage Requirements for Achieving 50% Penetration of Solar Photovoltaic Energy in California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul; Margolis, Robert
2016-09-01
We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
Parisi Kern, Andrea; Ferreira Dias, Michele; Piva Kulakowski, Marlova; Paulo Gomes, Luciana
2015-05-01
Reducing construction waste is becoming a key environmental issue in the construction industry. The quantification of waste generation rates in the construction sector is an invaluable management tool in supporting mitigation actions. However, the quantification of waste can be a difficult process because of the specific characteristics and the wide range of materials used in different construction projects. Large variations are observed in the methods used to predict the amount of waste generated because of the range of variables involved in construction processes and the different contexts in which these methods are employed. This paper proposes a statistical model to determine the amount of waste generated in the construction of high-rise buildings by assessing the influence of the design process and production system, often mentioned as the major culprits behind the generation of waste in construction. Multiple regression was used to conduct a case study based on multiple sources of data from eighteen residential buildings. The resulting statistical model relates the dependent variable (amount of waste generated) to independent variables associated with the design and the production system used. The best regression model obtained from the sample data had an adjusted R^2 value of 0.694, meaning that it explains approximately 69% of the variation in waste generation for similar constructions. Most independent variables showed a low determination coefficient when assessed in isolation, which emphasizes the importance of assessing their joint influence on the response (dependent) variable.
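As a reading aid, the snippet below fits a multiple regression on synthetic data for eighteen buildings and reports the adjusted R^2 statistic quoted above; the predictor names and data are purely illustrative assumptions, not the study's dataset.

```python
# A hedged multiple-regression sketch with adjusted R^2; data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 18                                   # eighteen buildings, as in the study
X = rng.normal(size=(n, 3))              # e.g. floor area, storeys, formwork type
y = 2.0 + X @ np.array([1.5, -0.8, 0.6]) + rng.normal(scale=0.5, size=n)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)
p = X.shape[1]
# Adjusted R^2 penalizes R^2 for the number of predictors p.
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```

The adjustment matters here precisely because the sample is small relative to the number of candidate design and production variables.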
The east coast petroleum province: Science and society
Jordan, R.R.
1999-01-01
The U.S. Atlantic offshore, especially the mid-Atlantic, was an exciting exploration area from the 1970s into the 1980s. Much pioneering 'frontier' activity in both scientific and policy matters occurred in this area. Although production was not achieved, objective geological evidence indicates that the province does have potential. Major population centers of the mid-Atlantic area demand large amounts of energy and enormous amounts of crude and product are shipped through East Coast waters. Nevertheless, exploration has been shut down by moratoria, environmental concerns, and international pricing. It is suggested that the province will be revisited in the future and that the geologic and environmental information that has been generated at great cost should be preserved for use by the next generation of explorationists and policy-makers.
Paper-Based Analytical Devices Relying on Visible-Light-Enhanced Glucose/Air Biofuel Cells.
Wu, Kaiqing; Zhang, Yan; Wang, Yanhu; Ge, Shenguang; Yan, Mei; Yu, Jinghua; Song, Xianrang
2015-11-04
A strategy that combines visible-light-enhanced biofuel cells (BFCs) and an electrochemical immunosensor in paper-based analytical devices was proposed for sensitive detection of the carbohydrate antigen 15-3 (CA15-3). A gold nanoparticle-modified paper electrode with large surface area and good conductivity was applied as an effective matrix for the primary antibodies. Glucose dehydrogenase (GDH)-modified gold-silver bimetallic nanoparticles were used as the bioanodic biocatalyst and signal magnification label. Poly(terthiophene) (pTTh), a photoresponsive conducting polymer, served as the cathode catalyst for the reduction of oxygen upon illumination by visible light. In the bioanode, electrons were generated through the oxidation of glucose catalyzed by GDH. The amount of electrons is determined by the amount of GDH, which in turn depends on the amount of CA15-3. In the cathode, electrons from the bioanode could combine with the holes generated in the HOMO energy level of the cathode catalyst pTTh. Meanwhile, high-energy photoexcited electrons were generated in the LUMO energy level and participated in the oxygen reduction reaction, finally resulting in an increased current and a decreased overpotential. Based on the current signal, simple and efficient detection of CA15-3 was achieved.
A method for interactive specification of multiple-block topologies
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Mccann, Karen M.
1991-01-01
A method is presented for dealing with the vast amount of topological and other data which must be specified to generate a multiple-block computational grid. Specific uses of the graphical capabilities of a powerful scientific workstation are described which reduce the burden on the user of collecting and formatting such large amounts of data. A program to implement this method, 3DPREP, is described. A plotting transformation algorithm, some useful software tools, notes on programming, and a database organization are also presented. Example grids developed using the method are shown.
Fog-Based Two-Phase Event Monitoring and Data Gathering in Vehicular Sensor Networks
Yang, Fan; Su, Jinsong; Zhou, Qifeng; Wang, Tian; Zhang, Lu; Xu, Yifan
2017-01-01
Vehicular nodes are equipped with more and more sensing units, and a large amount of sensing data is generated. Recently, more and more research considers cooperative urban sensing as the heart of intelligent and green city traffic management. The key components of the platform will be a combination of a pervasive vehicular sensing system and a central control and analysis system, where data gathering is a fundamental component. However, data gathering and monitoring are also challenging issues in vehicular sensor networks because of the large amount of data and the dynamic nature of the network. In this paper, we propose an efficient continuous event-monitoring and data-gathering framework based on fog nodes in vehicular sensor networks. A fog-based two-level threshold strategy is adopted to suppress unnecessary data uploads and transmissions. In the monitoring phase, nodes sense the environment in a low-cost sensing mode and generate sensed data. When the probability of the event is high and exceeds some threshold, nodes transfer to the event-checking phase, and some nodes are selected to switch to a deep sensing mode to generate more accurate data about the environment. Furthermore, the framework adaptively adjusts the threshold to upload a suitable amount of data for decision making, while at the same time suppressing unnecessary message transmissions. Simulation results showed that the proposed scheme could reduce data transmissions by more than 84 percent compared with existing algorithms, while still detecting events and gathering the event data. PMID:29286320
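The two-level threshold idea can be captured in a few lines of state-machine logic. The sketch below is a schematic reading of the strategy, not the authors' implementation; the threshold values and decision rules are assumptions.

```python
# Schematic two-phase monitoring logic; thresholds are illustrative.
LOW_T, HIGH_T = 0.3, 0.7

def node_step(event_prob, mode):
    # Returns (next_mode, whether this reading is uploaded).
    if mode == "monitor":
        if event_prob > LOW_T:
            return "deep", True         # enter event-checking, upload data
        return "monitor", False         # stay cheap, suppress transmission
    # Deep sensing mode: keep sensing until the event clearly subsides.
    if event_prob < LOW_T:
        return "monitor", False
    return "deep", event_prob > HIGH_T  # only confident readings are sent

mode, uploads = "monitor", 0
for p in [0.1, 0.2, 0.5, 0.8, 0.9, 0.4, 0.2]:
    mode, sent = node_step(p, mode)
    uploads += sent
print("uploads:", uploads)              # most samples are suppressed
```

Most readings never leave the node, which is the mechanism behind the reported reduction in transmissions.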
Research on grid connection control technology of doubly-fed wind generators
NASA Astrophysics Data System (ADS)
Ling, Li
2017-01-01
The composition and working principle of a variable-speed constant-frequency doubly-fed wind power generation system are discussed in this thesis. On the basis of theoretical analysis and modeling, a simulation control system for doubly-fed wind power generation is designed around a TMS320F2407 digital signal processor (DSP), and a large amount of experimental research has been carried out, mainly comprising variable-speed constant-frequency, constant-voltage, and grid-connection control experiments. The results show that the design of the simulation control system is sound and can meet the needs of experimental research.
Removal of Heavy Metal Contamination from Peanut Skin Extracts by Waste Biomass Adsorbents
USDA-ARS?s Scientific Manuscript database
Each year, 3.6 million pounds of peanuts are harvested in the United States. Consequent processing, however, generates large amounts of waste biomass as only the seed portion of the fruit is consumed. The under-utilization of waste biomass is a lost economic opportunity to the industry. In particula...
A Study of Student Completion Strategies in a Likert-Type Course Evaluation Survey
ERIC Educational Resources Information Center
Gee, Nick
2017-01-01
This article investigates the motivations and strategies employed by respondents to a Likert-style course evaluation at a UK university. These attitude surveys, generating large amounts of quantitative data, are commonly used in quality assurance procedures across UK higher education institutions. Similar student survey results are now scrutinised…
ERIC Educational Resources Information Center
Andrade, Alejandro; Danish, Joshua A.; Maltese, Adam V.
2017-01-01
Interactive learning environments with body-centric technologies lie at the intersection of the design of embodied learning activities and multimodal learning analytics. Sensing technologies can generate large amounts of fine-grained data automatically captured from student movements. Researchers can use these fine-grained data to create a…
Long-term ecological research programs represent tremendous investments in human labor and capital. The amount of data generated is staggering and potentially beyond the capacity of most research teams to fully explore. Since the funding of these programs comes predominately fr...
USDA-ARS?s Scientific Manuscript database
The fermentation system of mixed ruminal bacteria is capable of generating large amounts of short-chain volatile fatty acids (VFA) via the carboxylate platform in vitro. These VFAs are subject to elongation to larger, more energy-dense products through reverse beta-oxidation. This study examined the...
USDA-ARS?s Scientific Manuscript database
A large amount of phenotypic trait data are being generated in regional trials that are implemented as part of the Specialty Crop Research Initiative (SCRI) project entitled “Establishing an Eastern Broccoli Industry”. These data are used to identify the best entries in the trials for inclusion in ...
Toward a Problematic of Silence in Action Research
ERIC Educational Resources Information Center
Mazzei, Lisa A.
2007-01-01
Action researchers often generate large amounts of textual material in the form of notes and transcripts, failing to account for those thoughts that we and our research participants silently voice. As such, action research that attempts to engage practitioners in self reflexivity and textual analysis is a fertile site for a consideration of how…
Human-Level Natural Language Understanding: False Progress and Real Challenges
ERIC Educational Resources Information Center
Bignoli, Perrin G.
2013-01-01
The field of Natural Language Processing (NLP) focuses on the study of how utterances composed of human-level languages can be understood and generated. Typically, there are considered to be three intertwined levels of structure that interact to create meaning in language: syntax, semantics, and pragmatics. Not only is a large amount of…
Catalytic Generation of Lift Gases for Balloons
NASA Technical Reports Server (NTRS)
Zubrin, Robert; Berggren, Mark
2011-01-01
A lift-gas cracker (LGC) is an apparatus that generates a low-molecular-weight gas (mostly hydrogen with smaller amounts of carbon monoxide and/or carbon dioxide) at low gauge pressure by methanol reforming. LGCs are undergoing development for use as sources of buoyant gases for filling zero-gauge-pressure meteorological and scientific balloons in remote locations where heavy, high-pressure helium cylinders are not readily available. LGCs could also be used aboard large, zero-gauge-pressure, stratospheric research balloons to extend the duration of flight.
Large dynamic range radiation detector and methods thereof
Marrs, Roscoe E [Livermore, CA]; Madden, Norman W [Sparks, NV]
2012-02-14
According to one embodiment, a radiation detector comprises a scintillator and a photodiode optically coupled to the scintillator. The radiation detector also includes a bias voltage source electrically coupled to the photodiode, a first detector operatively electrically coupled to the photodiode for generating a signal indicative of a level of a charge at an output of the photodiode, and a second detector operatively electrically coupled to the bias voltage source for generating a signal indicative of an amount of current flowing through the photodiode.
TransAtlasDB: an integrated database connecting expression data, metadata and variants
Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J
2018-01-01
High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting results can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for the accurate storage of the large amounts of data derived from RNAseq analysis, along with methods of interacting with the database, either through the web interface or via command-line data management workflows, written in Perl, whose functionality simplifies the storage and manipulation of the massive amounts of data generated by RNAseq analysis. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters
NASA Technical Reports Server (NTRS)
Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott
2013-01-01
A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a dynamic planetary dust environment, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data of interest, reducing the amount of data handled by a factor of more than 10^3. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.
Analyzing large scale genomic data on the cloud with Sparkhit
Huang, Liren; Krüger, Jan
2018-01-01
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge on large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run-time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
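The Spark extended MapReduce style that Sparkhit builds on can be gestured at with a short PySpark fragment. This is not Sparkhit's API: the input path and record layout are assumptions, and the job simply counts how many reads were recruited to each reference.

```python
# A hedged PySpark sketch of the MapReduce pattern; paths are placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="fragment-count-sketch")

# Assume a tab-separated text file of (read_id, best_reference) pairs.
pairs = sc.textFile("hdfs:///data/recruited_reads.tsv")
counts = (pairs.map(lambda line: line.split("\t")[1])   # keep reference name
               .map(lambda ref: (ref, 1))               # emit (key, 1)
               .reduceByKey(lambda a, b: a + b))        # sum per reference
for ref, n in counts.take(10):
    print(ref, n)
sc.stop()
```

Because every map and reduce stage is data-parallel, the same script scales from a laptop to the cloud clusters used in the paper's terabyte-scale runs.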
Generation of hydroxyl radicals by urban suspended particulate air matter. The role of iron ions
NASA Astrophysics Data System (ADS)
Valavanidis, Athanasios; Salika, Anastasia; Theodoropoulou, Anna
Recent epidemiologic studies showed statistical associations between particulate air pollution in urban areas and increased morbidity and mortality, even at levels well within current national air quality standards. Inhalable particulate matter (PM10) can penetrate into the lower airways, where it can cause acute and chronic lung injury by generating toxic oxygen free radicals. We tested inhalable total suspended particulates (TSP) from the Athens area, diesel and gasoline exhaust particles (DEP and GEP), and urban street dusts, by Electron Paramagnetic Resonance (EPR). All particulates can generate hydroxyl radicals (HO•) in aqueous buffered solutions in the presence of hydrogen peroxide. Results showed that the oxidant-generating activity is related to soluble iron ions. Leaching studies showed that urban particulate matter can release large amounts of Fe3+ and lesser amounts of Fe2+, as shown in other studies. Direct evidence of HO• was confirmed by spin trapping with DMPO and measurement of the DMPO-OH adduct by EPR. The evidence was supported by the use of the chelator EDTA, which increases the EPR signal, and by the inhibition of the radical-generating activity by desferrioxamine and/or antioxidants (D-mannitol, sodium benzoate).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehto, J.; Ikaeheimonen, T.K.; Salbu, B.
The fallout from a major accident at a nuclear plant may result in wide-scale contamination of the environment. Cleanup of contaminated areas is of special importance if these areas are populated or cultivated. All cleanup measures generate high amounts of radioactive waste, which have to be treated and disposed of in a safe manner. Scenarios assessing the amounts and activity concentrations of radioactive wastes for various cleanup measures after severe nuclear accidents have been worked out for urban, forest and agricultural areas. These scenarios are based on contamination levels and areas of contaminated lands from a model accident, which simulates a worst-case accident at a nuclear power plant. Amounts and activity concentrations of cleanup wastes are not only dependent on the contamination levels and areas of affected lands, but also on the type of deposition, wet or dry, on the time between the deposition and the cleanup work, on the season in which the deposition took place, and finally on the level of cleanup work. In this study practically all types of cleanup wastes were considered, whether or not the corresponding cleanup measures are cost-effective or justified. All cleanup measures are shown to create large amounts of radioactive wastes, but the amounts, as well as the activity concentrations, vary widely from case to case.
Determining national greenhouse gas emissions from waste-to-energy using the Balance Method.
Schwarzböck, Therese; Rechberger, Helmut; Cencic, Oliver; Fellner, Johann
2016-03-01
Different directives of the European Union require operators of waste-to-energy (WTE) plants to report the amount of electricity that is produced from biomass in the waste feed, as well as the amount of fossil CO2 emissions generated by the combustion of fossil waste materials. This paper describes the application of the Balance Method for determining the overall amount of fossil, and thus climate-relevant, CO2 emissions from waste incineration in Austria. The results of 10 Austrian WTE plants (annual waste throughput of around 2,300 kt) demonstrate large seasonal variations in the specific fossil CO2 emissions of the plants as well as large differences between the facilities (annual means range from 32±2 to 51±3 kg CO2,foss/GJ heating value). An overall amount of around 924 kt/yr of fossil CO2 for all 10 WTE plants is determined. In comparison, biogenic (climate-neutral) CO2 emissions amount to 1,187 kt/yr, which corresponds to 56% of the total CO2 emissions from waste incineration. The total energy input via waste feed to the 10 facilities is about 22,500 TJ/yr, of which around 48% can be assigned to biogenic and thus renewable sources.
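A quick arithmetic check of the quoted shares, using only the figures given in the abstract:

```python
# Recomputing the reported shares from the abstract's own numbers.
fossil_kt, biogenic_kt = 924.0, 1187.0
total = fossil_kt + biogenic_kt
print(f"biogenic share of CO2: {biogenic_kt / total:.0%}")    # ~56%
energy_tj, renewable_share = 22500.0, 0.48
print(f"renewable energy input: {energy_tj * renewable_share:.0f} TJ/yr")
```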
Sharma, Davinder; Golla, Naresh; Singh, Dheer; Onteru, Suneel K
2018-03-01
The next-generation sequencing (NGS)-based RNA sequencing (RNA-Seq) and transcriptome profiling offers an opportunity to unveil complex biological processes. Successful RNA-Seq and transcriptome profiling requires a large amount of high-quality RNA. However, NGS-quality RNA isolation is extremely difficult from recalcitrant adipose tissue (AT) with high lipid content and low cell numbers. Further, the amount and biochemical composition of AT lipid vary depending upon the animal species, which can pose different degrees of resistance to RNA extraction. Currently available approaches may work effectively in one species but can be almost unproductive in another. Herein, we report a two-step protocol for the extraction of NGS-quality RNA from AT across a broad range of animal species.
Frame Works: Using Metaphor in Theory and Practice in Information Literacy
ERIC Educational Resources Information Center
Holliday, Wendy
2017-01-01
The ACRL Framework for Information Literacy in Higher Education generated a large amount of discourse during its development and adoption. All of this discourse is rich in metaphoric language that can be used as a tool for critical reflection on teaching and learning, information literacy, and the nature and role of theory in the practice of…
The use of PacBio and Hi-C data in de novo assembly of the goat genome
USDA-ARS?s Scientific Manuscript database
Generating de novo reference genome assemblies for non-model organisms is a laborious task that often requires a large amount of data from several sequencing platforms and cytogenetic surveys. By using PacBio sequence data and new library creation techniques, we present a de novo, high quality refer...
Demands for quick and accurate life cycle assessments create a need for methods to rapidly generate reliable life cycle inventories (LCI). Data mining is a suitable tool for this purpose, especially given the large amount of available governmental data. These data are typically a...
Evidence for ultramafic lavas on Syrtis Major
NASA Technical Reports Server (NTRS)
Reyes, D. P.; Christensen, P. R.
1993-01-01
Pyroxene compositions from ISM data, compared with the pyroxene compositions of Apollo 12 pigeonite basalt, the Shergottite meteorite, and pyroxenitic komatiite, show that the Syrtis Major volcanic materials are consistent with pyroxenitic komatiite. Pyroxenitic komatiite is significant because on Earth it contains a large amount of MgO, implying generation under unique circumstances compared with typical basaltic compositions.
Yamashita, Tomoko; Miyamoto, Yuki; Bando, Yoshio; Ono, Takashi; Kobayashi, Sakurako; Doi, Ayano; Araki, Toshihiro; Kato, Yosuke; Shirakawa, Takayuki; Suzuki, Yutaka; Yamauchi, Junji; Yoshida, Shigetaka; Sato, Naoya
2017-01-01
Oligodendrocytes myelinate axons and form myelin sheaths in the central nervous system. The development of therapies for demyelinating diseases, including multiple sclerosis and leukodystrophies, is a challenge because the pathogenic mechanisms of disease remain poorly understood. Primate pluripotent stem cell-derived oligodendrocytes are expected to help elucidate the molecular pathogenesis of these diseases. Oligodendrocytes have been successfully differentiated from human pluripotent stem cells. However, it is challenging to prepare large amounts of oligodendrocytes over a short amount of time because of manipulation difficulties under conventional primate pluripotent stem cell culture methods. We developed a proprietary dissociated monolayer and feeder-free culture system to handle pluripotent stem cell cultures. Because the dissociated monolayer and feeder-free culture system improves the quality and growth of primate pluripotent stem cells, these cells could potentially be differentiated into any desired functional cells and consistently cultured in large-scale conditions. In the current study, oligodendrocyte progenitor cells and mature oligodendrocytes were generated within three months from monkey embryonic stem cells. The embryonic stem cell-derived oligodendrocytes exhibited in vitro myelinogenic potency with rat dorsal root ganglion neurons. Additionally, the transplanted oligodendrocyte progenitor cells differentiated into myelin basic protein-positive mature oligodendrocytes in the mouse corpus callosum. This preparative method was used for human induced pluripotent stem cells, which were also successfully differentiated into oligodendrocyte progenitor cells and mature oligodendrocytes that were capable of myelinating rat dorsal root ganglion neurons. Moreover, it was possible to freeze, thaw, and successfully re-culture the differentiating cells. These results showed that embryonic stem cells and human induced pluripotent stem cells maintained in a dissociated monolayer and feeder-free culture system have the potential to generate oligodendrocyte progenitor cells and mature oligodendrocytes in vitro and in vivo. This culture method could be applied to prepare large amounts of oligodendrocyte progenitor cells and mature oligodendrocytes in a relatively short amount of time.
Geist, E.; Yoshioka, S.
1996-01-01
The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional finite-element models. The propagation of the resulting tsunami is modeled both near the coastline, using the two-dimensional (x-t) Peregrine equations that include the effects of dispersion, and near the source, using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow-dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis, but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated by anomalous 'tsunami earthquakes' that rupture within the accretionary wedge, in comparison to interplate thrust earthquakes of similar magnitude.
Lessons learnt on the analysis of large sequence data in animal genomics.
Biscarini, F; Cozzi, P; Orozco-Ter Wengel, P
2018-04-06
The 'omics revolution has made a large amount of sequence data available to researchers and the industry. This has had a profound impact on the field of bioinformatics, stimulating unprecedented advancements in this discipline. Mostly, this is looked at from the perspective of human 'omics, in particular human genomics. Plant and animal genomics, however, have also been deeply influenced by next-generation sequencing technologies, with several genomics applications now popular among researchers and the breeding industry. Genomics tends to generate huge amounts of data, and genomic sequence data account for an increasing proportion of big data in biological sciences, due largely to decreasing sequencing and genotyping costs and to large-scale sequencing and resequencing projects. The analysis of big data poses a challenge to scientists, as data gathering currently takes place at a faster pace than data processing and analysis, and the associated computational burden is increasingly taxing, making even simple manipulation, visualization and transfer of data cumbersome operations. The time consumed by processing and analysing huge data sets may come at the expense of data quality assessment and critical interpretation. Additionally, when analysing lots of data, something is likely to go awry (the software may crash or stop), and it can be very frustrating to track the error. We herein review the most relevant issues related to tackling these challenges and problems, from the perspective of animal genomics, and provide researchers who lack extensive computing experience with guidelines that will help when processing large genomic data sets.
VALIDATION FOR THE PERMANGANATE DIGESTION OF REILLEX HPQ ANION RESIN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyser, E.
2009-09-23
The flowsheet for the digestion of Reillex™ HPQ was validated both under the traditional alkaline conditions and under strongly acidic conditions. Due to the difficulty of performing a pH adjustment in the large tank where this flowsheet must be performed, the recommended digestion conditions were changed from pH 8-10 to 8 M HNO3. Thus, no pH adjustment of the solution is required prior to performing the permanganate addition and digestion, and the need to sample the digestion tank to confirm an appropriate pH range for digestion may be avoided. Neutralization of the acidic digestion solution will be performed after completion of the resin digestion cycle. The amount of permanganate required for this type of resin (Reillex™ HPQ) was increased from 1 kg/L resin to 4 kg/L resin to reduce the amount of residual resin solids to a minimal amount (<5%). The length of digestion time at 70 °C remains unchanged at 15 hours. These parameters are not optimized but are expected to be adequate for the conditions. The flowsheet generates a significant amount of fine manganese dioxide (MnO2) solids (1.71 kg/L resin) and involves the generation of a significant liquid volume due to the low solubility of permanganate. However, since only two batches of resin (40 L each) are expected to be digested, the total waste generated is limited.
Vasta, Robert; Crandell, Ian; Millican, Anthony; House, Leanna; Smith, Eric
2017-10-13
Microphone sensor systems provide information that may be used for a variety of applications. Such systems generate large amounts of data. One concern is with microphone failure and unusual values that may be generated as part of the information collection process. This paper describes methods and a MATLAB graphical interface that provides rapid evaluation of microphone performance and identifies irregularities. The approach and interface are described. An application to a microphone array used in a wind tunnel is used to illustrate the methodology.
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential; we found that this code, in its current implementation, produces results that are offset from those of the benchmark by a significant amount, and we provide evidence of the reason.
Interferon γ limits the effectiveness of melanoma peptide vaccines.
Cho, Hyun-Il; Lee, Young-Ran; Celis, Esteban
2011-01-06
The development of effective therapeutic vaccines to generate tumor-reactive cytotoxic T lymphocytes (CTLs) continues to be a top research priority. However, in spite of some promising results, there are no clear examples of vaccines that eradicate established tumors. Most vaccines are ineffective because they generate low numbers of CTLs and because numerous immunosuppressive factors abound in tumor-bearing hosts. We designed a peptide vaccine that produces large numbers of tumor-reactive CTLs in a mouse model of melanoma. Surprisingly, CTL tumor recognition and antitumor effects decreased in the presence of interferon γ (IFNγ), a cytokine that can provide therapeutic benefit. Tumors exposed to IFNγ evade CTLs by inducing large amounts of noncognate major histocompatibility complex class I molecules, which limit T-cell activation and effector function. Our results demonstrate that peptide vaccines can eradicate large, established tumors in circumstances under which the inhibitory activities of IFNγ are curtailed.
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
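For readers unfamiliar with Cassandra's data model, the sketch below (using the DataStax Python driver) shows how variant records might be laid out and queried; the keyspace, table and column names are illustrative assumptions, not the schema evaluated in the paper.

```python
# A hedged Cassandra sketch; schema names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS genomics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
# Partition by (sample_id, chrom) so one sample's chromosome stays together;
# cluster by pos so range scans over positions are efficient.
session.execute("""
    CREATE TABLE IF NOT EXISTS genomics.variants (
        sample_id text, chrom text, pos bigint, ref text, alt text,
        PRIMARY KEY ((sample_id, chrom), pos)
    )
""")
insert = session.prepare(
    "INSERT INTO genomics.variants (sample_id, chrom, pos, ref, alt) "
    "VALUES (?, ?, ?, ?, ?)")
session.execute(insert, ("S1", "chr1", 12345, "A", "G"))
rows = session.execute(
    "SELECT pos, ref, alt FROM genomics.variants "
    "WHERE sample_id = 'S1' AND chrom = 'chr1' AND pos >= 10000")
for row in rows:
    print(row.pos, row.ref, row.alt)
cluster.shutdown()
```

Write-optimized partitioned tables of this kind are the reason column-family stores can outperform relational systems for bulk sequencer output.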
Discovering Related Clinical Concepts Using Large Amounts of Clinical Notes
Ganesan, Kavita; Lloyd, Shane; Sarkar, Vikren
2016-01-01
The ability to find highly related clinical concepts is essential for many applications, such as hypothesis generation, query expansion for medical literature search, search results filtering, ICD-10 code filtering and many others. While manually constructed medical terminologies such as SNOMED CT can surface certain related concepts, these terminologies are inadequate, as they depend on the expertise of several subject matter experts, making the terminology curation process open to geographic and language bias. In addition, these terminologies provide no quantifiable evidence of how related the concepts are. In this work, we explore an unsupervised graphical approach to mining related concepts by leveraging the sheer volume of large amounts of clinical notes. Our evaluation shows that we are able to use a data-driven approach to discover highly related concepts for various search terms, including medications, symptoms and diseases. PMID:27656096
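A small co-occurrence example conveys the graphical idea: concepts that appear in the same note are linked, and edge counts rank relatedness. This is a toy reading of the approach, not the paper's system; real pipelines would first map free text to concept codes.

```python
# Toy co-occurrence graph over clinical notes; notes are invented strings.
from collections import Counter
from itertools import combinations

notes = [
    {"metformin", "diabetes", "hba1c"},
    {"metformin", "diabetes", "neuropathy"},
    {"lisinopril", "hypertension"},
    {"diabetes", "hba1c", "insulin"},
]

edges = Counter()
for concepts in notes:
    for a, b in combinations(sorted(concepts), 2):
        edges[(a, b)] += 1              # one edge per co-occurring pair

def related(term, top=3):
    # Rank neighbors of `term` by co-occurrence count.
    scored = [(other, n) for (a, b), n in edges.items()
              for other in ((b,) if a == term else (a,) if b == term else ())]
    return sorted(scored, key=lambda t: -t[1])[:top]

print(related("diabetes"))   # e.g. [('hba1c', 2), ('metformin', 2), ...]
```

Unlike a curated terminology, the counts give quantifiable evidence of relatedness, and they grow more reliable as the note corpus grows.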
Estabrook, R W; Shet, M S; Faulkner, K; Fisher, C W
1996-11-01
A method has been developed for the commercial application of the unique oxygen chemistry catalyzed by various cytochrome P450s. This is illustrated here for the synthesis of hydroxylated steroids. This method requires the preparation of large amounts of enzymatically functional P450 proteins that can serve as catalysts and a technique for providing electrons at an economically acceptable cost. To generate large amounts of enzymatically active recombinant P450s we have engineered the cDNAs for various P450s, including bovine adrenal P450c17, by linking them to a modified cDNA for rat NADPH-P450 reductase and placing them in the plasmid pCWori+. Transformation of E. coli results in the high level expression of an enzymatically active protein that can be easily purified by affinity chromatography. Incubation of the purified enzyme with steroid in a reaction vessel containing a platinum electrode and a Ag/AgCl electrode couple poised at -650 mV, together with the electromotively active redox mediator, cobalt sepulchrate, results in the 17 alpha-hydroxylation of progesterone at rates as high as 25 nmoles of progesterone hydroxylated/min/nmole of P450. Thus, high concentrations of hydroxylated steroids can be produced with incubation conditions of hours duration without the use of costly NADPH. Similar experiments have been carried out for the generation of the 6 beta-hydroxylation product of testosterone (using a fusion protein containing human P450 3A4). It is apparent that this method is applicable to many other P450 catalyzed reactions for the synthesis of large amounts of hydroxylated steroid metabolites. The electrochemical system is also applicable to drug discovery studies for the characterization of drug metabolites.
Data acquisition system issues for large experiments
NASA Astrophysics Data System (ADS)
Siskind, E. J.
2007-09-01
This talk consists of personal observations on two classes of data acquisition ("DAQ") systems for Silicon trackers in large experiments with which the author has been concerned over the last three or more years. The first half is a classic "lessons learned" recital based on experience with the high-level debug and configuration of the DAQ system for the GLAST LAT detector. The second half is concerned with a discussion of the promises and pitfalls of using modern (and future) generations of "system-on-a-chip" ("SOC") or "platform" field-programmable gate arrays ("FPGAs") in future large DAQ systems. The DAQ system pipeline for the 864k channels of Si tracker in the GLAST LAT consists of five tiers of hardware buffers which ultimately feed into the main memory of the (two-active-node) level-3 trigger processor farm. The data formats and buffer volumes of these tiers are briefly described, as well as the flow control employed between successive tiers. Lessons learned regarding data formats, buffer volumes, and flow control/data discard policy are discussed. The continued development of platform FPGAs containing large amounts of configurable logic fabric, embedded PowerPC hard processor cores, digital signal processing components, large volumes of on-chip buffer memory, and multi-gigabit serial I/O capability permits DAQ system designers to vastly increase the amount of data preprocessing that can be performed in parallel within the DAQ pipeline for detector systems in large experiments. The capabilities of some currently available FPGA families are reviewed, along with the prospects for next-generation families of announced, but not yet available, platform FPGAs. Some experience with an actual implementation is presented, and reconciliation between advertised and achievable specifications is attempted. The prospects for applying these components to space-borne Si tracker detectors are briefly discussed.
Computational fluid dynamics study of the variable-pitch split-blade fan concept
NASA Technical Reports Server (NTRS)
Kepler, C. E.; Elmquist, A. R.; Davis, R. L.
1992-01-01
A computational fluid dynamics study was conducted to evaluate the feasibility of the variable-pitch split-blade supersonic fan concept. This fan configuration was conceived as a means to enable a supersonic fan to switch from the supersonic through-flow type of operation at high speeds to a conventional fan with subsonic inflow and outflow at low speeds. During this off-design, low-speed mode of operation, the fan would operate with a substantial static pressure rise across the blade row like a conventional transonic fan; the front (variable-pitch) blade would be aligned with the incoming flow, and the aft blade would remain fixed in the position set by the supersonic design conditions. Because of these geometrical features, this low-speed configuration would inherently have a large amount of turning and, thereby, the potential for a large total pressure increase in a single stage. Such a high-turning blade configuration is prone to flow separation; it was hoped that the channeling of the flow between the blades would act like a slotted wing and help alleviate this problem. A total of 20 blade configurations representing various supersonic and transonic configurations were evaluated using a Navier-Stokes CFD program called ADAPTNS, chosen for its adaptive grid features. The flow fields generated by this computational procedure were processed by a data reduction program which calculated average flow properties and simulated fan performance. These results were employed to make quantitative comparisons and evaluations of blade performance. The supersonic split-blade configurations generated performance comparable to a single-blade supersonic, through-flow fan configuration. Simulated rotor total pressure ratios of the order of 2.5 or better were achieved for Mach 2.0 inflow conditions. The corresponding fan efficiencies were approximately 75 percent or better. The transonic split-blade configurations, with their large amounts of turning, achieved simulated total pressure ratios of 3.0 or better with subsonic inflow conditions. However, these configurations had large losses and fan efficiencies only in the 70-percent range, with large separated regions and low-velocity wakes. Additional turning and diffusion of this flow in a subsequent stator row would probably be very inefficient. The high total pressure ratios indicated by the rotor performance would be substantially reduced by the stators, and the stage efficiency would be substantially lower. Such performance leaves this dual-mode fan concept less attractive than originally postulated.
Switch: a planning tool for power systems with large shares of intermittent renewable energy.
Fripp, Matthias
2012-06-05
Wind and solar power are highly variable, so it is unclear how large a role they can play in future power systems. This work introduces a new open-source electricity planning model--Switch--that identifies the least-cost strategy for using renewable and conventional generators and transmission in a large power system over a multidecade period. Switch includes an unprecedented amount of spatial and temporal detail, making it possible to address a new type of question about the optimal design and operation of power systems with large amounts of renewable power. A case study of California for 2012-2027 finds that there is no maximum possible penetration of wind and solar power--these resources could potentially be used to reduce emissions 90% or more below 1990 levels without reducing reliability or severely raising the cost of electricity. This work also finds that policies that encourage customers to shift electricity demand to times when renewable power is most abundant (e.g., well-timed charging of electric vehicles) could make it possible to achieve radical emission reductions at moderate costs.
Thermal analysis and management of lithium-titanate batteries
NASA Astrophysics Data System (ADS)
Giuliano, Michael R.; Advani, Suresh G.; Prasad, Ajay K.
2011-08-01
Battery electric vehicles and hybrid electric vehicles demand batteries that can store large amounts of energy in addition to accommodating large charge and discharge currents without compromising battery life. Lithium-titanate batteries have recently become an attractive option for this application. High current thresholds allow these cells to be charged quickly as well as supply the power needed to drive such vehicles. These large currents generate substantial amounts of waste heat due to loss mechanisms arising from the cell's internal chemistry and ohmic resistance. During normal vehicle operation, an active cooling system must be implemented to maintain a safe cell temperature and improve battery performance and life. This paper outlines a method to conduct thermal analysis of lithium-titanate cells under laboratory conditions. Thermochromic liquid crystals were implemented to instantaneously measure the entire surface temperature field of the cell. The resulting temperature measurements were used to evaluate the effectiveness of an active cooling system developed and tested in our laboratory for the thermal management of lithium-titanate cells.
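The heat balance in this abstract (ohmic generation against active cooling) can be captured by a lumped-capacitance model. This is a minimal sketch; every parameter value below is an illustrative assumption, not measured lithium-titanate data:

```python
# Lumped-capacitance sketch of cell heating under ohmic loss and active
# cooling: m*c*dT/dt = I^2*R - h*A*(T - T_coolant).

m_c    = 900.0    # heat capacity of the cell, J/K (assumed)
R      = 0.002    # internal resistance, ohm (assumed)
h_A    = 1.5      # cooling conductance h*A, W/K (assumed)
T_cool = 25.0     # coolant temperature, deg C
I      = 100.0    # discharge current, A (assumed)

T, dt = 25.0, 1.0                     # initial temperature, time step (s)
for step in range(3600):              # one hour of operation
    q_gen  = I**2 * R                 # ohmic waste heat, W
    q_cool = h_A * (T - T_cool)       # heat removed by the cooling system, W
    T += (q_gen - q_cool) * dt / m_c  # explicit Euler update
print(f"cell temperature after 1 h ~ {T:.1f} C")   # approaches ~38.3 C
```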
Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D
2017-11-01
Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 American Association for Cancer Research.
NASA Astrophysics Data System (ADS)
Yan, Yulong; Yang, Chao; Peng, Lin; Li, Rumei; Bai, Huiling
2016-10-01
Facing large electricity demand, thermal power generation remains the main source of electricity supply in China, accounting for 78.19% of total electricity production in 2013. Three types of thermal power plants, including a coal-fired power plant, a coal gangue-fired power plant and a biomass-fired power plant, were chosen to survey the source profile, chemical reactivity and emission factor of VOCs during thermal power generation. The most abundant compounds generated during coal- and coal gangue-fired power generation were 1-butene, styrene, n-hexane and ethylene, while those during biomass-fired power generation were propene, 1-butene, ethyne and ethylene. The benzene/toluene (B/T) ratios during thermal power generation in this study were 0.8-2.6, which can be considered characteristic of coal and biomass burning. The field-tested VOC emission factors for the coal-, coal gangue- and biomass-fired power plants were determined to be 0.88, 0.38 and 3.49 g/GJ (equivalently 0.023, 0.005 and 0.057 g/kg), and the corresponding amounts of VOCs emitted in 2013 were 44.07, 0.08 and 0.45 Gg, respectively. Previous emission inventories, which calculated VOC emissions using earlier emission factors, may therefore overestimate the amount of VOCs emitted from thermal power generation in China.
Examining the Impact of Culture and Human Elements on OLAP Tools Usefulness
ERIC Educational Resources Information Center
Sharoupim, Magdy S.
2010-01-01
The purpose of the present study was to examine the impact of culture and human-related elements on the On-line Analytical Processing (OLAP) usability in generating decision-making information. The use of OLAP technology has evolved rapidly and gained momentum, mainly due to the ability of OLAP tools to examine and query large amounts of data sets…
A Study on the Effects of Multiple Goal Orientation on Learning Motivation and Learning Behaviors
ERIC Educational Resources Information Center
Li, Jie-Yi; Shieh, Chich-Jen
2016-01-01
In such an era when the value is constantly restructured and information is rapidly changed, education reform should cater for new challenges. The role and function of teachers is encountering a new change. Coping with current information generation, people with high self-efficacy of selecting and mastering large amount of information and higher…
USDA-ARS?s Scientific Manuscript database
The increasing use of disposable nonwovens made of petroleum-based materials generates a large amount of non-biodegradable, solid waste in the environment. As an effort to enhance the usage of biodegradable cotton in nonwovens, this study analyzed the biodegradability of mechanically pre-cleaned gr...
Reduce--recycle--reuse: guidelines for promoting perioperative waste management.
Laustsen, Gary
2007-04-01
The perioperative environment generates large amounts of waste, which negatively affects local and global ecosystems. To manage this waste, health care facility leaders must focus on identifying correctable issues, work with relevant stakeholders to promote solutions, and adopt systematic procedural changes. Nurses and managers can moderate negative environmental effects by promoting reduction, recycling, and reuse of materials in the perioperative setting.
Economic availability of logging residue in the Douglas-fir region.
Thomas C. Adams
1976-01-01
Large amounts of logging residue are generated each year in timber harvest operations in old-growth forests in the Douglas-fir region of western Washington and western Oregon. Economic availability of each piece, however, depends on its size, condition, location, and on the existence of markets, with an adequate price to cover costs. An estimated 185.8 million cubic...
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, a comparison with in-house manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.
Motor Controller System For Large Dynamic Range of Motor Operation
NASA Technical Reports Server (NTRS)
Howard, David E. (Inventor); Alhorn, Dean C. (Inventor); Smith, Dennis A. (Inventor); Dutton, Kenneth R. (Inventor); Paulson, Mitchell Scott (Inventor)
2006-01-01
A motor controller system uses a rotary sensor with a plurality of signal conditioning units coupled to the rotary sensor. Each of these units, which is associated with a particular range of motor output shaft rotation rates, generates a feedback signal indicative of the position of the motor's output shaft. A controller (i) converts a selected motor output shaft rotation rate to a corresponding incremental amount of rotational movement for a selected fixed time period, (ii) selects, at periodic completions of the selected fixed time period, the feedback signal from one of the signal conditioning units for which the particular range of motor output shaft rotation rates associated therewith encompasses the selected motor output shaft rotation rate, and (iii) generates a motor drive signal based on a difference between the incremental amount of rotational movement and the feedback signal from the selected one of the signal conditioning units.
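A hedged sketch of this range-switched scheme, with invented band edges, resolutions, gain, and a trivial first-order plant; it illustrates steps (i)-(iii), not the patented implementation:

```python
# Toy model of the controller's three steps. All numbers are invented.
PERIOD = 0.01   # fixed control period, s (assumed)
KP = 5.0        # proportional gain (assumed)

# Each conditioning unit covers one band of shaft rates and reports position
# quantized at a different resolution (counts per revolution).
UNITS = [(0.0, 10.0, 4096), (10.0, 100.0, 512), (100.0, 1e9, 64)]

def feedback(position, rate):
    """(ii) Read position from the unit whose band covers the commanded rate."""
    for lo, hi, counts_per_rev in UNITS:
        if lo <= rate < hi:
            return round(position * counts_per_rev) / counts_per_rev
    raise ValueError("rate outside all bands")

rate = 50.0                      # commanded shaft rate, rev/s
setpoint = actual = 0.0
for _ in range(500):             # 5 s of control
    setpoint += rate * PERIOD    # (i) incremental movement per fixed period
    error = setpoint - feedback(actual, rate)
    drive = KP * error           # (iii) drive signal from the difference
    actual += drive * PERIOD     # crude first-order motor response (lags)
print(f"commanded {setpoint:.1f} rev, reached {actual:.1f} rev")
```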
NASA Astrophysics Data System (ADS)
Andhavarapu, A.; King, W.; Lindsay, A.; Byrns, B.; Knappe, D.; Fonteno, W.; Shannon, S.
2014-10-01
Plasma-source-generated nitrogen fertilizer is compared to conventional nitrogen fertilizers in water for plant growth. Root and shoot sizes and weights are used to examine differences between plant treatment groups. With a simple coaxial structure creating a large-volume atmospheric glow discharge, a 162 MHz generator drives the air plasma. The VHF plasma source emits a steady-state glow; the high drive frequency is believed to inhibit the glow-to-arc transition for non-thermal discharge generation. To create the plasma activated water (PAW) solutions used for plant treatment, the discharge is held over distilled water until a 100 ppm nitrate aqueous concentration is achieved. The discharge is used to incorporate nitrogen species into aqueous solution, which is used to fertilize radishes, marigolds, and tomatoes. In a four-week experiment, these plants are watered with four different solutions: tap water, dissolved ammonium nitrate DI water, dissolved sodium nitrate DI water, and PAW. The ammonium nitrate solution has the same amount of total nitrogen as PAW; the sodium nitrate solution has the same amount of nitrate as PAW. T-tests are used to determine statistical significance of plant group growth differences. PAW fertilization chemical mechanisms are presented.
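The statistical step is a standard two-sample comparison. A minimal sketch using Welch's t-test on invented shoot-weight values (grams), purely to show the procedure rather than any result of the study:

```python
# Two-group comparison of the kind described above; the data are hypothetical.
from scipy import stats

tap_water = [1.9, 2.1, 2.0, 1.8, 2.2, 2.0]   # shoot weights, g (invented)
paw       = [2.4, 2.6, 2.3, 2.7, 2.5, 2.4]   # shoot weights, g (invented)

t, p = stats.ttest_ind(paw, tap_water, equal_var=False)   # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 -> significant difference
```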
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
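The heart of such a partitioned scheme is the 2x2 block inverse built on the Schur complement; applied recursively to the leading block, it only ever needs blocks of the matrix in core. A toy-scale sketch (the partitioned_inverse helper and the matrix are invented for illustration):

```python
# Partitioned inversion of a symmetric positive definite matrix via the
# Schur complement -- the idea behind inverting a large matrix while holding
# only blocks of it in memory. Block sizes here are toy-scale.
import numpy as np

def partitioned_inverse(M, k):
    """Invert symmetric positive definite M using a 2x2 block partition."""
    A, B = M[:k, :k], M[:k, k:]
    C    = M[k:, k:]
    Ainv = np.linalg.inv(A)              # recurse here for deeper partitions
    S    = C - B.T @ Ainv @ B            # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    top_left  = Ainv + Ainv @ B @ Sinv @ B.T @ Ainv
    top_right = -Ainv @ B @ Sinv
    return np.block([[top_left, top_right], [top_right.T, Sinv]])

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
M = X @ X.T + 6 * np.eye(6)              # symmetric positive definite
assert np.allclose(partitioned_inverse(M, 3), np.linalg.inv(M))
```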
Production of brown and black pigments by using flotation waste from copper slag.
Ozel, Emel; Turan, Servet; Coruh, Semra; Ergun, Osman Nuri
2006-04-01
One of the major problems in copper-producing countries is the treatment of the large amount of copper slag or copper flotation waste generated from copper slag which contains significant amounts of heavy metals such as Cu, Zn, Pb and Co. Dumping or disposal of such large quantities of flotation waste from copper slag causes environmental and space problems. In this study, the treatment of flotation waste from copper slag by a thermal method and its use as an iron source in the production of inorganic brown and black pigments that are used in the ceramic industry were investigated. The pigments were produced by calcining different amounts of flotation waste and chromite, Cr2O3, ZnO and CoO mixtures. The pigments obtained were added to transparent ceramic glazes and porcelainized tile bodies. Their colours were defined by L*a*b* measurements with a spectrophotometer. The results showed that flotation waste from copper slag could be used as an iron source to produce brown and black pigments in both ceramic body and glazes.
Dewil, Raf; Baeyens, Jan; Appels, Lise
2007-06-18
Power plant or cement kiln co-incineration are important disposal routes for the large amounts of waste activated sludge (WAS) which are generated annually. The presence of significant amounts of heavy metals in the sludge however poses serious problems, since they are partly emitted with the flue gases (and collected in the flue gas dedusting) and partly incorporated in the ashes of the incinerator: in both cases, the disposal or reuse of the fly ash and bottom ashes can be jeopardized, since subsequent leaching in landfill disposal can occur, or their "pozzolanic" incorporation in cement cannot be applied. The present paper studies some physicochemical methods for reducing the heavy metal content of WAS. The techniques used include acid and alkaline thermal hydrolysis and Fenton's peroxidation. These degrade the extracellular polymeric substances, which provide binding sites for a large amount of heavy metals, so that the metals are released into the sludge water. The behaviour of several heavy metals (Cd, Cr, Cu, Hg, Pb, Ni, Zn) was assessed in laboratory tests, the results of which show a significant reduction of most heavy metals.
Reactive hydroxyl radical-driven oral bacterial inactivation by radio frequency atmospheric plasma
NASA Astrophysics Data System (ADS)
Kang, Sung Kil; Choi, Myeong Yeol; Koo, Il Gyo; Kim, Paul Y.; Kim, Yoonsun; Kim, Gon Jun; Mohamed, Abdel-Aleam H.; Collins, George J.; Lee, Jae Koo
2011-04-01
We demonstrated bacterial (Streptococcus mutans) inactivation by a radio frequency power driven atmospheric pressure plasma torch with H2O2 entrained in the feedstock gas. Optical emission spectroscopy identified substantial excited state •OH generation inside the plasma, and relative •OH formation was verified by optical absorption. The bacterial inactivation rate increased with increasing •OH generation and reached a maximum 5-log10 reduction with 0.6% H2O2 vapor. Generation of large amounts of toxic ozone is a drawback of plasma bacterial inactivation; thus it is significant that the ozone concentration falls within recommended safe allowable levels with the addition of H2O2 vapor to the plasma.
Law, B.E.; Spencer, C.W.; Bostick, N.H.
1980-01-01
The onset of overpressuring occurs at c. 3,500 m, near the base of the Upper Cretaceous Lance Formation. The development of overpressuring may involve several processes; however, interpretation of the available information indicates that active generation of large amounts of wet gas is one of the more important processes. The present minimum temperature at the top of overpressuring is at least 88°C. The preservation of abnormally high pressures is due to presently active generation of gas in a thick interval of discontinuous, very low-permeability shales, siltstones, and sandstones. - from Authors
Magnetic circuit modifications in resonant vibration harvesters
NASA Astrophysics Data System (ADS)
Szabo, Zoltan; Fiala, Pavel; Dohnal, Premysl
2018-01-01
The paper discusses the conclusions obtained from research centered on a vibration-powered milli- or micro-generator (MG) operating as a harvester to yield the maximum amount of energy transferred by the vibration of an independent system. The investigation expands on the results proposed within papers that theoretically define the properties characterizing the basic configurations of a generator based on Faraday's law of induction. We compared two basic principles of closing a magnetic circuit that fully or partially utilizes a ferromagnetic material, and a large number of generator design solutions were examined and tested. In the given context, the article provides a compact survey of the rules facilitating energy transformation and the design of harvesters.
Big Data and Analytics in Healthcare.
Tan, S S-L; Gao, G; Koch, S
2015-01-01
This editorial is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". The amount of data being generated in the healthcare industry is growing at a rapid rate. This has generated immense interest in leveraging the availability of healthcare data (and "big data") to improve health outcomes and reduce costs. However, the nature of healthcare data, and especially big data, presents unique challenges for processing and analysis. This Focus Theme aims to disseminate some novel approaches to address these challenges. More specifically, approaches ranging from efficient methods of processing large clinical data to predictive models that could generate better predictions from healthcare data are presented.
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
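To make the storage discussion concrete, here is a hedged sketch using the DataStax cassandra-driver package; the keyspace, table, columns, and local contact point are invented for illustration and are not the schema evaluated in the paper:

```python
# Write-oriented genomic storage sketch with the DataStax Python driver.
# All names below (keyspace, table, columns) are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])      # assumes a local Cassandra node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS genomics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS genomics.reads (
        sample_id text, chrom text, pos bigint, sequence text, quality text,
        PRIMARY KEY ((sample_id, chrom), pos)
    )
""")

insert = session.prepare(
    "INSERT INTO genomics.reads (sample_id, chrom, pos, sequence, quality) "
    "VALUES (?, ?, ?, ?, ?)")
session.execute(insert, ("S1", "chr1", 101, "ACGTACGT", "IIIIIIII"))

rows = session.execute(
    "SELECT pos, sequence FROM genomics.reads "
    "WHERE sample_id='S1' AND chrom='chr1'")
for row in rows:
    print(row.pos, row.sequence)
```

The partition key groups a sample's reads per chromosome so writes append within a partition, which plays to the write-optimized design that makes Cassandra attractive for this workload.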
ERIC Educational Resources Information Center
Thinyane, Hannah
2010-01-01
In 2001 Marc Prensky coined the phrase "digital natives" to refer to the new generation of students who have grown up surrounded by technology. His companion papers spurred large amounts of research, debating changes that are required to curricula and pedagogical models to cater for the changes in the student population. This article…
Integrative Genomics and Computational Systems Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Huang, Yufei; Zhang, Bing
The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still in its early stages. There exists a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.
Generation M[superscript 2]: Media in the Lives of 8- to 18-Year-Olds
ERIC Educational Resources Information Center
Rideout, Victoria J.; Foehr, Ulla G.; Roberts, Donald F.
2010-01-01
This study is one of the largest and most comprehensive publicly available sources of information on the amount and nature of media use among American youth: (1) It includes a large national sample of more than 2,000 young people from across the country; (2) It covers children from ages 8 to 18, to track changes from childhood through the…
Woody residues and solid waste wood available for recovery in the United States, 2002
David B. McKeever; Robert H. Falk
2004-01-01
Large amounts of woody residues and solid wood waste are generated annually in the United States from the extraction of timber from forests, from forestry cultural operations, in the conversion of forest land to nonforest uses, in the initial processing of roundwood timber into usable products, in the construction and demolition of buildings and structures, and in the...
NASA Astrophysics Data System (ADS)
Latinović, T. S.; Preradović, D. M.; Barz, C. R.; Latinović, M. T.; Petrica, P. P.; Pop-Vadean, A.
2016-08-01
The amount of data at the global level has grown exponentially. Along with this phenomenon, we need new units of measure such as the exabyte, zettabyte, and yottabyte to express the amount of data. The growth of data creates a situation in which classic systems for the collection, storage, processing, and visualization of data are losing the battle with the volume, velocity, and variety of data that is generated continuously. Much of this data is created by the Internet of Things (IoT): cameras, satellites, cars, GPS navigation, etc. The challenge is to come up with new technologies and tools for the management and exploitation of these large amounts of data. Big Data has been a hot topic in IT circles in recent years; it is also recognized in the business world and, increasingly, in public administration. This paper proposes an ontology of big data analytics and examines how to enhance business intelligence through big data analytics as a service by presenting a big data analytics services-oriented architecture. This paper also discusses the interrelationship between business intelligence and big data analytics. The proposed approach might facilitate the research and development of business analytics, big data analytics, and business intelligence as well as intelligent agents.
The supernova-regulated ISM. III. Generation of vorticity, helicity, and mean flows
NASA Astrophysics Data System (ADS)
Käpylä, M. J.; Gent, F. A.; Väisälä, M. S.; Sarson, G. R.
2018-03-01
Context. The forcing of interstellar turbulence, driven mainly by supernova (SN) explosions, is irrotational in nature, but the development of significant amounts of vorticity and helicity, accompanied by large-scale dynamo action, has been reported. Aims. Several earlier investigations examined vorticity production in simpler systems; here all the relevant processes can be considered simultaneously. We also investigate the mechanisms for the generation of net helicity and large-scale flow in the system. Methods: We use a three-dimensional, stratified, rotating and shearing local simulation domain of size 1 × 1 × 2 kpc³, forced with SN explosions occurring at a rate typical of the solar neighbourhood in the Milky Way. In addition to the nominal simulation run with realistic Milky Way parameters, we vary the rotation and shear rates, but keep the absolute value of their ratio fixed. Reversing the sign of shear vs. rotation allows us to separate the rotation- and shear-generated contributions. Results: As in earlier studies, we find the generation of significant amounts of vorticity, the rotational flow comprising on average 65% of the total flow. The vorticity production can be related to the baroclinicity of the flow, especially in the regions of hot, dilute clustered supernova bubbles. In these regions, the vortex stretching acts as a sink of vorticity. In denser, compressed regions, the vortex stretching amplifies vorticity, but remains sub-dominant to baroclinicity. The net helicities produced by rotation and shear are of opposite signs for physically motivated rotation laws, with the solar neighbourhood parameters resulting in the near cancellation of the total net helicity. We also find the excitation of oscillatory mean flows, the strength and oscillation period of which depend on the Coriolis and shear parameters; we interpret these as signatures of the anisotropic-kinetic-α (AKA) effect. We use the method of moments to fit for the turbulent transport coefficients, and find α_AKA values of the order of 3-5 km s⁻¹. Conclusions: Even in a weakly rotationally and shear-influenced system, small-scale anisotropies can lead to significant effects at large scales. Here we report on two consequences of such effects, namely the generation of net helicity and the emergence of large-scale flows by the AKA effect, the latter detected for the first time in a direct numerical simulation of a realistic astrophysical system.
Brumin, Marina; Levy, Maggie
2012-01-01
The whitefly Bemisia tabaci is a cosmopolitan insect pest that harbors Portiera aleyrodidarum, the primary obligatory symbiotic bacterium, and several facultative secondary symbionts. Secondary symbionts in B. tabaci are generally associated with the bacteriome, ensuring their vertical transmission; however, Rickettsia is an exception and occupies most of the body cavity, except the bacteriome. The mode of Rickettsia transfer between generations and its subcellular localization in insect organs have not been investigated. Using electron and fluorescence microscopy, we show that Rickettsia infects the digestive, salivary, and reproductive organs of the insect; however, it was not observed in the bacteriome. Rickettsia invades the oocytes during early developmental stages and resides in follicular cells and cytoplasm; it is mostly excluded when the egg matures; however, some bacterial cells remain in the egg, ensuring their transfer to subsequent generations. Rickettsia was localized to testicles and the spermatheca, suggesting a horizontal transfer between males and females during mating. The bacterium was further observed in large amounts in midgut cells, concentrating in vacuole-like structures, and was located in the hemolymph, specifically in exceptionally large amounts around bacteriocytes and in fat bodies. Organs further infected by Rickettsia included the primary salivary glands and stylets, sites of possible secretion of the bacterium outside the whitefly body. The close association between Rickettsia and the B. tabaci digestive system might be important for digestive purposes. The vertical transmission of Rickettsia to subsequent generations occurs via the oocyte and not, like other secondary symbionts, via the bacteriome. PMID:22660706
Joseph, Aneeta Mary; Snellings, Ruben; Van den Heede, Philip; Matthys, Stijn
2018-01-01
Huge amounts of waste are being generated, and even though the incineration process reduces the mass and volume of waste to a large extent, massive amounts of residues still remain. On average, out of 1.3 billion tons of municipal solid waste generated per year, around 130 million and 2.1 million tons are incinerated in the world and in Belgium, respectively. Around 400 kT of bottom ash residues are generated in Flanders, of which only 102 kT are utilized there; the rest is exported or landfilled due to non-conformity with environmental regulations. Landfilling makes the valuable resources in the residues unavailable and results in more primary raw materials being used, increasing mining and related hazards. Identifying and employing the right pre-treatment technique for the highest-value application is the key to attaining a circular economy. We reviewed the present pre-treatment and utilization scenarios in Belgium, and the advancements in research around the world toward maximum utilization are reported in this paper. Uses of the material in the cement industry as a binder and as cement raw meal replacement are identified as possible effective utilization options for large quantities of bottom ash. Pre-treatment techniques that could facilitate this use are also discussed. With all the research evidence available, there is now a need for combined efforts from the incineration and cement industries toward technical and economic optimization of the process flow. PMID:29337887
Lu, Yunlong; Wei, Liqin; Wang, Tai
2015-01-01
The development of sperm cells (SCs) from microspores involves a set of finely regulated molecular and cellular events and the coordination of these events. The mechanisms underlying these events and their interconnections remain a major challenge. Systems analysis of genome-wide molecular networks and functional modules with high-throughput "omics" approaches is crucial for understanding the mechanisms; however, such study has been hindered by the difficulty of isolating large numbers of cells of different types, especially generative cells (GCs), from the pollen. Here, we optimized the conditions of tomato pollen germination and pollen tube growth to allow for long-term growth of pollen tubes in vitro, with SCs generated in the tube. Using this culture system, we developed methods for isolating GCs, SCs and vegetative cell nuclei (VN) from just-germinated tomato pollen grains and growing pollen tubes, and for their purification by Percoll density gradient centrifugation. The purity and viability of isolated GCs and SCs were confirmed by microscopy examination and fluorescein diacetate staining, respectively, and the integrity of VN was confirmed by propidium iodide staining. We could obtain about 1.5 million GCs and 2.0 million SCs, each from 180 mg of initiated pollen grains, and 10 million VN from 270 mg of initiated pollen grains germinated in vitro, in each experiment. These methods provide the necessary preconditions for systems biology studies of SC development and differentiation in higher plants.
Quantities of Arsenic-Treated Wood in Demolition Debris Generated by Hurricane Katrina
Dubey, Brajesh; Solo-Gabriele, Helena M.; Townsend, Timothy G.
2008-01-01
The disaster debris from Hurricane Katrina is one of the largest in terms of volume and economic loss in American history. One of the major components of the demolition debris is wood waste, of which a significant proportion is treated with preservatives, including preservatives containing arsenic. As a result of the large-scale destruction of treated wood structures such as electrical poles, fences, decks, and homes, a considerable amount of treated wood, and consequently arsenic, will be disposed of as disaster debris. In this study an effort was made to estimate the quantity of arsenic disposed of through the demolition debris generated in the Louisiana and Mississippi area by Hurricane Katrina. Of the 72 million cubic meters of disaster debris generated, roughly 12 million cubic meters were in the form of construction and demolition wood, resulting in an estimated 1740 metric tons of arsenic disposed. Management of disaster debris should consider the relatively large quantities of arsenic associated with pressure-treated wood. PMID:17396637
An algorithm of discovering signatures from DNA databases on a computer cluster.
Lee, Hsiao Ping; Sheu, Tzu-Fang
2014-10-05
Signatures are short sequences that are unique and not similar to any other sequence in a database; they can be used as the basis to identify different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entirety of a database to be loaded in memory, restricting the amount of data they can process and making them unable to handle large databases. Also, those algorithms use sequential models and have slow discovery speeds, meaning that their efficiency can be improved. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to solve the problem posed to the existing algorithms, where they are unable to process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases such as the human whole-genome EST database, which previously could not be processed by the existing algorithms. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
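A toy sketch of the divide-and-conquer idea, under a simplified uniqueness criterion: count k-mers chunk by chunk so the whole database never sits in memory, then keep k-mers occurring in exactly one sequence. Real signature discovery also enforces mismatch-tolerant dissimilarity, which is omitted here:

```python
# Chunked k-mer signature sketch; chunks could be farmed out to cluster nodes.
from collections import Counter

def kmers(seq, k):
    """Distinct k-mers of one sequence (a set, so repeats count once)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def signatures(db_chunks, k):
    counts = Counter()
    for chunk in db_chunks:            # each chunk is processed independently
        for seq in chunk:
            counts.update(kmers(seq, k))
    return {km for km, n in counts.items() if n == 1}   # unique to one sequence

chunks = [["ACGTACGGT", "TTACGTAAC"], ["GGGTACGTT"]]    # invented mini-database
print(sorted(signatures(chunks, 5)))
```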
Cozzolino, Daniel
2015-03-30
Vibrational spectroscopy encompasses a number of techniques and methods, including ultraviolet, visible, Fourier transform infrared or mid infrared, near infrared and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated, in order to interpret the spectra in a meaningful way and to develop a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.
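A sketch of the multivariate step: compressing a samples-by-wavenumbers spectral matrix into a few latent components with PCA. The data below are synthetic noise standing in for real absorbance spectra, and PCA is only one of the chemometric tools the paper has in view:

```python
# Dimensionality reduction of a spectral fingerprint matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
spectra = rng.standard_normal((40, 800))   # 40 wines x 800 wavenumbers (synthetic)

pca = PCA(n_components=5)
scores = pca.fit_transform(spectra)        # 40 x 5 score matrix for modelling
print(scores.shape, pca.explained_variance_ratio_.round(3))
```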
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
Optimized energy harvesting materials and generator design
NASA Astrophysics Data System (ADS)
Graf, Christian; Hitzbleck, Julia; Feller, Torsten; Clauberg, Karin; Wagner, Joachim; Krause, Jens; Maas, Jürgen
2013-04-01
Electroactive polymers are soft capacitors made of thin elastic and electrically insulating films coated with compliant electrodes, offering a large amount of deformation. They can either be used as actuators by applying an electric charge, or as energy converters based on the electrostatic principle. These unique properties enable the industrial development of highly efficient and environmentally sustainable energy converters, which opens up the possibility to further exploit large renewable and inexhaustible energy sources like wind and water that are otherwise widely unused. Compared to other electroactive polymer materials, polyurethanes, whose formulations have been systematically modified and optimized for energy harvesting applications, have certain advantages over silicones and acrylates. The inherently higher dipole content results in a significantly increased permittivity, and the dielectric breakdown strength is higher too, whereby the overall specific energy, a measure of the energy gain, is better by at least a factor of ten, i.e., more than ten times the energy can be gained from the same amount of material. In order to reduce conduction losses on the electrode during charging and discharging, a highly conductive bidirectionally stretchable electrode has been developed. Other important material parameters like stiffness and bulk resistivity have been optimized to fit the requirements. To realize high-power energy harvesting systems, substantial amounts of electroactive polymer material are necessary, as well as a smart mechanical and electrical design of the generator. Here we report on different measures to evaluate and improve electroactive polymer materials for energy harvesting, e.g., by reducing defect occurrence and improving electrode behavior.
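The claimed factor-of-ten advantage follows from the electrostatic energy density of a dielectric, u = (1/2) * eps0 * epsr * E^2, where permittivity and breakdown field multiply. A back-of-envelope sketch; the permittivities and breakdown fields below are illustrative assumptions, not measured values for these materials:

```python
# Energy-density comparison of two dielectric elastomers (invented numbers).
E0 = 8.854e-12                                  # vacuum permittivity, F/m

def energy_density(eps_r, e_field):
    """Electrostatic energy density, J/m^3."""
    return 0.5 * E0 * eps_r * e_field**2

silicone     = energy_density(eps_r=3.0,  e_field=5e7)   # assumed values
polyurethane = energy_density(eps_r=12.0, e_field=8e7)   # assumed values
print(f"specific-energy ratio ~ {polyurethane / silicone:.0f}x")   # ~10x
```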
ACToR - Aggregated Computational Toxicology Resource ...
There are too many uncharacterized environmental chemicals to test with current in vivo protocols, so predictive in vitro screening assays must be developed that can be used to prioritize chemicals for detailed testing. The ToxCast program requires large amounts of data, both in vitro assays (mainly generated by the ToxCast program) and in vivo data, to develop and validate predictive signatures. ACToR is compiling both sets of data for use in predictive algorithms.
Selected Papers on Low-Energy Antiprotons and Possible Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert
1998-09-19
The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.
The collapse of the World Trade Center (WTC) on September 11, 2001 generated large amounts of dust and smoke that settled in the surrounding indoor and outdoor environments in southern Manhattan. Sixteen dust samples were collected from undisturbed locations inside two unclean...
Transparency of an instantaneously created electron-positron-photon plasma
NASA Astrophysics Data System (ADS)
Bégué, D.; Vereshchagin, G. V.
2014-03-01
The problem of the expansion of a relativistic plasma generated when a large amount of energy is released in a small volume has been considered by many authors. We use the analytical solution of Bisnovatyi-Kogan and Murzina for the spherically symmetric relativistic expansion. The light curves and the spectra from transparency of an electron-positron-photon plasma are obtained. We compare our results with the work of Goodman.
In Silico Design of Smart Binders to Anthrax PA
2012-09-01
nanosecond (ns) molecular dynamics simulation in the NPT ensemble (constant particle number, pressure, and temperature) at 300 K, with the CHARMM force...protective antigen (PA). Before the docking runs, the DS23 peptide was simulated using molecular dynamics to generate an ensemble of structures...structure), we do not see a large amount of structural change when using molecular dynamics after Rosetta docking. We note that this RMSD does not take
Regional water consumption for hydro and thermal electricity generation in the United States
Lee, Uisung; Han, Jeongwoo; Elgowainy, Amgad; ...
2017-05-18
Water is an essential resource for most electric power generation technologies. Thermal power plants typically require a large amount of cooling water whose evaporation is regarded to be consumed. Hydropower plants result in evaporative water loss from the large surface areas of the storing reservoirs. This paper estimated the regional water consumption factors (WCFs) for thermal and hydro electricity generation in the United States, because the WCFs of these power plants vary by region and water supply and demand balance are of concern in many regions. For hydropower, total WCFs were calculated using a reservoir's surface area, state-level water evaporation, and background evapotranspiration. Then, for a multipurpose reservoir, a fraction of its WCF was allocated to hydropower generation based on the share of the economic valuation of hydroelectricity among benefits from all purposes of the reservoir. For thermal power plants, the variations in WCFs by type of cooling technology, prime mover technology, and by region were addressed. The results show that WCFs for electricity generation vary significantly by region. Finally, the generation-weighted average WCFs of thermoelectricity and hydropower are 1.25 (range of 0.18–2.0) and 16.8 (range of 0.67–1194) L/kWh, respectively, and the generation-weighted average WCF by the U.S. generation mix in 2015 is estimated at 2.18 L/kWh.
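The closing sentence is a weighted average. A sketch reproducing the arithmetic with the paper's two WCFs and an assumed 2015 thermal/hydro generation split (the actual mix has more categories, which is why this is only approximate):

```python
# Generation-weighted water consumption factor; the generation figures are
# illustrative assumptions, not the paper's underlying data.
wcf = {"thermal": 1.25, "hydro": 16.8}     # L/kWh, from the abstract
gen = {"thermal": 3900.0, "hydro": 250.0}  # TWh in 2015, assumed split

total = sum(gen.values())
avg = sum(wcf[k] * gen[k] for k in wcf) / total
print(f"generation-weighted WCF ~ {avg:.2f} L/kWh")   # close to the 2.18 reported
```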
NASA Astrophysics Data System (ADS)
Di Giulio, R.; Maietti, F.; Piaia, E.; Medici, M.; Ferrari, F.; Turillazzi, B.
2017-02-01
The generation of high quality 3D models can still be very time-consuming and expensive, and the outcome of digital reconstructions is frequently provided in formats that are not interoperable and therefore cannot be easily accessed. This challenge is even more crucial for complex architectures and large heritage sites, which involve a large amount of data to be acquired, managed and enriched by metadata. In this framework, the ongoing EU funded project INCEPTION - Inclusive Cultural Heritage in Europe through 3D semantic modelling - proposes a workflow aimed at achieving efficient 3D digitization methods, post-processing tools for enriched semantic modelling, and web-based solutions and applications to ensure wide access for experts and non-experts. In order to face these challenges and to begin solving the issue of the large amount of captured data and the time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. Its purpose is to guide the processes of digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.
Hexamethyldisilazane Removal with Mesoporous Materials Prepared from Calcium Fluoride Sludge.
Kao, Ching-Yang; Lin, Min-Fa; Nguyen, Nhat-Thien; Tsai, Hsiao-Hsin; Chang, Luh-Maan; Chen, Po-Han; Chang, Chang-Tang
2018-05-01
A large amount of calcium fluoride sludge is generated by the semiconductor industry every year, and treating VOCs with rotor concentrators and thermal oxidizers requires high fuel consumption. A mesoporous adsorbent prepared from calcium fluoride sludge was therefore used for VOC treatment. The semiconductor industry employs HMDS (hexamethyldisilazane) to promote the adhesion of photoresist to oxide(s); it forms silicon dioxide, which blocks porous adsorbents. The adsorption of HMDS was tested with mesoporous silica materials synthesized from calcium fluoride sludge (CF-MCM). The resulting samples were characterized by XRD, XRF, FTIR and N2 adsorption-desorption techniques. The prepared samples possessed high specific surface area, large pore volume and large pore diameter. TEM images showed that the crystal patterns of CF-MCM were similar to those of Mobil Composition of Matter (MCM-41). The adsorption capacity of CF-MCM for HMDS was 40 and 80 mg g-1 under 100 and 500 ppm HMDS, respectively. The effects of operating parameters, such as contact time and mixture concentration, on the performance of CF-MCM are also discussed in this study.
Divett, T; Vennell, R; Stevens, C
2013-02-28
At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within it will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.
Present status of recycling waste mobile phones in China: a review.
Li, Jingying; Ge, Zhongying; Liang, Changjin; An, Ni
2017-07-01
A large number of waste mobile phones have already been generated, and more are being generated all the time. Countries around the world have been actively exploring ways of recycling and reuse in the face of such a large amount of waste mobile phones. In some countries, the processing of waste mobile phones has formed a complete industrial chain, which can not only recycle waste mobile phones to reduce their negative influence on the environment but also turn waste into treasure and yield considerable economic benefits. However, the situation of recycling waste mobile phones in China is not going well: waste mobile phones are not formally covered by existing regulations and policies for waste electric and electronic equipment in China. In order to explore an appropriate system for recovering waste mobile phones, this paper introduces mobile phone production and the amount of waste mobile phones, describes the status of waste mobile phone recycling, and then reviews the electronic waste disposal technologies most likely to be used in industrial applications in the near future. Finally, rationalization proposals are put forward based on the current recovery status of waste mobile phones, with the purpose of promoting the development of waste mobile phone recycling in developing countries, with a special emphasis on China.
Liang, Sai; Qu, Shen; Xu, Ming
2016-02-02
To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (a.k.a. the production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to those sectors as important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using fewer upstream inputs to produce a unit of output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths, extracted via structural path analysis, that pass through a particular sector. We take China as an example and find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, unavailable from existing methods, into the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods in guiding sector-level environmental pressure mitigation strategies.
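A minimal sketch of the idea under stated assumptions: expand total pressures along supply-chain paths (the series f y, f A y, f A² y, ... of structural path analysis) and credit each path's value to its intermediate sectors. The three-sector economy below is invented; f is the direct pressure intensity, A the direct requirements matrix, y final demand, and the crediting rule is one plausible choice, not the paper's exact formula:

```python
# Betweenness over structural-path-analysis paths for a toy 3-sector economy.
import numpy as np
from itertools import product

f = np.array([2.0, 0.5, 1.0])               # pressure per unit output
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.0, 0.1],
              [0.0, 0.2, 0.1]])              # direct requirements matrix
y = np.array([10.0, 5.0, 8.0])              # final demand

max_order, n = 4, len(y)
betweenness = np.zeros(n)
for order in range(2, max_order + 1):        # paths with >= 1 intermediate node
    for path in product(range(n), repeat=order + 1):
        value = f[path[0]] * y[path[-1]]
        for a, b in zip(path, path[1:]):     # multiply along the path's edges
            value *= A[a, b]
        for mid in set(path[1:-1]):          # credit intermediate sectors
            betweenness[mid] += value
print(betweenness.round(3))                  # high values = transmission centers
```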
Huls, Peter G; Vischer, Norbert O E; Woldringh, Conrad L
2018-01-01
According to the recently-revived adder model for cell size control, newborn cells of Escherichia coli will grow and divide after having added a constant size or length, ΔL, irrespective of their size at birth. Assuming exponential elongation, this implies that large newborns will divide earlier than small ones. The molecular basis for the constant size increment is still unknown. As DNA replication and cell growth are coordinated, the constant ΔL could be based on duplication of an equal amount of DNA, ΔG, present in newborn cells. To test this idea, we measured amounts of DNA and lengths of nucleoids in DAPI-stained cells growing in batch culture at slow and fast rates. Deeply-constricted cells were divided in two subpopulations of longer and shorter lengths than average; these were considered to represent large and small prospective daughter cells, respectively. While at slow growth, large and small prospective daughter cells contained similar amounts of DNA, fast growing cells with multiforked replicating chromosomes showed a significantly higher amount of DNA (20%) in the larger cells. This observation precludes the hypothesis that ΔL is based on the synthesis of a constant ΔG. Growth curves were constructed for siblings generated by asymmetric division and growing according to the adder model. Under the assumption that all cells at the same growth rate exhibit the same time between initiation of DNA replication and cell division (i.e., constant C+D period), the constructions predict that initiation occurs at different sizes (Li) and that, at fast growth, large newborn cells transiently contain more DNA than small newborns, in accordance with the observations. Because the state of segregation, measured as the distance between separated nucleoids, was found to be more advanced in larger deeply-constricted cells, we propose that in larger newborns nucleoid separation occurs faster and at a shorter length, allowing them to divide earlier. We propose a composite model in which both differential initiation and segregation leads to an adder-like behavior of large and small newborn cells.
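The adder's central predictions, that larger newborns divide sooner and that birth size converges toward ΔL under symmetric division, follow from a few lines of arithmetic. A sketch with arbitrary growth parameters:

```python
# Adder model: divide on reaching Lb + dL under exponential elongation.
import math

LAMBDA, DL = 0.02, 2.0          # elongation rate (1/min), added length (um); assumed

def interdivision_time(lb):
    """Time to grow from Lb to Lb + dL: solve Lb*exp(lambda*t) = Lb + dL."""
    return math.log((lb + DL) / lb) / LAMBDA

for lb in (1.0, 2.0, 4.0):      # small, average, large newborns
    print(f"Lb = {lb:.1f} um -> divides after {interdivision_time(lb):.0f} min")

# Convergence: each daughter is (Lb + dL)/2, whose fixed point is dL.
lb = 4.0
for _ in range(6):
    lb = (lb + DL) / 2.0        # symmetric division
print(f"birth length after 6 generations: {lb:.2f} um (-> dL = {DL} um)")
```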
Challenges dealing with depleted uranium in Germany - Reuse or disposal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moeller, Kai D.
2007-07-01
During enrichment, large amounts of depleted uranium are produced; in Germany, 2,800 tons of depleted uranium are generated every year. In Germany depleted uranium is not classified as radioactive waste but as a resource for further enrichment. Therefore, since 1996 depleted uranium has been sent to ROSATOM in Russia. However, the second generation of depleted uranium still has to be dealt with. To evaluate the alternative courses of action in case a solution has to be found in Germany, several studies have been initiated by the Federal Ministry of the Environment. The work that has been carried out evaluated various possibilities for dealing with depleted uranium; the international studies in this field and the situation in Germany have been analyzed. In case no further enrichment is planned, the depleted uranium has to be stored. In the enrichment process UF6 is generated; it is an international consensus that for storage it should be converted to U3O8, and the necessary technique is well established. If the depleted uranium had to be characterized as radioactive waste, final disposal would become necessary. For the planned Konrad repository - a repository for non-heat-generating radioactive waste - the amount of uranium is limited by the licensing authority, and the existing license would not allow the final disposal of large amounts of depleted uranium in the Konrad repository. The potential effect on the safety case has so far been analyzed only roughly. As a result it may be necessary to think about alternatives. Several possibilities for the use of depleted uranium in industry have been identified. Studies indicate that the properties of uranium would make it useful in some industrial fields; nevertheless, many practical and legal questions remain open. One further option may be its use as shielding, e.g., in casks for transport or disposal. Possible techniques for using depleted uranium as shielding include the use of the metallic uranium as well as its inclusion in concrete. Another possibility could be the use of depleted uranium for blending high-enriched uranium (HEU) or, with plutonium, for MOX elements. (authors)
Halftoning method for the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1989-01-01
This paper describes a novel computer-graphic technique for the generation of a broad class of motion stimuli for vision research, which uses color table animation in conjunction with a single base image. Using this technique, contrast and temporal frequency can be varied with a negligible amount of computation once the base image is produced. Since only two bit planes are needed to display a single drifting grating, an eight-bit/pixel display can be used to generate four-component plaids, in which each component of the plaid has independently programmable contrast and temporal frequency. Because the contrasts and temporal frequencies of the various components are mutually independent, a large number of two-dimensional stimulus motions can be produced from a single image file.
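The color-table mechanism can be made concrete with a small sketch (an illustration of the idea, not the paper's implementation; the exact quadrature decomposition and all names are assumptions). Each component grating contributes two halftoned bit planes holding its sine- and cosine-phase patterns; updating only the small lookup table each frame combines them as cos(ωt)·sin(kx) + sin(ωt)·cos(kx) = sin(kx + ωt), i.e., a drift, at negligible per-frame cost:

    import numpy as np

    H, W, n = 256, 256, 4          # four plaid components, two bit planes each
    yy, xx = np.mgrid[0:H, 0:W]
    base = np.zeros((H, W), dtype=np.uint16)

    # base image: per component, binarized (halftoned) sine and cosine phases
    for k, th in enumerate(np.linspace(0, np.pi, n, endpoint=False)):
        sp = 2 * np.pi * (np.cos(th) * xx + np.sin(th) * yy) / 32.0
        base |= (np.sin(sp) > 0).astype(np.uint16) << (2 * k)      # sine plane
        base |= (np.cos(sp) > 0).astype(np.uint16) << (2 * k + 1)  # cosine plane

    def color_table(t, contrast, temp_freq):
        # entry i: summed luminance implied by the 2n bits; the cos/sin
        # temporal weights turn each quadrature pair into a drifting grating
        table = np.zeros(2 ** (2 * n))
        for i in range(2 ** (2 * n)):
            for k in range(n):
                s = 1.0 if (i >> (2 * k)) & 1 else -1.0
                c = 1.0 if (i >> (2 * k + 1)) & 1 else -1.0
                w = 2 * np.pi * temp_freq[k] * t
                table[i] += 0.5 + 0.25 * contrast[k] * (np.cos(w) * s + np.sin(w) * c)
        return table / n   # average of n components stays in [0, 1]

    frame = color_table(0.1, contrast=[1, .5, .5, .25], temp_freq=[2, 4, 1, 3])[base]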
Monte Carlo simulation of single accident airport risk profile
NASA Technical Reports Server (NTRS)
1979-01-01
A computer simulation model was developed for estimating the potential economic impacts of a carbon fiber release upon facilities within an 80 kilometer radius of a major airport. The model simulated the possible range of release conditions and the resulting dispersion of the carbon fibers. Each iteration of the model generated a specific release scenario, which would cause a specific amount of dollar loss to the surrounding community. By repeated iterations, a risk profile was generated, showing the probability distribution of losses from one accident. Using accident probability estimates, the risk profile for annual losses was derived. The mechanics of the simulation model, the required input data, and the risk profiles generated for the 26 large hub airports are described.
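The iterate-and-tally structure such a model takes is easy to sketch (the release and damage functions below are toy assumptions, not the report's):

    import numpy as np

    rng = np.random.default_rng(42)
    n_iter = 10_000

    # each iteration: draw release conditions, disperse, convert to dollar loss
    fibers_released = rng.lognormal(mean=10.0, sigma=1.5, size=n_iter)  # toy
    dispersion = rng.uniform(0.1, 1.0, size=n_iter)   # toy weather factor
    loss = 1e-3 * fibers_released * dispersion        # toy damage function, $

    # single-accident risk profile: probability of exceeding each loss level
    levels = np.logspace(0, 6, 50)
    exceedance = np.array([(loss > lv).mean() for lv in levels])

    # annual risk profile via an assumed accident probability per year
    p_accident_per_year = 0.05
    annual_exceedance = p_accident_per_year * exceedance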
An effective online data monitoring and saving strategy for large-scale climate simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xian, Xiaochen; Archibald, Rick; Mayer, Benjamin
2018-01-22
Large-scale climate simulation models have been developed and widely used to generate historical data and study future climate scenarios. These simulation models often have to run for a couple of months to understand the changes in the global climate over the course of decades. This long-duration simulation process creates a huge amount of data with both high temporal and spatial resolution; however, how to effectively monitor and record the climate changes based on these large-scale simulation results, which are continuously produced in real time, remains to be resolved. Due to the slow process of writing data to disk, the current practice is to save a snapshot of the simulation results at a constant, slow rate although the data generation process runs at a very high speed. This study proposes an effective online data monitoring and saving strategy over the temporal and spatial domains with consideration of practical storage and memory capacity constraints. The proposed method is able to intelligently select and record the most informative extreme values in the raw data generated from real-time simulations in the context of better monitoring climate changes.
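One simple way to realize "record only the most informative extreme values under a storage budget" is a bounded min-heap over the streaming output; this sketch illustrates the selection idea only (the paper's strategy additionally exploits temporal and spatial structure, and all names here are assumptions):

    import heapq

    class ExtremeRecorder:
        """Keep the k largest values seen in a real-time stream within a
        fixed memory budget (illustrative, not the authors' algorithm)."""

        def __init__(self, k):
            self.k = k
            self.heap = []                    # min-heap of (value, step, cell)

        def offer(self, value, step, cell):
            item = (value, step, cell)
            if len(self.heap) < self.k:
                heapq.heappush(self.heap, item)
            elif value > self.heap[0][0]:
                heapq.heapreplace(self.heap, item)   # evict least extreme

    recorder = ExtremeRecorder(k=1000)        # the storage constraint
    # for step, field in simulation_output():  # produced continuously
    #     for cell, value in enumerate(field):
    #         recorder.offer(value, step, cell)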
Lightweight Innovative Solar Array (LISA): Providing Higher Power to Small Spacecraft
NASA Technical Reports Server (NTRS)
Johnson, Les; Carr, John; Fabisinski, Leo; Russell, Tiffany; Smith, Leigh
2015-01-01
Affordable and convenient access to electrical power is essential for all spacecraft and is a critical design driver for the next generation of smallsats, including cubesats, which are currently extremely power limited. The Lightweight Innovative Solar Array (LISA), a concept designed, prototyped, and tested at the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama, provides an affordable, lightweight, scalable, and easily manufactured approach for power generation in space. This flexible technology has many wide-ranging applications, from serving small satellites to providing abundant power to large spacecraft in GEO and beyond. By using very thin, ultra-flexible solar arrays adhered to an inflatable structure, a large area (and thus a large amount of power) can be folded and packaged into a relatively small volume. The LISA array comprises a launch-stowed, orbit-deployed structure on which lightweight photovoltaic devices and, potentially, transceiver elements are embedded. The system will provide a 2.5- to 5-fold increase in specific power generation (Watts/kilogram) coupled with a >2x enhancement in stowed power density (Watts/cubic-meter) and a decrease in cost (dollars/Watt) when compared to state-of-the-art solar arrays.
Stock, P G; Ascher, N L; Platt, J L; Kaufman, D B; Chen, S; Field, M J; Sutherland, D E
1989-01-01
In vitro manipulation of pancreatic islets to decrease islet immunogenicity before transplantation has largely been directed at eliminating the major histocompatibility complex (MHC) class II-positive passenger leukocytes from the islets. The mixed islet-lymphocyte coculture (MILC) system was used to quantitate the efficacy of immunodepletion of MHC class II-positive cells from pancreatic islets in terms of reducing immunogenicity. With these experiments we compared the in vitro immunogenicity of MHC class II-depleted islets with untreated islets. B10.BR (H-2k) islets were treated with anti-Iak alloserum followed by complement. This treatment successfully eliminated MHC class II-positive cells from the islets, as demonstrated by indirect immunofluorescence techniques. Depleted islets generated slightly lower amounts of allospecific cytotoxic T-lymphocyte (CTL) activity when exposed to C57BL/6 (H-2b) splenocytes in the MILC than untreated control islets. Although the amount of CTL generated by the depleted islets was slightly less than that generated by untreated islets, there was significant stimulation of CTL by the MHC class II-depleted islets. Therefore, the presence or absence of MHC class II cells within the islet is unlikely to be the decisive factor contributing to islet immunogenicity.
Generation of Natural-Language Textual Summaries from Longitudinal Clinical Records.
Goldstein, Ayelet; Shahar, Yuval
2015-01-01
Physicians are required to interpret, abstract, and present large amounts of clinical data as free text in their daily tasks. This is especially true for chronic-disease domains, but it also holds in other clinical domains. We have recently developed a prototype system, CliniText, which, given a time-oriented clinical database and appropriate formal abstraction and summarization knowledge, combines the computational mechanisms of knowledge-based temporal data abstraction, textual summarization, abduction, and natural-language generation techniques to generate an intelligent textual summary of longitudinal clinical data. We demonstrate our methodology, and the feasibility of providing a free-text summary of longitudinal electronic patient records, by generating summaries in two very different domains - Diabetes Management and Cardiothoracic Surgery. In particular, we explain the process of generating a discharge summary of a patient who had undergone a Coronary Artery Bypass Graft operation, and a brief summary of the treatment of a diabetes patient over five years.
Development of large engineered cartilage constructs from a small population of cells.
Brenner, Jillian M; Kunz, Manuela; Tse, Man Yat; Winterborn, Andrew; Bardana, Davide D; Pang, Stephen C; Waldman, Stephen D
2013-01-01
Confronted with articular cartilage's limited capacity for self-repair, joint resurfacing techniques offer an attractive treatment for damaged or diseased tissue. Although tissue engineered cartilage constructs can be created, a substantial number of cells are required to generate sufficient quantities of tissue for the repair of large defects. As routine cell expansion methods tend to elicit negative effects on chondrocyte function, we have developed an approach to generate phenotypically stable, large-sized engineered constructs (≥3 cm²) directly from a small amount of donor tissue or cells (as little as 20,000 cells to generate a 3 cm² tissue construct). Using rabbit donor tissue, the bioreactor-cultivated constructs were hyaline-like in appearance and possessed a biochemical composition similar to native articular cartilage. Longer bioreactor cultivation times resulted in increased matrix deposition and improved mechanical properties determined over a 4 week period. Additionally, as the anatomy of the joint will need to be taken into account to effectively resurface large affected areas, we have also explored the possibility of generating constructs matched to the shape and surface geometry of a defect site through the use of rapid-prototyped defect tissue culture molds. Similar hyaline-like tissue constructs were developed that also possessed a high degree of shape correlation to the original defect mold. Future studies will be aimed at determining the effectiveness of this approach to the repair of cartilage defects in an animal model and the creation of large-sized osteochondral constructs.
Sources for Leads: Natural Products and Libraries.
van Herwerden, Eric F; Süssmuth, Roderich D
2016-01-01
Natural products have traditionally been a major source of leads in the drug discovery process. However, the development of high-throughput screening led to an increased interest in synthetic methods that enabled the rapid construction of large libraries of molecules. This resulted in the termination or downscaling of many natural product research programs, but the chemical libraries did not necessarily produce a larger number of drug leads. On one hand, this chapter explores the current state of natural product research within the drug discovery process; on the other hand, it evaluates the efforts made to increase the number of leads generated from chemical libraries and considers what role natural products could play here.
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
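The named pipeline (kernel density estimation plus sequential feature selection around a k-nearest-neighbor classifier) can be sketched with off-the-shelf tools; the data and the failure criterion below are toy stand-ins, not the authors' simulation:

    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KernelDensity, KNeighborsClassifier

    # X: dispersed Monte Carlo inputs (design parameters); y: pass/fail flags
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    y = (X[:, 2] + 0.5 * X[:, 7] > 1.5).astype(int)   # toy failure criterion

    # rank parameters by how well they let a k-NN classifier predict failure
    sfs = SequentialFeatureSelector(
        KNeighborsClassifier(n_neighbors=5), n_features_to_select=3)
    sfs.fit(X, y)
    print("flagged parameters:", np.flatnonzero(sfs.get_support()))

    # density of a flagged parameter among failed runs highlights the
    # region of parameter space that should be avoided
    kde = KernelDensity(bandwidth=0.3).fit(X[y == 1][:, [2]])
    grid = np.linspace(-4, 4, 81)[:, None]
    density = np.exp(kde.score_samples(grid))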
The biochemical consequences of hypoxia.
Alberti, K G
1977-01-01
The various phases of energy production have been described. These include glycolysis which is unique in its ability to produce ATP anaerobically, the tricarboxylic acid cycle with its major contribution to ATP production coming through the generation of NADH, and the cytochrome system at which reducing equivalents are converted to water, the released energy being incorporated into high-energy phosphates. The regulation of these pathways has been briefly described and the importance of the small amount of ATP generated anaerobically emphasized. The adaptation of muscle to periods of hypoxia through the presence of myoglobin, creatine phosphate and large amounts of glycogen is then discussed. The role of pH in limiting anaerobic glycolysis in muscle and the importance of the circulation in providing oxygen for exercising muscle are outlined. The effects of hypoxia on certain other tissues such as liver and brain have been detailed and finally methods for assessment of tissue hypoxia in man such as the measurement of the lactate:pyruvate ratio in blood are presented. PMID:198434
Big Data Challenges in Climate Science: Improving the Next-Generation Cyberinfrastructure
NASA Technical Reports Server (NTRS)
Schnase, John L.; Lee, Tsengdar J.; Mattmann, Chris A.; Lynnes, Christopher S.; Cinquini, Luca; Ramirez, Paul M.; Hart, Andre F.; Williams, Dean N.; Waliser, Duane; Rinsland, Pamela
2016-01-01
The knowledge we gain from research in climate science depends on the generation, dissemination, and analysis of high-quality data. This work comprises technical practice as well as social practice, both of which are distinguished by their massive scale and global reach. As a result, the amount of data involved in climate research is growing at an unprecedented rate. Climate model intercomparison (CMIP) experiments, the integration of observational data and climate reanalysis data with climate model outputs, as seen in the Obs4MIPs, Ana4MIPs, and CREATE-IP activities, and the collaborative work of the Intergovernmental Panel on Climate Change (IPCC) provide examples of the types of activities that increasingly require an improved cyberinfrastructure for dealing with large amounts of critical scientific data. This paper provides an overview of some of climate science's big data problems and the technical solutions being developed to advance data publication, climate analytics as a service, and interoperability within the Earth System Grid Federation (ESGF), the primary cyberinfrastructure currently supporting global climate research activities.
NASA Astrophysics Data System (ADS)
Trubilowicz, J. W.; Moore, D.
2015-12-01
Snowpack dynamics and runoff generation in coastal mountain regions are complicated by rain-on-snow (ROS) events. During major ROS events associated with warm, moist air and strong winds, turbulent heat fluxes can produce substantial melt to supplement rainfall, but previous studies suggest this may not be true for smaller, more frequent events. The internal temperature and water content of the snowpack are also expected to influence runoff generation during ROS events: a cold snowpack with no liquid water content will have the ability to store significant amounts of rainfall, whereas a 'ripe' snowpack may begin to melt and generate outflow with little rain input. However, it is not well understood how antecedent snowpack conditions and energy fluxes differ between ROS events that cause large runoff events and those that do not, in large part because major flood-producing ROS events occur infrequently, and thus are often not sampled during short-term research projects. To generate greater understanding of runoff generation over the spectrum of ROS magnitudes and frequencies, we analyzed data from Automated Snow Pillow (ASP) sites, which record hourly air temperature, precipitation and snowpack water equivalent and offer up to several decades of data at each site. We supplemented the ASP data with output from the North American Regional Reanalysis (NARR) product to support point scale snow modeling for 335 ROS event records from six ASP sites in southwestern BC from 2003 to 2013. Our analysis reconstructed the weather conditions, surface energy exchanges, internal mass and energy states of the snowpack, and generation of snow melt and water available for runoff (WAR) for each ROS event. Results indicate that WAR generation during large events is largely independent of the snowpack conditions, but for smaller events, the antecedent snow conditions play a significant role in either damping or enhancing WAR generation.
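A hedged sketch of the hourly bookkeeping used in this kind of reconstruction of water available for runoff (WAR); the bulk-transfer coefficient, the ripening treatment, and all names are illustrative assumptions, not the study's model:

    LATENT_HEAT_FUSION = 0.334e6   # J per kg of melt

    def war_mm(rain_mm, t_air_c, wind_ms, cold_content_j, swe_mm):
        """Toy hourly water available for runoff during rain-on-snow."""
        # turbulent fluxes grow with wind speed and warm air (bulk form)
        turbulent_w_m2 = 25.0 * wind_ms * max(t_air_c, 0.0)   # toy coefficient
        # energy first warms ('ripens') a cold pack, then melts snow
        energy_j = turbulent_w_m2 * 3600.0 - cold_content_j
        melt_mm = min(max(energy_j, 0.0) / LATENT_HEAT_FUSION, swe_mm)  # kg/m2 = mm
        return rain_mm + melt_mm

    # warm, windy event on a ripe pack: melt supplements the rain;
    # the same event on a cold pack is partly absorbed instead
    print(war_mm(rain_mm=10, t_air_c=6, wind_ms=8, cold_content_j=0.0, swe_mm=500))
    print(war_mm(rain_mm=10, t_air_c=6, wind_ms=8, cold_content_j=2e6, swe_mm=500))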
Li, J L; Deng, H; Lai, D B; Xu, F; Chen, J; Gao, G; Recker, R R; Deng, H W
2001-07-01
To efficiently manipulate large amounts of genotype data generated with fluorescently labeled dinucleotide markers, we developed a Microsoft-based database management system, which offers several advantages. First, it accommodates the dynamic nature of the accumulation of genotype data during the genotyping process; some data need to be confirmed or replaced by repeat lab procedures. Using the system, raw genotype data can be imported easily and continuously and incorporated into the database during a genotyping process that may continue over an extended period of time in large projects. Second, almost all of the procedures are automatic, including autocomparison of the raw data read by different technicians from the same gel, autoadjustment among the allele fragment-size data from cross-runs or cross-platforms, autobinning of alleles, and autocompilation of genotype data for suitable programs to perform inheritance checks in pedigrees. Third, the system provides functions to track electrophoresis gel files to locate gel or sample sources for any resultant genotype data, which is extremely helpful for double-checking consistency of raw and final data and for directing repeat experiments. In addition, the user-friendly graphic interface renders processing of large amounts of data much less labor-intensive. Furthermore, the system has built-in mechanisms to detect some genotyping errors and to assess the quality of genotype data, which are then summarized in automatically generated statistical reports. It can easily handle >500,000 genotype data entries, a number more than sufficient for typical whole-genome linkage studies. The modules and programs we developed can be extended to other database platforms, such as Microsoft SQL Server, if the capability to handle still greater quantities of genotype data simultaneously is desired.
Analysis on the accommodation of renewable energy in northeast China
NASA Astrophysics Data System (ADS)
Liu, Jun; Zhang, Jinfang; Tian, Feng; Mi, Zhe
2017-01-01
The accommodation and curtailment of renewable energy in northeast China have attracted much attention with the rapid growth of wind and solar power generation. A large amount of wind power has been curtailed or abandoned in northeast China due to several reasons, such as the redundancy of power supplies, inadequate power demand, an imperfect power structure with little flexibility, and limited cross-regional transmission capacity. In this paper, we use multi-area production simulation to analyse the accommodation of renewable energy in northeast China by 2020. Furthermore, we suggest measures that could be adopted on the generation, grid, and load sides to reduce curtailment of renewables.
Status of wind-energy conversion
NASA Technical Reports Server (NTRS)
Thomas, R. L.; Savino, J. M.
1973-01-01
The utilization of wind energy is technically feasible as evidenced by the many past demonstrations of wind generators. The cost of energy from the wind has been high compared to fossil fuel systems; a sustained development effort is needed to obtain economical systems. The variability of the wind makes it an unreliable source on a short term basis. However, the effects of this variability can be reduced by storage systems, by connecting wind generators to fossil fuel or hydroelectric systems, or by dispersing them throughout a large grid network. Wind energy appears to have the potential to meet a significant amount of our energy needs.
NASA Astrophysics Data System (ADS)
Min, Sun-Hong; Kwon, Ohjoon; Sattorov, Matlabjon; Baek, In-Keun; Kim, Seontae; Hong, Dongpyo; Jeong, Jin-Young; Jang, Jungmin; Bera, Anirban; Barik, Ranjan Kumar; Bhattacharya, Ranajoy; Cho, Ilsung; Kim, Byungsu; Park, Chawon; Jung, Wongyun; Park, Seunghyuk; Park, Gun-Sik
2018-02-01
When a semiconductor element is irradiated with radiation in the form of a transient pulse emitted from a nuclear explosion, a large amount of charge is generated in the device in a short time. A photocurrent amplified in a certain direction by these charges causes the device to break down and malfunction or, in extreme cases, to burn out. In this study, a pulse-type γ-ray generator based on a relativistic electron beam accelerator (γ=2.2, β=0.89), which functions by means of tungsten impingement, was constructed and tested in an effort to investigate the process and effects of the photocurrent formed by electron-hole pairs (EHP) generated in a pMOSFET device when a transient radiation pulse is incident on the device. The pulse-type γ-ray irradiating device, used here to generate the electron beam current in a short time, was devised to allow an increase in the irradiation dose. A precise signal processing circuit was constructed to extract the small photocurrent signal generated by the pMOSFET in response to the accelerator pulse from the large noise stemming from the electromagnetic field around the relativistic electron beam accelerator. The pulse-type γ-ray generator was installed to meet the requirements of relativistic electron beam accelerators, and beam irradiation was conducted after a beam commissioning step.
A study on the power generation potential of mini wind turbine in east coast of Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Basrawi, Firdaus; Ismail, Izwan; Ibrahim, Thamir Khalil; Idris, Daing Mohamad Nafiz Daing; Anuar, Shahrani
2017-03-01
A small-scale wind turbine is an attractive renewable energy source, but its economic viability depends on wind speed. The aim of this study is to determine the economic viability of small-scale wind turbines on the East Coast of Peninsular Malaysia. The potential energy generated was determined from wind speed data and the turbine's power curve. Hourly wind speed data for Kuantan throughout 2015 were collected as the input. Then, a wind turbine model was developed based on a commercial 300 W mini wind turbine. It was found that power generation is 3 times higher during the northeast monsoon season at 15 m elevation. This proved that the northeast monsoon season has higher potential for generating wind power on the East Coast of Peninsular Malaysia. However, only a total of 153.4 kWh/year of energy can be generated under this condition; the power generator utilization factor (PGUI), or capacity ratio, was merely 0.06, which is not technically viable. By increasing the height of the wind turbine to 60 m elevation, the generated energy drastically increased to 344 kWh/year, with a PGUI of 0.13. This is about two-thirds of the PGUI for photovoltaic technology at this site, which is 0.21. If an offshore condition was considered, the generated energy further increased to 1,328 kWh/year with a PGUI of 0.51. Thus, a mini wind turbine in its common use, installed on-site at low elevation, has low power generation potential; but at the high elevations that large wind turbines require, it is a technically viable option on the East Coast of Peninsular Malaysia.
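The energy and PGUI arithmetic generalizes into a short sketch; the power-curve shape and the synthetic wind record below are assumptions (only the 300 W rating and the definition of PGUI as generated energy over rated energy reflect the study):

    import numpy as np

    # hourly hub-height wind speeds (m/s) for one year; synthetic stand-in
    # for a measured record such as the Kuantan 2015 data
    v = np.random.default_rng(1).weibull(2.0, size=8760) * 4.0

    def power_w(v, rated_w=300.0, cut_in=2.5, rated_v=10.0, cut_out=20.0):
        # simplified power curve for a 300 W mini turbine (assumed shape)
        if v < cut_in or v > cut_out:
            return 0.0
        if v >= rated_v:
            return rated_w
        return rated_w * ((v - cut_in) / (rated_v - cut_in)) ** 3

    energy_kwh = sum(power_w(x) for x in v) / 1000.0
    pgui = energy_kwh / (0.3 * 8760)   # generated energy / rated energy (kWh)
    print(f"annual energy: {energy_kwh:.0f} kWh, PGUI: {pgui:.2f}")

With the reported 153.4 kWh/year, this definition reproduces the quoted capacity ratio: 153.4 / (0.3 × 8760) ≈ 0.06.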
Synthesis and Labeling of RNA In Vitro
Huang, Chao; Yu, Yi-Tao
2013-01-01
This unit discusses several methods for generating large amounts of uniformly labeled, end-labeled, and site-specifically labeled RNAs in vitro. The methods involve a number of experimental procedures, including RNA transcription, 5′ dephosphorylation and rephosphorylation, 3′ terminal nucleotide addition (via ligation), site-specific RNase H cleavage directed by 2′-O-methyl RNA-DNA chimeras, and 2-piece splint ligation. The applications of these RNA radiolabeling approaches are also discussed. PMID:23547015
2014-09-01
resources, and generate large amounts of food and solid waste daily. Almost all Contingency Basecamp (CB) DFACs provide individual paper and plastic ware...which is costly in terms of purchase, transportation, and disposal. This work analyzed the effects of replacing paper and plastic ware with...reusable materials, and of adding industrial dishwashers to reduce the logistical burden of using paper and plastic ware. Additional enhancements
East Europe Report: Economic and Industrial Affairs, No. 2416
1983-06-28
that 5 percent of the interest on export credits is refunded and that credits are extended to entrepreneurs requiring a greater amount of capital. To pay...foreign customers. Thus the greater need for capital on the part of major entrepreneurs may in large part be financed through preferred credits-both...which reduce the effectiveness of management. In 1975, despite some increase in employment, the growth of non-agricultural generated income was equal to
Standard/Handbook for RF Ionization Breakdown Prevention in Spacecraft Components
2015-06-19
localized glow discharge of the plasma (corona) while RF power is being applied. 8.4.3 RF Performance Changes If a breakdown occurs and damages the part...in spacecraft components and systems. Ionization breakdown is a high-energy radio frequency (RF) discharge that can occur when the insulating media...energy can be discharged in a small volume, releasing large amounts of heat, melting local surfaces, and generating debris, all of which will likely
Safe harbor: protecting ports with shipboard fuel cells.
Taylor, David A
2006-04-01
With five of the largest harbors in the United States, California is beginning to take steps to manage the large amounts of pollution generated by these bustling centers of transport and commerce. One option for reducing diesel emissions is the use of fuel cells, which run cleaner than diesel and other internal combustion engines. Other technologies being explored by harbor officials are diesel-electric hybrid and gas turbine locomotives for moving freight within port complexes.
RF-Embedding of Energy-Autonomous Sensors and Actuators into Wireless Sensor Networks
2006-10-01
The principle of energy harvesting, i.e., gleaning extremely small amounts of energy from the environment, has been around for a long time...technically highly interesting product bears a mention: the Atmos, produced by the Swiss company Jaeger-LeCoultre since 1936 [3]. This clock has an expansion...Using vibration converters, relatively large outputs can be generated, even with small masses, if acceleration is
Cultural Factors in Managing an FMS Case Program: Saudi Arabian Army Ordnance Corps (SOCP) Program
1977-11-01
which included the purchase of large amounts of US-produced current generation self-propelled artillery, personnel carriers, tanks, mortar carriers...expressed when attempting to discuss complex, sophisticated technical material with senior counterparts who possessed relative fluency in...ignored with impunity; they cannot be avoided; they can to a great extent be anticipated as critical management factors. By anticipating and preparing
Botelho, Anabela
2013-10-01
This study empirically evaluates whether the increasingly large numbers of private outpatient healthcare facilities (HCFs) within the European Union (EU) countries comply with the existing European waste legislation, and whether compliance with such legislation affects the fraction of healthcare waste (HCW) classified as hazardous. To that end, this study uses data collected by a large survey of more than 700 small private HCFs distributed throughout Portugal, a full member of the EU since 1986, where 50% of outpatient care is currently dominated by private operators. The collected data are then used to estimate a hurdle model, i.e. a statistical specification in which there are two processes: one is the process by which some HCFs generate zero or some positive fraction of hazardous HCW, and another is the process by which HCFs generate a specific positive fraction of hazardous HCW conditional on producing any. Taken together, the results show that although compliance with the law is far from ideal, it is the strongest factor influencing hazardous waste generation. In particular, it is found that higher compliance has a small and insignificant effect on the probability of generating (or reporting) positive amounts of hazardous waste, but it does have a large and significant effect on the fraction of hazardous waste produced, conditional on producing any, with a unit increase in the compliance rate leading to an estimated decrease in the fraction of hazardous HCW by 16.3 percentage points.
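The hurdle specification splits naturally into two fitted parts. A hedged sketch with toy stand-in data (the logit first stage matches the description; the fractional-logit GLM second stage is one concrete, assumed choice for the conditional model):

    import numpy as np
    import statsmodels.api as sm

    # toy stand-ins: a compliance score per facility and the fraction of
    # its healthcare waste classified as hazardous
    rng = np.random.default_rng(0)
    n = 700
    compliance = rng.uniform(0, 1, n)
    X = sm.add_constant(compliance)
    any_hazardous = rng.binomial(1, 0.6, n)                  # part-1 outcome
    frac = np.clip(0.4 - 0.163 * compliance
                   + rng.normal(0, 0.05, n), 0.01, 0.99)     # part-2 outcome

    # part 1: probability of generating (or reporting) any hazardous waste
    part1 = sm.Logit(any_hazardous, X).fit(disp=False)

    # part 2: fraction produced, conditional on producing any
    mask = any_hazardous == 1
    part2 = sm.GLM(frac[mask], X[mask],
                   family=sm.families.Binomial()).fit()      # fractional logit
    print(part1.params, part2.params)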
The meteorite impact-induced tsunami hazard.
Wünnemann, K; Weiss, R
2015-10-28
When a cosmic object strikes the Earth, it most probably falls into an ocean. Depending on the impact energy and the depth of the ocean, a large amount of water is displaced, forming a temporary crater in the water column. Large tsunami-like waves originate from the collapse of the cavity in the water and the ejecta splash. Because of the far-reaching destructive consequences of such waves, an oceanic impact has been suggested to be more severe than a similar-sized impact on land; in other words, oceanic impacts may punch above their weight. This review paper summarizes the process of impact-induced wave generation and subsequent propagation, whether the wave characteristics differ from tsunamis generated by other classical mechanisms, and what methods have been applied to quantify the consequences of an oceanic impact. Finally, the impact-induced tsunami hazard is evaluated by means of the Eltanin impact event.
Multi-objective model of waste transportation management for crude palm oil industry
NASA Astrophysics Data System (ADS)
Silalahi, Meslin; Mawengkang, Herman; Irsa Syahputri, Nenna
2018-02-01
The crude palm oil industry is an agro-industrial commodity. The global market of this industry has experienced rapid growth in recent years, such that it has strategic value for the Indonesian economy. Despite these economic benefits, there are a number of environmental problems at the factories, such as high water consumption, the generation of a large amount of wastewater with a high organic content, and the generation of a large quantity of solid waste and air pollution. In terms of waste transportation, we propose a multiobjective programming model for managing business environmental risk in crude palm oil manufacturing, which gives the best possible configuration of waste management facilities and allocates wastes to these facilities. We then develop an interactive approach for tackling the logistics and environmental risk production planning problem for the crude palm oil industry.
Primordial black holes for the LIGO events in the axionlike curvaton model
NASA Astrophysics Data System (ADS)
Ando, Kenta; Inomata, Keisuke; Kawasaki, Masahiro; Mukaida, Kyohei; Yanagida, Tsutomu T.
2018-06-01
We review primordial black hole (PBH) formation in the axionlike curvaton model and investigate whether PBHs formed in this model can be the origin of the gravitational wave (GW) signals detected by the Advanced LIGO. In this model, small-scale curvature perturbations with large amplitude are generated, which is essential for PBH formation. On the other hand, large curvature perturbations also become a source of primordial GWs through their second-order effects. Severe constraints are imposed on such GWs by pulsar timing array (PTA) experiments. We also check the consistency of the model with these constraints. In this analysis, it is important to take into account the effect of non-Gaussianity, which is easily generated in the curvaton model. We see that, if there are non-Gaussianities, a fixed amount of PBHs can be produced with a smaller amplitude of the primordial power spectrum.
Ambient-aware continuous care through semantic context dissemination.
Ongenae, Femke; Famaey, Jeroen; Verstichel, Stijn; De Zutter, Saar; Latré, Steven; Ackaert, Ann; Verhoeve, Piet; De Turck, Filip
2014-12-04
The ultimate ambient-intelligent care room contains numerous sensors and devices to monitor the patient, sense and adjust the environment, and support the staff. This sensor-based approach results in a large amount of data, which can be processed by current and future applications, e.g., task management and alerting systems. Today, nurses are responsible for coordinating all these applications and the supplied information, which reduces the added value and slows down the adoption rate. The aim of the presented research is the design of a pervasive and scalable framework that is able to optimize continuous care processes by intelligently reasoning on the large amount of heterogeneous care data. The developed Ontology-based Care Platform (OCarePlatform) consists of modular components that perform a specific reasoning task. Consequently, they can easily be replicated and distributed. Complex reasoning is achieved by combining the results of different components. To ensure that the components only receive information that is of interest to them at that time, they are able to dynamically generate and register filter rules with a Semantic Communication Bus (SCB). This SCB semantically filters all the heterogeneous care data according to the registered rules by using a continuous care ontology. The SCB can be distributed, and a cache can be employed to ensure scalability. A prototype implementation is presented, consisting of a new-generation nurse call system supported by a localization and a home automation component. The amount of data that is filtered and the performance of the SCB are evaluated by testing the prototype in a living lab. The delay introduced by processing the filter rules is negligible when 10 or fewer rules are registered. The OCarePlatform allows disseminating relevant care data to the different applications and additionally supports composing complex applications from a set of smaller independent components. This way, the platform significantly reduces the amount of information that needs to be processed by the nurses. The delay resulting from processing the filter rules is linear in the number of rules. Distributed deployment of the SCB and the use of a cache allow further improvement of these performance results.
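The register-and-filter pattern at the heart of the SCB can be illustrated in a few lines (a sketch of the publish/subscribe idea only; the ontology-driven semantic matching is elided and all names are assumptions):

    # components register filter predicates; only matching observations
    # are forwarded, so each component sees only data relevant to it now
    class SemanticBus:
        def __init__(self):
            self.rules = []                     # (predicate, subscriber) pairs

        def register(self, predicate, subscriber):
            self.rules.append((predicate, subscriber))

        def publish(self, observation):
            for predicate, subscriber in self.rules:
                if predicate(observation):
                    subscriber(observation)

    bus = SemanticBus()
    bus.register(lambda o: o["type"] == "call_button" and o["priority"] > 2,
                 lambda o: print("nurse-call component receives:", o))
    bus.publish({"type": "call_button", "priority": 3, "room": 114})  # forwarded
    bus.publish({"type": "temperature", "priority": 0, "room": 114})  # filtered out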
Mutated hilltop inflation revisited
NASA Astrophysics Data System (ADS)
Pal, Barun Kumar
2018-05-01
In this work we re-investigate the pros and cons of mutated hilltop inflation. Applying the Hamilton-Jacobi formalism, we solve the inflationary dynamics and find that inflation proceeds along the W₋₁ branch of the Lambert W function. Depending on the model parameter, the mutated hilltop model yields two types of inflationary solutions: one corresponds to a small inflaton excursion during observable inflation, and the other describes large-field inflation. The inflationary observables from curvature perturbation are in tune with the current data for a wide range of the model parameter. The small-field branch predicts a negligible tensor-to-scalar ratio, r ~ O(10⁻⁴), while the large-field sector is capable of generating a high amplitude for tensor perturbations, r ~ O(10⁻¹). Also, the spectral index is almost independent of the model parameter, along with a very small negative scalar running. Finally we find that mutated hilltop inflation closely resembles the α-attractor class of inflationary models in the limit αφ ≫ 1.
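For orientation, the quoted spectral index and tensor-to-scalar ratio follow from the standard slow-roll relations, sketched below in LaTeX (these are textbook potential-based formulas, not this paper's Hamilton-Jacobi derivation):

    \epsilon_V = \frac{M_p^2}{2}\left(\frac{V'}{V}\right)^{2}, \qquad
    \eta_V = M_p^2\,\frac{V''}{V}, \qquad
    n_s \simeq 1 - 6\epsilon_V + 2\eta_V, \qquad
    r \simeq 16\,\epsilon_V .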
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from the computer vision and solar physics communities.
Principles of gene microarray data analysis.
Mocellin, Simone; Rossi, Carlo Riccardo
2007-01-01
The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected by tumors. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amounts of data generated with microarray technology.
Increase of the spontaneous mutation rate in a long-term experiment with Drosophila melanogaster.
Avila, Victoria; Chavarrías, David; Sánchez, Enrique; Manrique, Antonio; López-Fanjul, Carlos; García-Dorado, Aurora
2006-05-01
In a previous experiment, the effect of 255 generations of mutation accumulation (MA) on the second chromosome viability of Drosophila melanogaster was studied using 200 full-sib MA1 lines and a large C1 control, both derived from a genetically homogeneous base population. At generation 265, one of those MA1 lines was expanded to start 150 new full-sib MA2 lines and a new C2 large control. After 46 generations, the rate of decline in mean viability in MA2 was approximately 2.5 times that estimated in MA1, while the average degree of dominance of mutations was small and nonsignificant by generation 40 and moderate by generation 80. In parallel, the inbreeding depression rate for viability and the amount of additive variance for two bristle traits in C2 were 2-3 times larger than those in C1. The results are consistent with a mutation rate in the line from which MA2 and C2 were derived about 2.5 times larger than that in MA1. The mean viability of C2 remained roughly similar to that of C1, but the rate of MA2 line extinction increased progressively, leading to mutational collapse, which can be ascribed to accelerated mutation and/or synergy after important deleterious accumulation.
Pitch sensation involves stochastic resonance
Martignoli, Stefan; Gomez, Florian; Stoop, Ruedi
2013-01-01
Pitch is a complex hearing phenomenon that results from elicited and self-generated cochlear vibrations. Read-off vibrational information is relayed higher up the auditory pathway, where it is then condensed into pitch sensation. How this can adequately be described in terms of physics has largely remained an open question. We have developed a peripheral hearing system (in hardware and software) that reproduces with great accuracy all salient pitch features known from biophysical and psychoacoustic experiments. At the level of the auditory nerve, the system exploits stochastic resonance to achieve this performance, which may explain the large amount of noise observed in the working auditory nerve. PMID:24045830
Summary of results of January climate simulations with the GISS coarse-mesh model
NASA Technical Reports Server (NTRS)
Spar, J.; Cohen, C.; Wu, P.
1981-01-01
The large-scale climates generated by extended runs of the model are relatively independent of the initial atmospheric conditions if the first few months of each simulation are discarded. The perpetual January simulations with a specified SST field produced excessive snow accumulation over the continents of the Northern Hemisphere. Mass exchanges between the cold (warm) continents and the warm (cold) adjacent oceans produced significant surface pressure changes over the oceans as well as over the land. The effect of terrain and terrain elevation on the amount of precipitation was examined. The evaporation of continental moisture was calculated to cause large increases in precipitation over the continents.
Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach
Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.
2007-01-01
Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408
Big Data Application in Biomedical Research and Health Care: A Literature Review.
Luo, Jake; Wu, Min; Gopukumar, Deepika; Zhao, Yiqing
2016-01-01
Big data technologies are increasingly used for biomedical and health-care informatics research. Large amounts of biological and clinical data have been generated and collected at an unprecedented speed and scale. For example, the new generation of sequencing technologies enables the processing of billions of DNA sequence data per day, and the application of electronic health records (EHRs) is documenting large amounts of patient data. The cost of acquiring and analyzing biomedical data is expected to decrease dramatically with the help of technology upgrades, such as the emergence of new sequencing machines, the development of novel hardware and software for parallel computing, and the extensive expansion of EHRs. Big data applications present new opportunities to discover new knowledge and create novel methods to improve the quality of health care. The application of big data in health care is a fast-growing field, with many new discoveries and methodologies published in the last five years. In this paper, we review and discuss big data application in four major biomedical subdisciplines: (1) bioinformatics, (2) clinical informatics, (3) imaging informatics, and (4) public health informatics. Specifically, in bioinformatics, high-throughput experiments facilitate the research of new genome-wide association studies of diseases, and with clinical informatics, the clinical field benefits from the vast amount of collected patient data for making intelligent decisions. Imaging informatics is now more rapidly integrated with cloud platforms to share medical image data and workflows, and public health informatics leverages big data techniques for predicting and monitoring infectious disease outbreaks, such as Ebola. In this paper, we review the recent progress and breakthroughs of big data applications in these health-care domains and summarize the challenges, gaps, and opportunities to improve and advance big data applications in health care.
NASA Astrophysics Data System (ADS)
Newcomer, Adam
Increasing demand for electricity and an aging fleet of generators are the principal drivers behind an increasing need for large capital investments in the US electric power sector in the near term. The decisions (or lack thereof) by firms, regulators, and policy makers in response to this challenge have long-lasting consequences, incur large economic and environmental risks, and must be made despite large uncertainties about the future operating and business environment. Capital investment decisions are complex: rates of return are not guaranteed; significant uncertainties about future environmental legislation and regulations exist at both the state and national levels - particularly about carbon dioxide emissions; there is an increasing number of shareholder mandates requiring public utilities to reduce their exposure to potentially large losses from stricter environmental regulations; and there are significant concerns about electricity and fuel price levels, supplies, and security. Large-scale, low-carbon electricity generation facilities using coal, such as integrated gasification combined cycle (IGCC) facilities coupled with carbon capture and sequestration (CCS) technologies, have been technically proven but are unprofitable in the current regulatory and business environment, where there is no explicit or implicit price on carbon dioxide emissions. The paper examines two separate scenarios that are actively discussed by policy and decision makers at corporate, state, and national levels: a future US electricity system where coal plays a role, and one where the role of coal is limited or nonexistent. The thesis intends to provide guidance for firms and policy makers and outline applications and opportunities for public policies and for private investment decisions to limit the financial risks of electricity generation capital investments under carbon constraints.
Preparation of Small RNAs Using Rolling Circle Transcription and Site-Specific RNA Disconnection.
Wang, Xingyu; Li, Can; Gao, Xiaomeng; Wang, Jing; Liang, Xingguo
2015-01-13
A facile and robust RNA preparation protocol was developed by combining rolling circle transcription (RCT) with RNA cleavage by RNase H. Circular DNA with a complementary sequence was used as the template for promoter-free transcription. With the aid of a 2'-O-methylated DNA, the RCT-generated tandem repeats of the desired RNA sequence were disconnected at the exact end-to-end position to harvest the desired RNA oligomers. Compared with the template DNA, more than 4 × 10³ times the amount of small RNA products was obtained when modest cleavage was carried out during transcription. Large amounts of RNA oligomers could easily be obtained by simply increasing the reaction volume.
Simultaneous generation and scattering of internal tides by ocean floor topography
NASA Astrophysics Data System (ADS)
Mathur, Manikandan
2015-11-01
Internal waves play a significant role in the global energy budget of the ocean, with internal tides potentially contributing to the conversion of a large amount of mechanical energy into heat in the deep ocean. Several studies in the past decade have investigated internal tide generation and internal tide scattering by ocean floor topography, but by treating them as two separate, independent processes. In this talk, we use the recently developed Green function model (Mathur et al., J. Geophys. Res. Oceans, 119, 2165-2182, 2014), sans the WKB approximation, to quantify the extent to which internal tide generation (scattering) that results from barotropic (baroclinic) forcing on small- and large-scale topography in uniform and nonuniform stratifications is modified by the presence of a background baroclinic (barotropic) tide. Results on idealized topography, stratification and forcing will first be presented, followed by a discussion on the relevance of our studies in the real ocean scenario. The author thanks the Ministry of Earth Sciences, Government of India for financial support under the Monsoon Mission Grant MM/2014/IND-002.
Generation and management of waste electric vehicle batteries in China.
Xu, ChengJian; Zhang, Wenxuan; He, Wenzhi; Li, Guangming; Huang, Juwen; Zhu, Haochen
2017-09-01
With the increasing adoption of EVs (electric vehicles), a large number of waste EV LIBs (electric vehicle lithium-ion batteries) were generated in China. Statistics showed that the generation of waste EV LIBs in 2016 reached approximately 10,000 tons, and the amount is expected to grow rapidly in the future. In view of the deleterious effects of waste EV LIBs on the environment, and the valuable energy storage capacity or materials that can be reused in them, China has started emphasizing their management, reuse, and recycling. This paper presents the generation trend of waste EV LIBs and focuses on related management developments and experience in China. Based on the situation of waste EV LIB management in China, existing problems are analyzed and summarized. Some recommendations are made for decision-making organs to use as valuable references to improve the management of waste EV LIBs and promote the sustainable development of EVs.
Aging, mortality, and the fast growth trade-off of Schizosaccharomyces pombe
Nakaoka, Hidenori; Wakamoto, Yuichi
2017-01-01
Replicative aging has been demonstrated in asymmetrically dividing unicellular organisms, seemingly caused by unequal damage partitioning. Although asymmetric segregation and inheritance of potential aging factors also occur in symmetrically dividing species, it nevertheless remains controversial whether this results in aging. Based on large-scale single-cell lineage data obtained by time-lapse microscopy with a microfluidic device, in this report, we demonstrate the absence of replicative aging in old-pole cell lineages of Schizosaccharomyces pombe cultured under constant favorable conditions. By monitoring more than 1,500 cell lineages in 7 different culture conditions, we showed that both cell division and death rates are remarkably constant for at least 50–80 generations. Our measurements revealed that the death rate per cellular generation increases with the division rate, pointing to a physiological trade-off with fast growth under balanced growth conditions. We also observed the formation and inheritance of Hsp104-associated protein aggregates, which are a potential aging factor in old-pole cell lineages, and found that these aggregates exhibited a tendency to preferentially remain at the old poles for several generations. However, the aggregates were eventually segregated from old-pole cells upon cell division and probabilistically allocated to new-pole cells. We found that cell deaths were typically preceded by sudden acceleration of protein aggregation; thus, a relatively large amount of protein aggregates existed at the very ends of the dead cell lineages. Our lineage tracking analyses, however, revealed that the quantity and inheritance of protein aggregates increased neither cellular generation time nor cell death initiation rates. Furthermore, our results demonstrated that unusually large amounts of protein aggregates induced by oxidative stress exposure did not result in aging; old-pole cells resumed normal growth upon stress removal, despite the fact that most of them inherited significant quantities of aggregates. These results collectively indicate that protein aggregates are not a major determinant of triggering cell death in S. pombe and thus cannot be an appropriate molecular marker or index for replicative aging under both favorable and stressful environmental conditions. PMID:28632741
NASA Astrophysics Data System (ADS)
Banjo, Daisuke; Tamura, Hiroyuki; Murata, Tadahiko
In this paper, we propose a method of determining the pension in the generation-based funding scheme. In this proposal, we include two types of pensions in the scheme. One is the payment-amount related pension and the other is the payment-frequency related pension. We set the ratio of the total amount of payment-amount related pension to the total amount of both pensions, and simulate income gaps and the relationship between contributions and benefits for each individual when the proposed method is applied.
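A toy illustration of how a fixed ratio could split a pension budget between the two pension types (the allocation rule and all names are assumptions made for illustration; the paper defines the actual scheme within its simulation):

    def pensions(total_budget, r, paid_amounts, paid_counts):
        """Split the budget: a fraction r is distributed in proportion to
        each individual's contributed amount, the rest in proportion to
        how many payments each individual made (illustrative rule)."""
        amount_pool = r * total_budget
        frequency_pool = (1.0 - r) * total_budget
        by_amount = [a / sum(paid_amounts) * amount_pool for a in paid_amounts]
        by_count = [c / sum(paid_counts) * frequency_pool for c in paid_counts]
        return [a + f for a, f in zip(by_amount, by_count)]

    # two contributors with equal payment counts but unequal contributions:
    # r = 0.5 narrows the benefit gap relative to a purely amount-based rule
    print(pensions(1000.0, r=0.5, paid_amounts=[200, 800], paid_counts=[40, 40]))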
Harada, Yoichiro; Masahara-Negishi, Yuki; Suzuki, Tadashi
2015-11-01
During asparagine (N)-linked protein glycosylation, eukaryotic cells generate considerable amounts of free oligosaccharides (fOSs) in the cytosol. It is generally assumed that such fOSs are produced by the deglycosylation of misfolded N-glycoproteins that are destined for proteasomal degradation, or as the result of the degradation of dolichol-linked oligosaccharides (DLOs), which serve as glycan donor substrates in N-glycosylation reactions. The findings reported herein show that the majority of cytosolic fOSs are generated by a pathway independent of peptide:N-glycanase (PNGase) and endo-β-N-acetylglucosaminidase (ENGase) in mammalian cells. The ablation of the cytosolic deglycosylating enzymes, PNGase and ENGase, in mouse embryonic fibroblasts had little effect on the amount of cytosolic fOSs generated. Quantitative analyses of fOSs using digitonin-permeabilized cells revealed that they are generated by the degradation of fully assembled Glc3Man9GlcNAc2-pyrophosphate-dolichol (PP-Dol) in the lumen of the endoplasmic reticulum. Because the degradation of Glc3Man9GlcNAc2-PP-Dol is greatly inhibited in the presence of an N-glycosylation acceptor peptide that is recognized by the oligosaccharyltransferase (OST), the OST-mediated hydrolysis of DLO is the most likely mechanism responsible for the production of a large fraction of the cytosolic fOSs.
Mahmud, Mufti; Vassanelli, Stefano
2016-01-01
In recent years, multichannel neuronal signal acquisition systems have allowed scientists to focus on research questions which were otherwise impossible. They act as a powerful means to study brain (dys)functions in in vivo and in vitro animal models. Typically, each session of electrophysiological experiments with multichannel data acquisition systems generates a large amount of raw data. For example, a 128 channel signal acquisition system with 16 bits A/D conversion and 20 kHz sampling rate will generate approximately 17 GB data per hour (uncompressed). This poses an important and challenging problem of inferring conclusions from the large amounts of acquired data. Thus, automated signal processing and analysis tools are becoming a key component in neuroscience research, facilitating extraction of relevant information from neuronal recordings in a reasonable time. The purpose of this review is to introduce the reader to the current state of the art in open-source packages for (semi)automated processing and analysis of multichannel extracellular neuronal signals (i.e., neuronal spikes, local field potentials, electroencephalogram, etc.), and the existing Neuroinformatics infrastructure for tool and data sharing. The review is concluded by pinpointing some major challenges that are being faced, which include the development of novel benchmarking techniques, cloud-based distributed processing and analysis tools, as well as defining novel means to share and standardize data. PMID:27313507
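The quoted data rate can be checked with a two-line calculation; the ~17 GB figure works out if it is read as binary gigabytes (GiB).

```python
# Back-of-the-envelope check of the data-rate figure quoted above:
# 128 channels x 16-bit (2-byte) samples x 20 kHz, uncompressed, for one hour.
channels, bytes_per_sample, rate_hz = 128, 2, 20_000
bytes_per_hour = channels * bytes_per_sample * rate_hz * 3600
print(bytes_per_hour / 2**30)  # ~17.2 GiB/hour, matching the ~17 GB cited
```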
Estimation of global plastic loads delivered by rivers into the sea
NASA Astrophysics Data System (ADS)
Schmidt, Christian; Krauth, Tobias; Klöckner, Phillipp; Römer, Melina-Sophie; Stier, Britta; Reemtsma, Thorsten; Wagner, Stephan
2017-04-01
A considerable fraction of marine plastic debris likely originates from land-based sources. Transport of plastics by rivers is a potential mechanism that connects plastic debris generated on land with the marine environment. We analyze existing and experimental data on plastic loads in rivers and relate these to the amount of mismanaged plastic waste (MMPW) generated in the river catchments. We find a positive relationship between the plastic load in rivers and the amount of MMPW. Using our empirical MMPW-river plastic load relationship, we estimated the annual plastic load for 1,494 rivers with an outlet to the sea, ranging from small first-order streams to large rivers. We estimate that the global load of plastic debris delivered by rivers to the sea is 39,000 tons per year, with a wide 95% prediction interval ranging from 247 tons per year to 16.7 million tons per year. Our best estimate is considerably lower than the estimated total land-based inputs, which range between 4.8 and 12.7 million tons annually (Jambeck et al. 2015). Approximately 75% of the total load is transported by the 10 top-ranked rivers, which are predominantly located in Asia. These river catchments encompass countries with large populations and high economic growth but insufficient waste infrastructure. Reducing the plastic loads in these rivers by 50% would reduce the global inputs by 37%. Of the total MMPW generated within river catchments, only a small fraction of about 0.05% has been found to be mobile in rivers. Thus, either only a small fraction of MMPW enters the river systems, or a substantial fraction of plastic debris accumulates in river systems worldwide. References: Jambeck, J. R., R. Geyer, C. Wilcox, T. R. Siegler, M. Perryman, A. Andrady, R. Narayan, and K. L. Law (2015), Plastic waste inputs from land into the ocean, Science, 347(6223), 768-771, doi:10.1126/science.1260352.
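The empirical load-MMPW relationship described above is, in spirit, a regression between two quantities spanning several orders of magnitude; a common way to fit such a relationship is a power law estimated in log-log space. The sketch below illustrates that idea only; the sample numbers are made up and the functional form is an assumption, not the paper's published fit.

```python
import numpy as np

# Hypothetical sketch of an empirical fit of the kind described above:
# a power law load = a * MMPW^b, estimated by linear regression in log space.
# The data points below are invented for illustration only.
mmpw = np.array([1e2, 1e3, 1e4, 1e5, 1e6])      # mismanaged plastic waste [t/yr]
load = np.array([0.01, 0.2, 1.5, 30.0, 400.0])  # observed river plastic load [t/yr]

b, log_a = np.polyfit(np.log10(mmpw), np.log10(load), 1)  # slope, intercept
predict = lambda m: 10**log_a * m**b
print(f"load ~ {10**log_a:.2e} * MMPW^{b:.2f}; "
      f"predicted at 1e7 t/yr: {predict(1e7):.0f} t/yr")
```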
NASA Astrophysics Data System (ADS)
Iyer, Karthik; Schmid, Daniel W.; Planke, Sverre; Millett, John
2017-06-01
Vent structures are intimately associated with sill intrusions in sedimentary basins globally and are thought to have formed contemporaneously, due to overpressure from gas generation during thermogenic breakdown of kerogen or boiling of water. Methane and other gases generated during this process may have driven catastrophic climate change in the geological past. In this study, we present a 2D FEM/FVM model that accounts for 'explosive' vent formation by fracturing of the host rock, based on a case study in the Harstad Basin, offshore Norway. Overpressure generated by gas release during kerogen breakdown in the sill thermal aureole causes fracture formation. Fluid focusing and overpressure migration towards the sill tips results in vent formation after only a few tens of years. The size of the vent depends on the region of overpressure accessed by the sill tip. Overpressure migration occurs in self-propagating waves before dissipating at the surface. The amount of methane generated in the system depends on the TOC content and also on the type of kerogen present in the host rock. Generated methane moves with the fluids and vents at the surface through a single, large vent structure at the main sill tip, matching first-order observations. Violent degassing takes place within the first couple of hundred years and occurs in bursts corresponding to the timing of overpressure waves. The amount of methane vented through a single vent is only a fraction (between 5 and 16%) of the methane generated at depth. Upscaling to the Vøring and Møre Basins, which are part of the North Atlantic Igneous Province (NAIP), and using realistic host rock carbon content and kerogen values results in a smaller amount of methane vented than previously estimated for the PETM. Our study, therefore, suggests that the negative carbon isotope excursion (CIE) observed in the fossil record could not have been caused by intrusions within the Vøring and Møre Basins alone, and that a contribution from other regions in the NAIP is also required to drive catastrophic climate change.
Knowledge sharing and collaboration in translational research, and the DC-THERA Directory
Gündel, Michaela; Austyn, Jonathan M.; Cavalieri, Duccio; Scognamiglio, Ciro; Brandizi, Marco
2011-01-01
Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of ‘big data’ research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information from collaborating researchers and from the wider community. Whenever research projects have a defined temporal scope, an additional issue of data management arises, namely how the knowledge generated within the project can be made available beyond its boundaries and life-time. DC-THERA is a European ‘Network of Excellence’ (NoE) that spawned a very large collaborative and interdisciplinary research community, focusing on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, which is an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science. PMID:21969471
Vianna, Juliana A.; Noll, Daly; Mura-Jornet, Isidora; Valenzuela-Guerra, Paulina; González-Acuña, Daniel; Navarro, Cristell; Loyola, David E.; Dantas, Gisele P. M.
2017-01-01
Microsatellites are valuable molecular markers for evolutionary and ecological studies. Next generation sequencing is responsible for the increasing number of microsatellites available for non-model species. The genus Pygoscelis comprises three penguin species: Adélie (P. adeliae), Chinstrap (P. antarcticus) and Gentoo (P. papua), all distributed around Antarctica and the sub-Antarctic. The species have been affected differently by climate change, and the use of microsatellite markers will be crucial for monitoring population dynamics. We characterized a large set of genome-wide microsatellites and evaluated polymorphisms in all three species. SOLiD reads were generated from libraries of each species, identifying a large number of microsatellite loci: 33,677, 35,265 and 42,057 for P. adeliae, P. antarcticus and P. papua, respectively. A large number of dinucleotide (66,139), trinucleotide (29,490) and tetranucleotide (11,849) microsatellites are described. Microsatellite abundance, diversity and orthology were characterized in penguin genomes. We evaluated polymorphisms in 170 tetranucleotide loci, obtaining 34 polymorphic loci in at least one species and 15 polymorphic loci in all three species, which allows comparative studies to be performed. The polymorphic markers presented here enable a range of ecological, population, individual identification, parentage and evolutionary studies of Pygoscelis, with potential use in other penguin species. PMID:28898354
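At its core, microsatellite discovery of the di-/tri-/tetranucleotide kind counted above amounts to scanning sequences for short tandem repeats. The sketch below shows the basic idea with a back-referencing regular expression; it is a toy illustration under assumed parameters, not the SOLiD analysis pipeline used in the study.

```python
import re

# Minimal sketch (not the authors' pipeline): locate perfect di-, tri- and
# tetranucleotide tandem repeats with a back-referencing regex.
def find_microsatellites(seq, unit_len, min_repeats=5):
    pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit_len, min_repeats - 1))
    return [(m.start(), m.group()) for m in pattern.finditer(seq)]

seq = "TTAGACACACACACACAGGTCTAGCTAGCTAGCTAGCTAGAAA"
print(find_microsatellites(seq, 2))                  # dinucleotide (AC)n repeat
print(find_microsatellites(seq, 4, min_repeats=4))   # tetranucleotide (CTAG)n repeat
```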
Osmotic generation of 'anomalous' fluid pressures in geological environments
Neuzii, C.E.
2000-01-01
Osmotic pressures are generated by differences in chemical potential of a solution across a membrane. But whether osmosis can have a significant effect on the pressure of fluids in geological environments has been controversial, because the membrane properties of geological media are poorly understood. 'Anomalous' pressures - large departures from hydrostatic pressure that are not explicable in terms of topographic or fluid-density effects - are widely found in geological settings, and are commonly considered to result from processes that alter the pore or fluid volume, which in turn implies crustal changes happening at a rate too slow to observe directly. Yet if osmosis can explain some anomalies, there is no need to invoke such dynamic geological processes in those cases. Here I report results of a nine-year in situ measurement of fluid pressures and solute concentrations in shale that are consistent with the generation of large (up to 20 MPa) osmotic-pressure anomalies which could persist for tens of millions of years. Osmotic pressures of this magnitude and duration can explain many of the pressure anomalies observed in geological settings. They require, however, small shale porosity and large contrasts in the amount of dissolved solids in the pore waters - criteria that may help to distinguish between osmotic and crustal-dynamic origins of anomalous pressures.
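For orientation, the 20 MPa magnitude can be sanity-checked with the ideal van 't Hoff relation. This is a simplification (ideal solution, perfect membrane) and gives an upper bound; the assumed temperature and concentration contrast below are illustrative, not values from the paper.

```python
# Order-of-magnitude check using the ideal van 't Hoff relation: pi = i * R * T * dc.
# Ideal-membrane assumption; real shale membranes are leaky, so this is an upper bound.
R, T = 8.314, 285.0        # J/(mol K); ~12 C subsurface temperature (assumed)
i = 2                      # van 't Hoff factor for fully dissociated NaCl
dc = 4.2e3                 # mol/m^3 (~4.2 M) concentration contrast, assumed
pi_pa = i * R * T * dc
print(pi_pa / 1e6, "MPa")  # ~20 MPa, the magnitude of the reported anomalies
```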
Tool Support for Parametric Analysis of Large Software Simulation Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
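The n-factor combinatorial idea mentioned above limits test cases by covering every small combination of parameter values rather than the full cross product. The sketch below only enumerates the pairwise (n=2) interactions that such a design must cover; building a minimal covering array is a harder problem and is out of scope here, and the parameter names and values are invented.

```python
from itertools import combinations, product

# Sketch of n-factor (here n=2, pairwise) combinatorial parameter variation:
# enumerate every pairwise combination of settings that a test suite must cover,
# instead of running the exhaustive cross product. Parameters are invented.
params = {"mass": [0.9, 1.0, 1.1], "thrust": ["low", "high"], "wind": [0, 5, 10]}

pairs = set()
for p1, p2 in combinations(params, 2):
    for v1, v2 in product(params[p1], params[p2]):
        pairs.add(((p1, v1), (p2, v2)))

print(len(pairs), "pairwise interactions to cover vs",
      len(list(product(*params.values()))), "exhaustive cases")
```

With more parameters the exhaustive count grows multiplicatively while a pairwise-covering suite grows far more slowly, which is the motivation stated in the abstract.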
Dynamic Droop–Based Inertial Control of a Doubly-Fed Induction Generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Min; Muljadi, Eduard; Park, Jung-Wook
2016-07-01
If a large disturbance occurs in a power grid, two auxiliary loops for the inertial control of a wind turbine generator have been used: the droop loop and the rate of change of frequency (ROCOF) loop. Because their gains are fixed, difficulties arise in determining values suitable for all grid and wind conditions. This paper proposes a dynamic droop-based inertial control scheme for a doubly-fed induction generator (DFIG). The scheme aims to improve the frequency nadir (FN) and ensure stable operation of a DFIG. To achieve the first goal, the scheme uses a droop loop, but it dynamically changes its gain based on the ROCOF to release a large amount of kinetic energy during the initial stage of a disturbance. To do this, a shaping function that relates the droop to the ROCOF is used. To achieve the second goal, different shaping functions, which depend on rotor speeds, are used to give a large contribution in high wind conditions and prevent over-deceleration in low wind conditions during inertial control. The performance of the proposed scheme was investigated under various wind conditions using an EMTP-RV simulator. The results indicate that the scheme improves the FN and ensures stable operation of a DFIG.
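A conceptual sketch of the dynamic-droop idea may help: the droop gain is not a constant but a function of the ROCOF, and is further shaped by rotor speed so that low-wind operation is protected. The shaping functions and numbers below are invented placeholders, not the functions used in the paper.

```python
# Conceptual sketch of dynamic droop-based inertial control. The shaping
# functions and constants are hypothetical stand-ins, not the paper's design.
def droop_gain(rocof_hz_s, rotor_speed_pu):
    gain = min(40.0, 10.0 + 60.0 * abs(rocof_hz_s))  # larger gain for steeper frequency drops
    if rotor_speed_pu < 0.8:  # low wind: taper off to prevent over-deceleration
        gain *= (rotor_speed_pu - 0.7) / 0.1 if rotor_speed_pu > 0.7 else 0.0
    return gain

def inertial_power(freq_dev_hz, rocof_hz_s, rotor_speed_pu):
    # Extra active power command (per-unit-ish) proportional to frequency deviation.
    return -droop_gain(rocof_hz_s, rotor_speed_pu) * freq_dev_hz

print(inertial_power(-0.3, -0.5, 1.00))  # high wind, steep event -> large support
print(inertial_power(-0.3, -0.5, 0.75))  # low wind -> contribution scaled back
```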
Gadkar, Vijay J; Filion, Martin
2013-06-01
In various experimental systems, limiting available amounts of RNA may prevent a researcher from performing large-scale analyses of gene transcripts. One way to circumvent this is to 'pre-amplify' the starting RNA/cDNA, so that sufficient amounts are available for any downstream analysis. In the present study, we report the development of a novel protocol for constructing amplified cDNA libraries using the Phi29 DNA polymerase based multiple displacement amplification (MDA) system. Using as little as 200 ng of total RNA, we developed a linear concatenation strategy to make the single-stranded cDNA template amenable for MDA. The concatenation, made possible by the template switching property of the reverse transcriptase enzyme, resulted in the amplified cDNA library with intact 5' ends. MDA generated micrograms of template, allowing large-scale polymerase chain reaction analyses or other large-scale downstream applications. As the amplified cDNA library contains intact 5' ends, it is also compatible with 5' RACE analyses of specific gene transcripts. Empirical validation of this protocol is demonstrated on a highly characterized (tomato) and an uncharacterized (corn gromwell) experimental system.
Effect of wave-current interactions on sediment resuspension in large shallow Lake Taihu, China.
Li, Yiping; Tang, Chunyan; Wang, Jianwei; Acharya, Kumud; Du, Wei; Gao, Xiaomeng; Luo, Liancong; Li, Huiyun; Dai, Shujun; Mercy, Jepkirui; Yu, Zhongbo; Pan, Baozhu
2017-02-01
The disturbance of the water-sediment interface by wind-driven currents and waves plays a critical role in sediment resuspension and internal nutrient release in large, shallow lakes. This study analyzed the effects of the interactions between wind-induced currents and waves on the driving mechanism of sediment resuspension in Lake Taihu, the third largest freshwater lake in China, using acoustic and optic techniques to collect long-term, high-frequency, synchronous in situ measurements of wind, currents, waves, and suspended solid concentrations (SSCs). The results suggested that water turbidity started to increase at wind speeds of approximately 4 m/s and increased significantly when wind speeds exceeded 6 m/s. In most cases, wind-induced waves were the main energy source for changes in turbidity. Wave-generated shear stress contributed more than 95% of the forcing for sediment resuspension; only in weak wind conditions (<4 m/s) did the lake-bottom shear stresses generated by currents and waves contribute equally. The relationship between SSC and the wave-generated bottom shear stress τw was established by fitting the observed results. The processes of sediment dynamics were divided into four stages (A through D) according to three shear-stress thresholds. In stage A, SSC remained stable (about 45 mg/L) and τw was less than 0.02 N/m². In stage B, the sediment bed was starting to be activated (SSC 45-60 mg/L) and τw was in the range 0.02-0.07 N/m². In stage C, a medium amount of sediment was suspended (SSC 60-150 mg/L) and τw ranged from 0.07 to 0.3 N/m². In stage D, a large amount of sediment was suspended (SSC 150-300 mg/L) and τw was larger than 0.3 N/m². The findings of this paper reveal the driving mechanism of sediment resuspension, which may further help to evaluate internal nutrient release in large, shallow Lake Taihu.
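The four stages and three thresholds reported above translate directly into a simple classifier, sketched below using exactly the τw bounds from the abstract.

```python
# Stage classifier implied by the reported thresholds (tau_w in N/m^2).
def resuspension_stage(tau_w):
    if tau_w < 0.02:
        return "A: stable bed (SSC ~45 mg/L)"
    if tau_w < 0.07:
        return "B: bed activation (SSC 45-60 mg/L)"
    if tau_w < 0.3:
        return "C: medium resuspension (SSC 60-150 mg/L)"
    return "D: large resuspension (SSC 150-300 mg/L)"

for tau in (0.01, 0.05, 0.2, 0.5):
    print(tau, "->", resuspension_stage(tau))
```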
Committed CO2 Emissions of China's Coal-fired Power Plants
NASA Astrophysics Data System (ADS)
Suqin, J.
2016-12-01
The extent of global warming is determined by the cumulative effects of CO2 in the atmosphere. Coal-fired power plants, the largest anthropogenic source of CO2 emissions, produce large amounts of CO2 over their lifetimes of operation (committed emissions), which thus influence the future carbon emission space under specific climate mitigation targets (e.g., the 2 degree warming limit relative to pre-industrial levels). A comprehensive understanding of committed CO2 emissions from coal-fired power generators is urgently needed for mitigating global climate change, especially in China, the largest global CO2 emitter. We calculated China's committed CO2 emissions from coal-fired power generators installed during 1993-2013 and evaluated their impact on future emission space at the provincial level, using locally specific data on newly installed capacities. The committed CO2 emissions are calculated as the product of the annual coal consumption of newly installed capacities, emission factors (CO2 emissions per unit crude coal consumption) and expected lifetimes. Sensitivities to generator lifetimes and the drivers of provincial committed emissions are also analyzed. Our results show that these relatively recently installed coal-fired power generators will lead to 106 Gt of CO2 emissions over the course of their lifetimes, which is more than three times the global CO2 emissions from fossil fuels in 2010. More than 80% (85 Gt) of their total committed CO2 will be emitted after 2013; these are referred to as the remaining emissions. Due to uncertainties in generator lifetimes, the remaining emissions would increase by 45 Gt if the lifetimes of China's coal-fired power generators were prolonged by 15 years. Furthermore, the remaining emissions differ greatly among provinces owing to local development and policy disparities. Provinces with large amounts of secondary industry and abundant coal reserves have higher committed emissions. National and provincial CO2 emission mitigation objectives might be greatly restricted by existing and planned power plants in China. The policy implications of our results are also discussed.
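The bookkeeping stated in the abstract (committed emissions = annual coal consumption × emission factor × expected lifetime) is easy to reproduce in miniature. The example plant's numbers below are invented; the study itself used plant-level Chinese data.

```python
# Committed-emissions bookkeeping as defined in the abstract:
# emissions = annual coal consumption x emission factor x lifetime.
# Example plant values are invented for illustration.
def committed_co2(annual_coal_t, ef_tco2_per_tcoal, lifetime_yr, years_operated=0):
    total = annual_coal_t * ef_tco2_per_tcoal * lifetime_yr
    remaining = annual_coal_t * ef_tco2_per_tcoal * max(lifetime_yr - years_operated, 0)
    return total, remaining

total, remaining = committed_co2(annual_coal_t=2.5e6, ef_tco2_per_tcoal=1.9,
                                 lifetime_yr=40, years_operated=10)
print(f"lifetime: {total/1e6:.0f} Mt CO2, remaining: {remaining/1e6:.0f} Mt CO2")
```

Summing `remaining` over a plant fleet, and re-running with `lifetime_yr` extended by 15 years, mirrors the sensitivity analysis described above.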
[Contribution and challenges of Big Data in oncology].
Saintigny, Pierre; Foy, Jean-Philippe; Ferrari, Anthony; Cassier, Philippe; Viari, Alain; Puisieux, Alain
2017-03-01
Since the first draft of the human genome sequence was published in 2001, the cost of sequencing has dramatically decreased. The development of new technologies such as next generation sequencing has led to a comprehensive characterization of a large number of tumors of various types, as well as to significant advances in precision medicine. Despite the valuable information this technological revolution has allowed to be produced, the vast amount of data generated has resulted in the emergence of new challenges for the biomedical community, such as data storage, processing and mining. Here, we describe the contribution and challenges of Big Data in oncology. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.
Single-friction-surface triboelectric generator with human body conduit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Bo; Cheng, Xiaoliang; Zhang, Xiaosheng
2014-03-10
We present a transparent single-friction-surface triboelectric generator (STEG) employing the human body as the conduit, making the applications of STEGs in portable electronics much more practical and leading to a significant output improvement. The STEG with a micro-patterned polydimethylsiloxane surface achieved an output voltage of over 200 V with a current density of 4.7 μA/cm². With the human body conduit, the output current increased by 39% and the amount of charge transferred increased by 34% compared to the results with a grounded electrode. Larger increments of 210% and 81% were obtained in the case of an STEG with a large-size flat polyethylene terephthalate surface.
Atomic cobalt on nitrogen-doped graphene for hydrogen generation
Fei, Huilong; Dong, Juncai; Arellano-Jiménez, M. Josefina; Ye, Gonglan; Dong Kim, Nam; Samuel, Errol L.G.; Peng, Zhiwei; Zhu, Zhuan; Qin, Fan; Bao, Jiming; Yacaman, Miguel Jose; Ajayan, Pulickel M.; Chen, Dongliang; Tour, James M.
2015-01-01
Reduction of water to hydrogen through electrocatalysis holds great promise for clean energy, but its large-scale application relies on the development of inexpensive and efficient catalysts to replace precious platinum catalysts. Here we report an electrocatalyst for hydrogen generation based on very small amounts of cobalt dispersed as individual atoms on nitrogen-doped graphene. This catalyst is robust and highly active in aqueous media with very low overpotentials (30 mV). A variety of analytical techniques and electrochemical measurements suggest that the catalytically active sites are associated with the metal centres coordinated to nitrogen. This unusual atomic constitution of supported metals is suggestive of a new approach to preparing extremely efficient single-atom catalysts. PMID:26487368
The Requirements Generation System: A tool for managing mission requirements
NASA Technical Reports Server (NTRS)
Sheppard, Sylvia B.
1994-01-01
Historically, NASA's cost for developing mission requirements has been a significant part of a mission's budget. Large amounts of time have been allocated in mission schedules for the development and review of requirements by the many groups who are associated with a mission. Additionally, tracing requirements from a current document to a parent document has been time-consuming and costly. The Requirements Generation System (RGS) is a computer-supported cooperative-work tool that assists mission developers in the online creation, review, editing, tracing, and approval of mission requirements as well as in the production of requirements documents. This paper describes the RGS and discusses some lessons learned during its development.
Li, Ben; Li, Yunxiao; Qin, Zhaohui S
2017-06-01
Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature - the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available and should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrate the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
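The over-correction phenomenon the abstract targets can be seen in a toy normal-normal hierarchical model: shrinking every feature toward the shared mean pulls a genuine outlier feature far from its true value. The sketch below illustrates that effect only; it is not the adaptiveHM algorithm, and the variance components are given oracle values for simplicity.

```python
import numpy as np

# Toy illustration of shrinkage (and over-correction) in a normal-normal
# hierarchical model. Not the adaptiveHM method; oracle variances are assumed.
rng = np.random.default_rng(0)
true_means = np.append(rng.normal(0, 1, 99), 8.0)  # one genuine outlier feature
obs = rng.normal(true_means, 2.0)                  # noisy per-feature observations

tau2, sigma2 = np.var(true_means), 4.0             # between-/within-feature variances
w = tau2 / (tau2 + sigma2)                         # shrinkage weight toward grand mean
posterior = w * obs + (1 - w) * obs.mean()

print("outlier: obs %.2f -> shrunk %.2f (true 8.0)" % (obs[-1], posterior[-1]))
```

The outlier's estimate is dragged toward the bulk of the features, which is exactly the kind of over-correction that borrowing external or historical information is meant to counteract.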
The use of satellite data to determine the distribution of ozone in the troposphere
NASA Technical Reports Server (NTRS)
Fishman, Jack; Watson, Catherine E.; Brackett, Vincent G.; Fakhruzzaman, Khan; Veiga, Robert E.
1991-01-01
Measurements from two independent satellite data sets have been used to derive the climatology of the integrated amount of ozone in the troposphere. These data have led to the finding that large amounts of ozone pollution are generated by anthropogenic activity originating from both the industrialized regions of the Northern Hemisphere and from the southern tropical regions of Africa. To verify the existence of this ozone anomaly at low latitudes, an ozonesonde capability has been established at Ascension Island (8 deg S, 15 deg W) since July 1990. According to the satellite analyses, Ascension Island is located downwind of the primary source region of this ozone pollution, which likely results from the photochemical oxidation of emissions emanating from the widespread burning of savannas and other biomass. These in situ measurements confirm the existence of large amounts of ozone in the lower atmosphere. A summary of these ozonesonde data to date will be presented. In addition, we will present some ozone profile measurements from SAGE II which can be used to provide upper tropospheric ozone measurements directly in the tropical troposphere. A preliminary comparison between the satellite observations and the ozonesonde profiles in the upper troposphere and lower stratosphere will also be presented.
Wang, QuanQiu; Li, Li; Xu, Rong
2018-04-18
Colorectal cancer (CRC) is the second leading cause of cancer-related deaths. It is estimated that about half the cases of CRC occurring today are preventable. Recent studies showed that human gut microbiota and their collective metabolic outputs play important roles in CRC. However, the mechanisms by which human gut microbial metabolites interact with host genetics in contributing to CRC remain largely unknown. We hypothesize that computational approaches that integrate and analyze vast amounts of publicly available biomedical data have great potential for better understanding how human gut microbial metabolites are mechanistically involved in CRC. Leveraging vast amounts of publicly available data, we developed a computational algorithm to predict human gut microbial metabolites for CRC. We validated the prediction algorithm by showing that previously known CRC-associated gut microbial metabolites ranked highly (mean ranking: top 10.52%; median ranking: 6.29%; p-value: 3.85E-16). Moreover, we identified new gut microbial metabolites likely associated with CRC. Through computational analysis, we propose potential roles for tartaric acid, the top-ranked metabolite, in CRC etiology. In summary, our data-driven computational study generated a large number of associations that could serve as a starting point for further experiments to refute or validate these microbial metabolite associations in CRC.
On performing semantic queries in small devices
NASA Astrophysics Data System (ADS)
Costea, C.; Petrovan, A.; Neamţ, L.; Chiver, O.
2016-08-01
Sensors have a well-defined role in controlling or monitoring industrial processes; the data they provide can yield valuable information about the trends of the systems to which they belong, but storing a large volume of data for later offline analysis is not always practical. One solution is online analysis, preferably as close as possible to the place where the data are generated (edge computing). The increasing amount of data generated by a growing number of devices connected to the Internet has pushed sensor data processing to the edge of the network, into a middle layer where smart entities should interoperate. The diversity of communication technologies has motivated the use of intermediate devices such as gateways in sensor networks, and for this reason the paper examines the functionality of a SPARQL endpoint on a Raspberry Pi device.
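As a small illustration of what answering a semantic query over sensor data involves, the sketch below builds an in-memory RDF graph with rdflib and runs a SPARQL SELECT against it. The vocabulary, sensor IDs, and threshold are invented; the paper's actual endpoint setup on the Raspberry Pi may differ.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Minimal sketch of answering a SPARQL query over sensor readings with rdflib,
# the sort of lightweight workload a Raspberry Pi-class gateway can serve.
# The vocabulary and data below are invented for illustration.
EX = Namespace("http://example.org/sensors#")
g = Graph()
for sid, value in [("t1", 21.5), ("t2", 47.0)]:
    s = EX[sid]
    g.add((s, RDF.type, EX.TemperatureSensor))
    g.add((s, EX.reading, Literal(value)))

q = """
PREFIX ex: <http://example.org/sensors#>
SELECT ?s ?v WHERE { ?s a ex:TemperatureSensor ; ex:reading ?v . FILTER(?v > 30) }
"""
for row in g.query(q):
    print(row.s, row.v)  # sensors whose reading exceeds the threshold
```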
Yamamoto, N
1996-10-01
Incubation of human vitamin D3-binding protein (Gc protein) with a mixture of immobilized beta-galactosidase and sialidase efficiently generated a potent macrophage activating factor, a protein with N-acetylgalactosamine as the remaining sugar. Stepwise incubation of Gc protein with immobilized beta-galactosidase and sialidase, and isolation of the intermediates with immobilized lectins, revealed that either sequence of hydrolysis of Gc glycoprotein by these glycosidases yields the macrophage-activating factor, implying that Gc protein carries a trisaccharide composed of N-acetylgalactosamine with dibranched galactose and sialic acid termini. A 3 hr incubation of mouse peritoneal macrophages with picomolar amounts of the enzymatically generated macrophage-activating factor (GcMAF) resulted in greatly enhanced phagocytic activity. Administration of a minute amount (10-50 pg/mouse) of GcMAF resulted in a seven- to nine-fold enhancement of the phagocytic activity of macrophages. Injection of sheep red blood cells (SRBC) along with GcMAF into mice produced a large number of anti-SRBC antibody-secreting splenic cells in 2-4 days.
Nanoparticle-Based Manipulation of Antigen-Presenting Cells for Cancer Immunotherapy.
Fang, Ronnie H; Kroll, Ashley V; Zhang, Liangfang
2015-11-04
Immunotherapeutic approaches for treating cancer overall have been receiving a considerable amount of interest due to the recent approval of several clinical formulations. Among the different modalities, anticancer vaccination acts by training the body to endogenously generate a response against tumor cells. However, despite the large amount of work that has gone into the development of such vaccines, the near absence of clinically approved formulations highlights the many challenges facing those working in the field. The generation of potent endogenous anticancer responses poses unique challenges due to the similarity between cancer cells and normal, healthy cells. As researchers continue to tackle the limited efficacy of vaccine formulations, fresh and novel approaches are being sought after to address many of the underlying problems. Here the application of nanoparticle technology towards the development of anticancer vaccines is discussed. Specifically, there is a focus on the benefits of using such strategies to manipulate antigen presenting cells (APCs), which are essential to the vaccination process, and how nanoparticle-based platforms can be rationally engineered to elicit appropriate downstream immune responses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Combustion Of Poultry-Derived Fuel in a CFBC
NASA Astrophysics Data System (ADS)
Jia, Lufei; Anthony, Edward J.
Poultry farming generates large quantities of waste. Current disposal practice is to spread the poultry wastes onto farmland as fertilizer. However, as factory farms for poultry grow both in number and size, the amount of poultry waste generated has increased significantly in recent years. In consequence, excessive application of poultry wastes on farmland is resulting in more and more contaminants entering the surface water. One of the options being considered is the use of poultry waste as a power plant fuel. Since poultry-derived fuel (PDF) is biomass, its co-firing will have the added advantage of reducing greenhouse gas emissions from power generation. To evaluate the combustion characteristics of co-firing PDF with coal, combustion tests of mixtures of coal and PDF were conducted in CanmetENERGY's pilot-scale CFBC. The goal of the tests was to verify that PDF can be co-fired with coal and, more importantly, that emissions from the combustion process are not adversely affected by the presence of PDF in the fuel feed. The test results were very promising and support the view that co-firing in an existing coal-fired CFBC is an effective method of utilizing this potential fuel, both resolving a potential waste disposal problem and reducing the amount of CO2 released by the boiler.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haugen, Carl C.; Forget, Benoit; Smith, Kord S.
Most high performance computing systems being deployed currently and envisioned for the future are based on heavy parallelism across many computational nodes and many concurrent cores. These heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. The constraint is even more significant in fully coupled multiphysics simulations, which require simulations of many physical phenomena to be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move towards on-the-fly nuclear data generation to reduce the memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work focuses on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection sampling based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated against the continuous analytic free gas model.
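Rejection sampling itself is a generic technique: draw a candidate from a simple proposal and accept it with probability proportional to the target kernel. The sketch below illustrates the mechanics with a toy Maxwellian-flux-shaped target; it is a stand-in for, not an implementation of, sampling from the graphite S(α,β) tables.

```python
import numpy as np

# Generic rejection-sampling sketch of the idea described above. The target
# below is a toy Maxwellian-flux shape in energy, NOT the S(alpha, beta) kernel.
rng = np.random.default_rng(1)

def target_pdf(E, kT=0.0253):      # unnormalized toy kernel, energies in eV
    return E * np.exp(-E / kT)

def sample_outgoing_energy(n, Emax=1.0, kT=0.0253):
    M = target_pdf(kT, kT)         # the toy target peaks at E = kT
    out = []
    while len(out) < n:
        E = rng.uniform(0.0, Emax)            # candidate from a uniform proposal
        if rng.uniform(0.0, M) < target_pdf(E, kT):
            out.append(E)                     # accept with prob f(E)/M
    return np.array(out)

samples = sample_outgoing_energy(10_000)
print(samples.mean(), "eV mean outgoing energy")  # ~2kT for this toy shape
```

The same accept/reject loop applies with a tabulated kernel as the target, at the cost of choosing a proposal and bound that keep the rejection rate manageable.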
ERIC Educational Resources Information Center
Bivens, Josh; García, Emma; Gould, Elise; Weiss, Elaine; Wilson, Valerie
2016-01-01
Nearly 7 years into the recovery from the Great Recession, two glaring problems remain in the U.S. economy. One is a significant slowdown in the growth of productivity (the amount of output and income generated in an average hour of work). The other is the destructive rise in income inequality in recent decades due largely to big corporations and…
New World Vistas: Air and Space Power for the 21st Century. Directed Energy Volume
1995-01-01
single-mode diode-pumped thulium-doped glass fiber laser. Full scale 5-10 watt devices have operated in the laboratory at overall efficiencies of 10... operating in the 900-950 nm range together with the development of ytterbium (Yb) doped laser crystals. The Yb ion generates roughly one third as much... mirror in the high power oscillator resonator. Since a potentially large amount of power is dissipated in the nonlinear medium, careful attention to
Ueda, Michiko; Mori, Kota; Matsubayashi, Tetsuya; Sawada, Yasuyuki
2017-09-01
A substantial amount of evidence indicates that news coverage of suicide deaths by celebrities is followed by an increase in suicide rates, suggesting copycat behavior. However, the underlying process by which celebrity status and media coverage lead to increases in subsequent suicides is still unclear. This study collected over 1 million individual messages ("tweets") posted on Twitter that were related to 26 prominent figures in Japan who died by suicide between 2010 and 2014, and investigated whether media reports on suicide deaths that generated a greater level of reaction from the public are likely to be followed by a larger increase in actual suicides. We also compared the number of Twitter posts and the number of media reports in newspapers and on television to understand whether the number of messages on Twitter in response to the deaths corresponds to the amount of coverage in the traditional media. Using daily data from Japan's national death registry between 2010 and 2014, our analysis found an increase in actual suicides only when suicide deaths generated a large reaction from Twitter users. In contrast, no discernible increase in suicide counts was observed when the analysis included suicide deaths in which Twitter users did not show much interest, even when these deaths were covered considerably by the traditional media. This study also found that suicides by relatively young entertainers generated a large number of posts on Twitter. This sharply contrasts with the relatively smaller volume of reaction to them generated by traditional forms of media, which focus more on the deaths of non-entertainers. The results of this study strongly suggest that it is not sufficient to examine only traditional news media when investigating the impact of media reports on actual suicides. Copyright © 2017 Elsevier Ltd. All rights reserved.
Spaceport Command and Control System Software Development
NASA Technical Reports Server (NTRS)
Glasser, Abraham
2017-01-01
The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This large system requires a large amount of intensive testing that will properly measure the capabilities of the system. Automating the test procedures would save the project money from human labor costs, as well as making the testing process more efficient. Therefore, the Exploration Systems Division (formerly the Electrical Engineering Division) at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.
Data-Rich Astronomy: Mining Sky Surveys with PhotoRApToR
NASA Astrophysics Data System (ADS)
Cavuoti, Stefano; Brescia, Massimo; Longo, Giuseppe
2014-05-01
In the last decade a new generation of telescopes and sensors has allowed the production of very large amounts of data, and astronomy has become a data-rich science. New automatic methods, largely based on machine learning, are needed to cope with this data tsunami. We present some results in the fields of photometric redshifts and galaxy classification, obtained using the MLPQNA algorithm available in DAMEWARE (Data Mining and Web Application Resource) for the SDSS galaxies (DR9 and DR10). We present PhotoRApToR (Photometric Research Application To Redshift): a Java-based desktop application capable of solving regression and classification problems, specialized for photo-z estimation.
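Photometric redshift estimation of the kind described is, at heart, supervised regression from broadband photometry to redshift. The sketch below shows the setup with a plain scikit-learn multilayer perceptron on simulated stand-in "magnitudes"; it illustrates the workflow only and is not the MLPQNA implementation in DAMEWARE.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Sketch of photometric-redshift regression: a generic MLP trained on fake
# 5-band photometry (stand-ins for SDSS ugriz), not the MLPQNA algorithm.
rng = np.random.default_rng(42)
z = rng.uniform(0.0, 0.6, 5000)  # simulated "spectroscopic" redshifts
mags = np.column_stack([18 + 4 * z + rng.normal(0, 0.2, 5000) for _ in range(5)])

X_tr, X_te, z_tr, z_te = train_test_split(mags, z, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, z_tr)
print("RMS photo-z error:", np.sqrt(np.mean((model.predict(X_te) - z_te) ** 2)))
```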
Using Planning, Scheduling and Execution for Autonomous Mars Rover Operations
NASA Technical Reports Server (NTRS)
Estlin, Tara A.; Gaines, Daniel M.; Chouinard, Caroline M.; Fisher, Forest W.; Castano, Rebecca; Judd, Michele J.; Nesnas, Issa A.
2006-01-01
With each new rover mission to Mars, rovers are traveling significantly longer distances. This increase in distance not only raises the opportunities for science data collection, but also amplifies the amount of environment and rover state uncertainty that must be handled in rover operations. This paper describes how planning, scheduling and execution techniques can be used onboard a rover to autonomously generate and execute rover activities and, in particular, to handle new science opportunities that have been identified dynamically. We also discuss some of the particular challenges we face in supporting autonomous rover decision-making. These include interaction with rover navigation and path-planning software and handling large amounts of uncertainty in state and resource estimations. Finally, we describe our experiences in testing this work using several Mars rover prototypes in a realistic environment.
Increase of the Spontaneous Mutation Rate in a Long-Term Experiment With Drosophila melanogaster
Ávila, Victoria; Chavarrías, David; Sánchez, Enrique; Manrique, Antonio; López-Fanjul, Carlos; García-Dorado, Aurora
2006-01-01
In a previous experiment, the effect of 255 generations of mutation accumulation (MA) on the second chromosome viability of Drosophila melanogaster was studied using 200 full-sib MA1 lines and a large C1 control, both derived from a genetically homogeneous base population. At generation 265, one of those MA1 lines was expanded to start 150 new full-sib MA2 lines and a new C2 large control. After 46 generations, the rate of decline in mean viability in MA2 was ∼2.5 times that estimated in MA1, while the average degree of dominance of mutations was small and nonsignificant by generation 40 and moderate by generation 80. In parallel, the inbreeding depression rate for viability and the amount of additive variance for two bristle traits in C2 were 2–3 times larger than those in C1. The results are consistent with a mutation rate in the line from which MA2 and C2 were derived about 2.5 times larger than that in MA1. The mean viability of C2 remained roughly similar to that of C1, but the rate of MA2 line extinction increased progressively, leading to mutational collapse, which can be ascribed to accelerated mutation and/or synergy after important deleterious accumulation. PMID:16547099
Design of a ``Digital Atlas Vme Electronics'' (DAVE) module
NASA Astrophysics Data System (ADS)
Goodrick, M.; Robinson, D.; Shaw, R.; Postranecky, M.; Warren, M.
2012-01-01
ATLAS-SCT has developed a new ATLAS trigger card, 'Digital Atlas Vme Electronics' (DAVE). The unit is designed to provide a versatile array of interface and logic resources, including a large FPGA. It interfaces to both VME bus and USB hosts. DAVE aims to provide exact ATLAS CTP (ATLAS Central Trigger Processor) functionality, with random trigger, simple and complex deadtime, ECR (Event Counter Reset), BCR (Bunch Counter Reset), etc. being generated to give exactly the same conditions in standalone running as experienced in combined runs. DAVE provides additional hardware and a large amount of free firmware resources to allow users to add or change functionality. The combination of the large number of individually programmable inputs and outputs in various formats, with very large external RAM and other components all connected to the FPGA, also makes DAVE a powerful and versatile FPGA utility card.
Combined Ankle-Foot Energetics are Conserved When Distal Foot Energy Absorption is Minimized.
Arch, Elisa S; Fylstra, Bretta L
2016-12-01
The large, late-stance energy generated by the ankle is believed to be critical during gait. However, the distal foot absorbs/dissipates a considerable amount of energy during the same phase. Thus, the energy generated by the combined ankle-foot system is more modest, which raises questions regarding the necessity of such a large ankle power and the interplay between foot and ankle energetics. This study aimed to evaluate our conservation-of-energy hypothesis, which predicted that if distal foot energy absorption/dissipation were reduced, then less energy would be generated at the ankle and the same combined ankle-foot energetics would be achieved. Motion analysis data were collected as healthy subjects walked under 2 conditions (Shoes, Footplate). In the Footplate condition, the shoe was replaced with a customized, rigid footplate with a rocker profile. In support of the hypothesis, there was significantly less positive ankle work and less negative distal foot work with footplate use, resulting in very similar combined ankle-foot work between conditions. These findings suggest that there is an interplay between the energy generated by the ankle and absorbed by the foot. This interplay should be considered when designing orthotic and prosthetic ankle-foot systems and rehabilitation programs for individuals with weakened ankle muscles.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
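For contrast with the streaming estimator described above, the sketch below runs classical batch KDE with SciPy. Plain KDE must keep and revisit every sample, which is exactly what becomes impractical for massive online streams; the data here are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Classical batch KDE baseline (not KDESOINN): the estimator stores all samples,
# so memory and evaluation cost grow with the stream. Synthetic bimodal data.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 0.5, 2000), rng.normal(1, 1.0, 3000)])

kde = gaussian_kde(data)            # bandwidth chosen by Scott's rule
xs = np.linspace(-4, 4, 9)
print(np.round(kde(xs), 3))         # estimated density on a coarse grid
```

Prototype-based approaches such as SOINN sidestep this by summarizing the stream with a bounded set of nodes, which is the design choice KDESOINN builds on.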
Smol, Marzena; Kulczycka, Joanna; Kowalski, Zygmunt
2016-12-15
The aim of this research is to present the possibility of using the sewage sludge ash (SSA) generated in incineration plants as a secondary source of phosphorus (P). The importance of issues related to P recovery from waste materials stems from European Union (EU) legislation, which designated phosphorus as a critical raw material (CRM). Due to the risks of a shortage of supply and its impact on the economy, which is greater than that of other raw materials, proper management of phosphorus resources is required in order to achieve global P security. Based on available databases and literature, an analysis of the potential use of SSA for P recovery in Poland was conducted. Currently, approx. 43,000 Mg/year of SSA is produced in large and small incineration plants, and according to the Polish National Waste Management Plan 2014 (NWMP) further steady growth is predicted. This indicates a great potential to recycle phosphorus from SSA and to reintroduce it into the value chain as a component of fertilisers that can be applied directly to fields. The amount of SSA generated varies between installations, both large and small, which means that new and different P-recovery technology solutions must be developed and put into use in the years to come (e.g. mobile/stationary P-recovery installations). The creation of a database focused on the collection and sharing of data about the amount of P recovered in EU and Polish installations is identified as a helpful tool in the development of an efficient P management model for Poland. Copyright © 2016 Elsevier Ltd. All rights reserved.
Temporal Cyber Attack Detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, Joey Burton; Draelos, Timothy J.; Galiardi, Meghan
Rigorous characterization of the performance and generalization ability of cyber defense systems is extremely difficult, making it hard to gauge uncertainty and, thus, confidence. This difficulty largely stems from a lack of labeled attack data that fully explores the potential adversarial space. Currently, the performance of cyber defense systems is typically evaluated in a qualitative manner by manually inspecting the results of the system on live data and adjusting as needed. Additionally, machine learning has shown promise in deriving models that automatically learn indicators of compromise that are more robust than analyst-derived detectors. However, to generate these models, most algorithms require large amounts of labeled data (i.e., examples of attacks). Algorithms that do not require annotated data to derive models are similarly at a disadvantage, because labeled data is still necessary when evaluating performance. In this work, we explore the use of temporal generative models to learn cyber attack graph representations and automatically generate data for experimentation and evaluation. Training and evaluating cyber systems and machine learning models requires significant annotated data, which is typically collected and labeled by hand for one-off experiments. Automatically generating such data helps derive and evaluate detection models and ensures reproducibility of results. Experimentally, we demonstrate the efficacy of generative sequence analysis techniques on learning the structure of attack graphs, based on a realistic example. These derived models can then be used to generate more data. Additionally, we provide a roadmap for future research efforts in this area.
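A minimal temporal generative model for label sequences is a first-order Markov chain: fit transition probabilities from example traces, then sample synthetic sequences. The sketch below shows that idea with invented attack-stage names and training traces; the report's actual models are more sophisticated, so treat this purely as an illustration of the generate-then-evaluate loop.

```python
import numpy as np

# Toy temporal generative model: fit a first-order Markov chain over invented
# attack-stage labels and sample a synthetic sequence. Not the report's model.
states = ["recon", "exploit", "escalate", "exfiltrate"]
traces = [["recon", "exploit", "escalate", "exfiltrate"],
          ["recon", "recon", "exploit", "exfiltrate"],
          ["recon", "exploit", "exploit", "escalate", "exfiltrate"]]

idx = {s: i for i, s in enumerate(states)}
counts = np.ones((4, 4))                       # Laplace smoothing
for t in traces:
    for a, b in zip(t, t[1:]):
        counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True) # row-stochastic transition matrix

rng = np.random.default_rng(7)
seq, s = ["recon"], 0
while states[s] != "exfiltrate" and len(seq) < 10:
    s = rng.choice(4, p=P[s])
    seq.append(states[s])
print(" -> ".join(seq))                        # one synthetic attack sequence
```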
An economic and ecological perspective of ethanol production from renewable agro waste: a review
2012-01-01
Agro-industrial wastes are generated during the industrial processing of agricultural products. These wastes are generated in large amounts throughout the year and are the most abundant renewable resources on earth. Due to their large availability and a composition rich in compounds that could be used in other processes, there is great interest in the reuse of these wastes, from both economic and environmental viewpoints. The economic aspect is based on the fact that such wastes may be used as low-cost raw materials for the production of other value-added compounds, with the expectation of reducing production costs. The environmental concern arises because most agro-industrial wastes contain phenolic compounds and/or other compounds of toxic potential, which may cause deterioration of the environment when the waste is discharged into nature. Although the production of bioethanol offers many benefits, more research is needed in aspects such as feedstock preparation and fermentation technology modification to make bioethanol more economically viable. PMID:23217124
Zhang, Jun-Jun; Wang, Hong-Hui; Zhao, Tian-Jian; Zhang, Ke-Xin; Wei, Xiao; Jiang, Zhi-Dong; Hirano, Shin-Ichi; Li, Xin-Hao; Chen, Jie-Sheng
2017-07-21
Oxygen vacancies can help to capture oxygen-containing species and act as active centers for the oxygen evolution reaction (OER). Unfortunately, effective methods for generating a large number of oxygen vacancies on the surface of various nanocatalysts are rather limited. Here, we describe an effective way to generate an oxygen-vacancy-rich surface on transition metal oxides, exemplified with Co3O4, simply by constructing a highly coupled interface of ultrafine Co3O4 nanocrystals and metallic Ti. Impressively, the amount of oxygen vacancies on the surface of Co3O4/Ti surpassed the values reported for Co3O4 modified even under highly critical conditions. The Co3O4/Ti electrode could provide a current density of 23 mA cm⁻² at an OER overpotential of 570 mV, a low Tafel slope, and excellent durability in neutral medium. Owing to the formation of a large number of oxygen vacancies as the active centers for OER on the surface, the TOF value of the Co3O4@Ti electrode was optimized to 3238 h⁻¹ at an OER overpotential of 570 mV, which is 380 times that of the state-of-the-art non-noble nanocatalysts in the literature. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yan, Xiaojing; Sun, Liangliang; Zhu, Guijie; Cox, Olivia F.; Dovichi, Norman J.
2016-01-01
A tryptic digest generated from Xenopus laevis fertilized embryos was fractionated by reversed phase liquid chromatography. One set of 30 fractions was analyzed by 100-min CZE-ESI-MS/MS separations (50 hr total instrument time), and a second set of 15 fractions was analyzed by 3-hr UPLC-ESI-MS/MS separations (45 hr total instrument time). CZE-MS/MS produced 70% as many protein IDs (4,134 vs. 5,787) and 60% as many peptide IDs (22,535 vs. 36,848) as UPLC-MS/MS with similar instrument time (50 h vs. 45 h) but with 50 times smaller total consumed sample amount (1.5 μg vs. 75 μg). Surprisingly, CZE generated peaks that were 25% more intense than UPLC for peptides that were identified by both techniques, despite the 50-fold lower loading amount; this high sensitivity reflects the efficient ionization produced by the electrokinetically-pumped nanospray interface used in CZE. This report is the first comparison of CZE-MS/MS and UPLC-MS/MS for large-scale eukaryotic proteomic analysis. The numbers of protein and peptide identifications produced by CZE-ESI-MS/MS approach those produced by UPLC-MS/MS, but with nearly two orders of magnitude lower sample amounts. PMID:27723263
Western Wind and Solar Integration Study Phase 3A: Low Levels of Synchronous Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Nicholas W.; Leonardi, Bruno; D'Aquila, Robert
The stability of the North American electric power grids under conditions of high penetrations of wind and solar is a significant concern and a possible impediment to reaching renewable energy goals. The 33% wind and solar annual energy penetration considered in this study results in substantial changes to the characteristics of the bulk power system. This includes different power flow patterns, different commitment and dispatch of existing synchronous generation, and different dynamic behavior from wind and solar generation. The Western Wind and Solar Integration Study (WWSIS), sponsored by the U.S. Department of Energy, is one of the largest regional solar and wind integration studies to date. In multiple phases, it has explored different aspects of the question: Can we integrate large amounts of wind and solar energy into the electric power system of the West? The work reported here focused on the impact of low levels of synchronous generation on transient stability performance in one part of the region, in which wind generation has displaced synchronous thermal generation under highly stressed, weak system conditions. It is essentially an extension of WWSIS-3. Transient stability, the ability of the power system to maintain synchronism among all elements following disturbances, is a major constraint on operations in many grids, including the western U.S. and Texas systems. These constraints primarily concern the performance of the large-scale bulk power system, but grid-wide stability concerns with high penetrations of wind and solar are still not thoroughly understood. This work focuses on 'traditional' fundamental frequency stability issues, such as maintaining synchronism, frequency, and voltage. The objectives of this study are to better understand the implications of low levels of synchronous generation and a weak grid on overall system performance by: 1) investigating the Western Interconnection under conditions of both high renewable generation (e.g., wind and solar) and low synchronous generation (e.g., significant coal power plant decommitment or retirement); and 2) analyzing both the large-scale stability of the Western Interconnection and regional stability issues driven by more geographically dispersed renewable generation interacting with a transmission grid that evolved with large, central station plants at key nodes. As noted above, the work reported here is an extension of the research performed in WWSIS-3.
NASA Astrophysics Data System (ADS)
Chaban, Vitaly V.; Andreeva, Nadezhda A.
2017-12-01
Energy generation and storage are at the center of modern civilization. Energetic materials constitute quite a large class of compounds with a high amount of stored chemical energy that can be released. We hereby use a combination of quantum chemistry methods to investigate the feasibility and properties of carbon-nitrogen cubanes and multi-charged polynitrogen cations in the context of their synthesis and application as unprecedented energetic materials. We show that the stored energy increases gradually with increasing nitrogen content. Nitrogen-poor cubanes retain their stability in vacuum, even at elevated temperatures. Such molecules will probably be synthesized at some point. In turn, polynitrogen cations are highly unstable, except N8H+, even though they are isoelectronic with all-carbon cubane. Kinetic stability of the cation decays drastically as its total charge increases. High-level thermodynamic calculations revealed that large amounts of energy are liberated upon decomposition of polynitrogen cations, which produces molecular nitrogen, acetylene, and protons. The present results bring substantial insights to the design of novel high-energy compounds.
Bidirectional Response of Runoff to Changes in Snowmelt Rate, Timing, and Amount
NASA Astrophysics Data System (ADS)
Barnhart, T. B.; Molotch, N. P.; Tague, C.
2016-12-01
The mountain snowpack is important for runoff generation across the western United States and for one sixth of Earth's population. Climate-change-induced near-surface warming alters the amount of precipitation that falls as snow, causing changes in the amount, rate, and timing of snowmelt. Recent work links snowmelt rate to streamflow production across the western United States. Snowmelt rate has also been linked to snowpack magnitude and snowmelt timing. This work seeks to disentangle the relationships between snowmelt rate, timing, and amount to reveal the dominant streamflow generating factor and the physical mechanism through which snowmelt becomes runoff. We use co-located observations of evapotranspiration and snowmelt from Niwot Ridge, CO (3023 m), the Valles Caldera, NM (3030 m), and Providence Creek, CA (2015 m) as well as the Regional Hydro-Ecologic Simulation System (RHESSys) to assess the linkage between snowmelt rate, amount, timing, and runoff. We conducted 100,000 RHESSys simulations at each site, varying the timing, amount, and rate of snowmelt based on the observational record. Analyses of observational data show that years with large peak SWE partition more snowmelt to runoff than to evapotranspiration (r2=0.82, p=0.005). For example, water year 2011, with a peak SWE of 0.43 m and a snowmelt rate of 0.62 cm d-1, partitioned 34% of snowmelt to ET. Conversely, water year 2006, with a peak SWE of 0.32 m and a snowmelt rate of 0.1 cm d-1, partitioned 54% of snowmelt to ET. Our simulation results show a bidirectional response between snowmelt rate and timing and runoff efficiency, where early, slow snowmelt results in low runoff efficiency while early, rapid snowmelt results in high runoff efficiency because of a mismatch in water availability and demand. Simulation results show a strong relationship between runoff efficiency and snowmelt, suggesting that rapid snowmelt is better able to bring the root zone to field capacity and move water to the shallow groundwater system. Indeed, there is strong correspondence between runoff efficiency and root zone drainage, showing that rapid snowmelt is better able to generate runoff than slow snowmelt by inducing recharge below the root zone. Furthermore, as climate warming decreases the mountain snowpack and causes earlier snowmelt, runoff is likely to decrease.
Veldhuizen, R A; Inchley, K; Hearn, S A; Lewis, J F; Possmayer, F
1993-01-01
Pulmonary surfactant obtained from lung lavages can be separated by differential centrifugation into two distinct subfractions known as large surfactant aggregates and small surfactant aggregates. The large-aggregate fraction is the precursor of the small-aggregate fraction. The ratio of the small non-surface-active to large surface-active surfactant aggregates increases after birth and in several types of lung injury. We have utilized an in vitro system, surface area cycling, to study the conversion of large into small aggregates. Small aggregates generated by surface area cycling were separated from large aggregates by centrifugation at 40,000 g for 15 min rather than by the normal sucrose gradient centrifugation. This new separation method was validated by morphological studies. Surface-tension-reducing activity of total surfactant extracts, as measured with a pulsating-bubble surfactometer, was impaired after surface area cycling. This impairment was related to the generation of small aggregates. Immunoblot analysis of large and small aggregates separated by sucrose gradient centrifugation revealed the presence of detectable amounts of surfactant-associated protein B (SP-B) in large aggregates but not in small aggregates. SP-A was detectable in both large and small aggregates. PAGE of cycled and non-cycled surfactant showed a reduction in SP-B after surface area cycling. We conclude that SP-B is degraded during the formation of small aggregates in vitro and that a change in surface area appears to be necessary for exposing SP-B to protease activity. PMID:8216208
NASA Astrophysics Data System (ADS)
Sommer, Philipp; Kaplan, Jed
2016-04-01
Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution is becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10,000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
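The hybrid gamma-generalized Pareto step described above lends itself to a short sketch. The following is a minimal illustration, under assumed (not fitted) parameter values, of drawing daily precipitation amounts from a gamma body whose heavy tail is handled by a generalized Pareto distribution above a threshold; none of the numbers come from the paper.

```python
# Minimal sketch of a hybrid gamma / generalized-Pareto daily precipitation
# sampler, in the spirit of a WGEN-type weather generator.
# All parameter values below are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

GAMMA_SHAPE, GAMMA_SCALE = 0.8, 6.0  # gamma body of wet-day amounts (mm/day)
GPD_SHAPE, GPD_SCALE = 0.15, 12.0    # generalized Pareto tail parameters
THRESHOLD = 25.0                     # mm/day; tail takes over above this

def sample_wet_day_amounts(n):
    """Draw n wet-day precipitation amounts from the hybrid distribution."""
    amounts = stats.gamma.rvs(GAMMA_SHAPE, scale=GAMMA_SCALE,
                              size=n, random_state=rng)
    # Where the gamma draw exceeds the threshold, model the excess with a
    # generalized Pareto distribution so extremes are not underestimated.
    tail = amounts > THRESHOLD
    amounts[tail] = THRESHOLD + stats.genpareto.rvs(
        GPD_SHAPE, scale=GPD_SCALE, size=int(tail.sum()), random_state=rng)
    return amounts

print(sample_wet_day_amounts(10).round(1))
```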
Costa, Rafael M; Filgueira, Fernando P; Tostes, Rita C; Carvalho, Maria Helena C; Akamine, Eliana H; Lobato, Nubia S
2016-09-01
The perivascular adipose tissue (PVAT) releases a variety of factors that affect vascular function. PVAT in the thoracic aorta shares characteristics with the brown adipose tissue, including a large amount of mitochondria. PVAT-derived factors influence both endothelial and smooth muscle function via several signaling mechanisms including the release/generation of reactive nitrogen and oxygen species. Considering the importance of reactive oxygen species (ROS) on vascular function and that mitochondria are an important source of ROS, we hypothesized that mitochondria-derived ROS in the PVAT modulates vascular reactivity. Vascular reactivity to norepinephrine (NE) was evaluated in thoracic aortic rings, with or without endothelium and/or PVAT, from male Wistar rats. Mitochondrial uncoupling, as well as hydrogen peroxide (H2O2) removal, increased the contraction in vessels surrounded by PVAT. PVAT stimulated with NE exhibited increased protein expression, determined by Western blot analysis, of manganese superoxide dismutase (Mn-SOD) and decreased protein expression of catalase. Ultimately, NE increased superoxide anion (O2(-)) generation in PVAT via increases in intracellular calcium. These results clearly demonstrate that the mitochondrial electron transport chain (mETC) in PVAT contributes to modulation of aortic muscle contraction by generating higher amounts of O2(-) that is, in turn, dismutated to hydrogen peroxide, which then acts as a pivotal signaling molecule regulating vascular smooth muscle contraction. Copyright © 2015 Elsevier Inc. All rights reserved.
Constructing DNA Barcode Sets Based on Particle Swarm Optimization.
Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi
2018-01-01
Following the completion of the Human Genome Project, a large amount of high-throughput bio-data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes, attached at the beginning or end of sequencing reads, are used to identify which sample each sequence originates from. Constructing DNA barcode sets provides the candidate DNA barcodes for this application. To increase the accuracy of DNA barcode sets, a particle swarm optimization (PSO) algorithm has been modified and used to construct the DNA barcode sets in this paper. Compared with extant results, some lower bounds of DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.
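To make the task concrete, here is a toy version of the underlying combinatorial problem: growing a set of fixed-length barcodes whose pairwise Hamming distance stays above a floor. Plain random search stands in for the paper's modified PSO, and all parameters are made up for illustration.

```python
# Toy construction of a DNA barcode set under a minimum pairwise Hamming
# distance constraint. Random search is used here as a simple stand-in for
# the modified particle swarm optimization of the paper.
import random

random.seed(42)
BASES = "ACGT"
LENGTH, MIN_DIST, TARGET_SIZE = 8, 4, 20  # hypothetical parameters

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_barcode_set(tries=100_000):
    barcodes = []
    for _ in range(tries):
        candidate = "".join(random.choice(BASES) for _ in range(LENGTH))
        # Accept the candidate only if it is far enough from all members.
        if all(hamming(candidate, b) >= MIN_DIST for b in barcodes):
            barcodes.append(candidate)
            if len(barcodes) == TARGET_SIZE:
                break
    return barcodes

print(build_barcode_set())
```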
Single-shot stand-off chemical identification of powders using random Raman lasing
Hokr, Brett H.; Bixler, Joel N.; Noojin, Gary D.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.; Scully, Marlan O.
2014-01-01
We consider the task of identifying explosives, hazardous chemicals, and biological materials from a safe distance. Much of the prior work on stand-off spectroscopy using light has been devoted to generating a backward-propagating beam of light that can be used to drive further spectroscopic processes. The discovery of random lasing and, more recently, random Raman lasing provides a mechanism for remotely generating copious amounts of chemically specific Raman scattered light. The bright nature of random Raman lasing renders directionality unnecessary, allowing for the detection and identification of chemicals from large distances in real time. In this article, the single-shot remote identification of chemicals at kilometer-scale distances is experimentally demonstrated using random Raman lasing. PMID:25114231
Knowledge network model of the energy consumption in discrete manufacturing system
NASA Astrophysics Data System (ADS)
Xu, Binzi; Wang, Yan; Ji, Zhicheng
2017-07-01
Discrete manufacturing systems generate large amounts of data and information owing to the development of information technology, so a management mechanism is urgently required. In order to incorporate knowledge generated from manufacturing data and production experience, a knowledge network model of energy consumption in discrete manufacturing systems was put forward, based on knowledge network theory and multi-granularity modular ontology technology. This model provides a standard representation for concepts, terms, and their relationships that can be understood by both humans and computers. In addition, a formal description of the energy consumption knowledge elements (ECKEs) in the knowledge network was given. Finally, an application example was used to verify the feasibility of the proposed method.
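As an illustration of the kind of structure such a model standardizes, the sketch below encodes a few energy-consumption knowledge elements and typed relations as a small directed graph; all node and relation names are hypothetical, not taken from the paper.

```python
# Minimal sketch of representing energy-consumption knowledge elements
# (ECKEs) and their relationships as a typed directed graph, in the spirit
# of the knowledge network described above. Names are illustrative only.
import networkx as nx

kn = nx.DiGraph()
kn.add_node("MillingMachine", granularity="equipment")
kn.add_node("SpindlePower", granularity="parameter")
kn.add_node("EnergyPerPart", granularity="indicator")

# Typed relations between knowledge elements
kn.add_edge("MillingMachine", "SpindlePower", relation="has_parameter")
kn.add_edge("SpindlePower", "EnergyPerPart", relation="influences")

# Query: which elements influence the energy indicator?
influencers = [u for u, v, d in kn.edges(data=True)
               if v == "EnergyPerPart" and d["relation"] == "influences"]
print(influencers)
```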
Demand Response and Energy Storage Integration Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Ookie; Cheung, Kerry; Olsen, Daniel J.
2016-03-01
Demand response and energy storage resources present potentially important sources of bulk power system services that can aid in integrating variable renewable generation. While renewable integration studies have evaluated many of the challenges associated with deploying large amounts of variable wind and solar generation technologies, integration analyses have not yet fully incorporated demand response and energy storage resources. This report represents an initial effort in analyzing the potential integration value of demand response and energy storage, focusing on the western United States. It evaluates two major aspects of increased deployment of demand response and energy storage: (1) Their operational value in providing bulk power system services and (2) Market and regulatory issues, including potential barriers to deployment.
A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2013-05-01
With the evolution of digital data storage and exchange, it is essential to protect the confidential information from every unauthorized access. High performance encryption algorithms were developed and implemented by software and hardware. Also many methods to attack the cipher text were developed. In the last years, the genetic algorithm has gained much interest in cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility to use the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and also to use a three stages pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide a fast encryption and storage/transmission of a large amount of data.
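As a rough illustration of the key-generator idea, the sketch below runs a generic genetic-algorithm loop (selection, crossover, mutation) over 128-bit key candidates. The fitness function is a placeholder that merely rewards balanced bit counts; the paper's actual fitness criterion and its three-stage FPGA pipeline are not reproduced here.

```python
# Generic genetic-algorithm loop over 128-bit key candidates, sketching the
# idea of a GA-driven key sequence generator. The fitness function is a
# hypothetical placeholder, not the criterion used in the paper.
import random

random.seed(1)
KEY_BITS, POP, GENERATIONS, MUT_RATE = 128, 32, 50, 0.01

def fitness(key):
    ones = bin(key).count("1")
    return -abs(ones - KEY_BITS // 2)  # placeholder: prefer balanced keys

def mutate(key):
    for bit in range(KEY_BITS):
        if random.random() < MUT_RATE:
            key ^= 1 << bit  # flip a random bit
    return key

def crossover(a, b):
    point = random.randrange(1, KEY_BITS)
    mask = (1 << point) - 1
    return (a & mask) | (b & ~mask)  # single-point crossover

population = [random.getrandbits(KEY_BITS) for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(hex(max(population, key=fitness)))
```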
Estimate of the Potential Amount of Low-Level Waste from the Fukushima Prefecture - 12370
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Carolyn; Olson, Eric A.J.; Elmer, John
2012-07-01
The amount of waste generated by the cleanup of the Fukushima Prefecture (Fukushima-ken) following the releases from the Fukushima Daiichi nuclear power plant accident (March 2011) is dependent on many factors, including: - Contamination amounts; - Cleanup levels determined for the radioisotopes contaminating the area; - Future land use expectations and human exposure scenarios; - Groundwater contamination considerations; - Costs and availability of storage areas, and eventually disposal areas for the waste; and - Decontamination and volume reduction techniques and technologies used. For the purposes of estimating these waste volumes, Fukushima-ken is segregated into zones of similar contamination level and expected future use. Techniques for selecting the appropriate cleanup methods for each area are shown in a decision tree format. This approach is broadly applied to the 20 km evacuation zone and the total amounts and types of waste are estimated; waste resulting from cleanup efforts outside of the evacuation zone is not considered. Some of the limits of future use and potential zones where residents must be excluded within the prefecture are also described. The size and design of the proposed intermediate storage facility is also discussed and the current situation, cleanup, waste handling, and waste storage issues in Japan are described. The method for estimating waste amounts outlined above illustrates the large amount of waste that could potentially be generated by remediation of the 20 km evacuation zone (619 km² total) if the currently proposed cleanup goals are uniformly applied. The Japanese environment ministry estimated in early October that the 1 mSv/year exposure goal would make the government responsible for decontaminating about 8,000 km² within Fukushima-ken and roughly 4,900 km² in areas outside the prefecture. The described waste volume estimation method also does not give any consideration to areas with localized hot spots. Land use and area dose rate estimates for the 20 km evacuation zone indicate there are large areas where doses to the public can be mitigated through methods other than removal and disposal of soil and other wastes. Several additional options for waste reduction can also be considered, including: - Recycling/reusing or disposing of as municipal waste material that can be unconditionally cleared; - Establishing additional precautionary (e.g., liners) and monitoring requirements for municipal landfills to dispose of some conditionally-cleared material; and - Using slightly-contaminated material in construction of reclamations, banks and roads. Waste estimates for cleanup will continue to evolve as decontamination plans are drafted and finalized. (authors)
Systematic Approach to Better Understanding Integration Costs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Gregory B.
2015-09-01
This research presents a systematic approach to evaluating the costs of integrating new generation and operational procedures into an existing power system, and the methodology is independent of the type of change or nature of the generation. The work was commissioned by the U.S. Department of Energy and performed by the National Renewable Energy Laboratory to investigate three integration cost-related questions: (1) How does the addition of new generation affect a system's operational costs, (2) How do generation mix and operating parameters and procedures affect costs, and (3) How does the amount of variable generation (non-dispatchable wind and solar) impact the accuracy of natural gas orders? A detailed operational analysis was performed for seven sets of experiments: variable generation, large conventional generation, generation mix, gas prices, fast-start generation, self-scheduling, and gas supply constraints. For each experiment, four components of integration costs were examined: cycling costs, non-cycling VO&M costs, fuel costs, and reserves provisioning costs. The investigation was conducted with PLEXOS production cost modeling software utilizing an updated version of the Institute of Electrical and Electronics Engineers 118-bus test system overlaid with projected operating loads from the Western Electricity Coordinating Council for the Sacramento Municipal Utility District, Puget Sound Energy, and Public Service Colorado in the year 2020. The test system was selected in consultation with an industry-based technical review committee to be a reasonable approximation of an interconnection yet small enough to allow the research team to investigate a large number of scenarios and sensitivity combinations. The research should prove useful to market designers, regulators, utilities, and others who want to better understand how system changes can affect production costs.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru
2018-05-01
We demonstrate tidal detonation during a tidal disruption event (TDE) of a helium (He) white dwarf (WD) of 0.45 M⊙ by an intermediate-mass black hole using extremely high-resolution simulations. Tanikawa et al. have shown that the tidal detonations reported in previous studies resulted from unphysical heating due to low-resolution simulations, and that such unphysical heating occurs in three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations even with 10 million SPH particles. In order to avoid such unphysical heating, we perform 3D SPH simulations with up to 300 million SPH particles, and 1D mesh simulations using the flow structure of the 3D SPH simulations as 1D initial conditions. The 1D mesh simulations have higher resolution than the 3D SPH simulations. We show that tidal detonation occurs, and confirm that this result is fully converged with respect to spatial resolution in both the 3D SPH and 1D mesh simulations. We find that detonation waves arise independently in leading parts of the WD and yield large amounts of 56Ni. Although detonation waves are not generated in trailing parts of the WD, the trailing parts would receive detonation waves generated in the leading parts and would yield large amounts of Si-group elements. Eventually, this He WD TDE would synthesize 0.30 M⊙ of 56Ni and 0.08 M⊙ of Si-group elements, and could be observed as a luminous thermonuclear transient comparable to SNe Ia.
Rapid protein production from stable CHO cell pools using plasmid vector and the cumate gene-switch.
Poulain, Adeline; Perret, Sylvie; Malenfant, Félix; Mullick, Alaka; Massie, Bernard; Durocher, Yves
2017-08-10
To rapidly produce large amounts of recombinant proteins, the generation of stable Chinese Hamster Ovary (CHO) cell pools represents a useful alternative to large-scale transient gene expression (TGE). We have developed a cell line (CHO BRI/rcTA) allowing the inducible expression of recombinant proteins, based on the cumate gene switch. After the identification of optimal plasmid DNA topology (supercoiled vs linearized plasmid) for PEIpro™ mediated transfection and of optimal conditions for methionine sulfoximine (MSX) selection, we were able to generate CHO BRI/rcTA pools producing high levels of recombinant proteins. Volumetric productivities of up to 900 mg/L were reproducibly achieved for a Fc fusion protein and up to 350 mg/L for an antibody after 14 days post-induction in non-optimized fed-batch cultures. In addition, we show that CHO pool volumetric productivities are not affected by a freeze-thaw cycle or following maintenance in culture for over one month in the presence of MSX. Finally, we demonstrate that volumetric protein production with the CR5 cumate-inducible promoter is three- to four-fold higher than with the human CMV or hybrid EF1α-HTLV constitutive promoters. These results suggest that the cumate-inducible CHO BRI/rcTA stable pool platform is a powerful and robust system for the rapid production of gram amounts of recombinant proteins. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Future Market Share of Space Solar Electric Power Under Open Competition
NASA Astrophysics Data System (ADS)
Smith, S. J.; Mahasenan, N.; Clarke, J. F.; Edmonds, J. A.
2002-01-01
This paper assesses the value of Space Solar Power deployed under market competition with a full suite of alternative energy technologies over the 21st century. Our approach is to analyze the future energy system under a number of different scenarios that span a wide range of possible future demographic, socio-economic, and technological developments. Scenarios both with, and without, carbon dioxide concentration stabilization policies are considered. We use the comprehensive set of scenarios created for the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (Nakicenovic and Swart 2000). The focus of our analysis will be the cost of electric generation. Cost is particularly important when considering electric generation since the type of generation is, from a practical point of view, largely irrelevant to the end-user. This means that different electricity generation technologies must compete on the basis of price. It is important to note, however, that even a technology that is more expensive than average can contribute to the overall generation mix due to geographical and economic heterogeneity (Clarke and Edmonds 1993). This type of competition is a central assumption of the modeling approach used here. Our analysis suggests that, under conditions of full competition of all available technologies, Space Solar Power at 7 cents per kW-hr could comprise 5-10% of global electric generation by the end of the century, with a global total generation of 10,000 TW-hr. The generation share of Space Solar Power is limited due to competition with lower-cost nuclear, biomass, and terrestrial solar PV and wind. The imposition of a carbon constraint does not significantly increase the total amount of power generated by Space Solar Power in cases where a full range of advanced electric generation technologies are also available. Potential constraints on the availability of these other electric generation options can increase the amount of electricity generated by Space Solar Power. In agreement with previous work on this subject, we note that launch costs are a significant impediment for the widespread implementation of Space Solar Power. KEY WORDS: space satellite power, advanced electric generation, electricity price, climate change
Long-term variability of wind patterns at hub-height over Texas
NASA Astrophysics Data System (ADS)
Jung, J.; Jeon, W.; Choi, Y.; Souri, A.
2017-12-01
Wind energy is getting more attention because of its environmentally friendly attributes. Texas is a state with a significant capacity and number of wind turbines. Wind power generation is significantly affected by wind patterns, and it is important to understand their seasonal and decadal variability for long-term power generation from wind turbines. This study focused on trends in wind patterns and their strength at two hub-heights (80 m and 110 m) over 30 years (1986 to 2015). We analyzed only summer data (June to September) because of concentrated electricity usage in Texas. We extracted hub-height wind data (U and V components) from the three-hourly National Centers for Environmental Prediction-North American Regional Reanalysis (NCEP-NARR) and classified wind patterns using the nonhierarchical K-means method. Hub-height wind patterns in the summer seasons of 1986 to 2015 were classified into six classes during the day and seven classes at night. Mean wind speed was 4.6 ms-1 during the day and 5.4 ms-1 at night, but showed large variability in time and space. We combined each cluster's frequencies and wind speed tendencies with large-scale atmospheric circulation features and quantified the amount of wind power generation.
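A rough sketch of the classification step: cluster U/V wind samples with (nonhierarchical) k-means and report each class's frequency and mean speed. Synthetic data stand in for the NCEP-NARR fields, and every number below is made up.

```python
# Sketch of k-means classification of hub-height wind components.
# Synthetic U/V samples replace the real three-hourly NCEP-NARR data;
# six clusters follow the daytime class count reported in the abstract.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical data: 1000 three-hourly samples of U and V at 80 m (m/s)
uv = rng.normal(loc=[2.0, -3.5], scale=2.5, size=(1000, 2))

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(uv)
speeds = np.hypot(uv[:, 0], uv[:, 1])
for k in range(6):
    members = kmeans.labels_ == k
    print(f"class {k}: freq={members.mean():.2f}, "
          f"mean speed={speeds[members].mean():.1f} m/s")
```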
Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.
Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B
2013-01-01
A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.
Glaser, Bruno
2007-02-28
Terra Preta soils of central Amazonia exhibit approximately three times more soil organic matter, nitrogen and phosphorus and 70 times more charcoal compared to adjacent infertile soils. The Terra Preta soils were generated by pre-Columbian native populations by chance or intentionally adding large amounts of charred residues (charcoal), organic wastes, excrements and bones. In this paper, it is argued that generating new Terra Preta sites ('Terra Preta nova') could be the basis for sustainable agriculture in the twenty-first century to produce food for billions of people, and could lead to attaining three Millennium Development Goals: (i) to combat desertification, (ii) to sequester atmospheric CO2 in the long term, and (iii) to maintain biodiversity hotspots such as tropical rainforests. Therefore, large-scale generation and utilization of Terra Preta soils would decrease the pressure on primary forests that are being extensively cleared for agricultural use with only limited fertility and sustainability and, hence, only providing a limited time for cropping. This would maintain biodiversity while mitigating both land degradation and climate change. However, it should not be overlooked that the infertility of most tropical soils (and associated low population density) is what could have prevented tropical forests undergoing large-scale clearance for agriculture. Increased fertility may increase the populations supported by shifting cultivation, thereby maintaining and increasing pressure on forests.
A distributed pipeline for DIDSON data processing
Li, Liling; Danner, Tyler; Eickholt, Jesse; McCann, Erin L.; Pangle, Kevin; Johnson, Nicholas
2018-01-01
Technological advances in the field of ecology allow data on ecological systems to be collected at high resolution, both temporally and spatially. Devices such as Dual-frequency Identification Sonar (DIDSON) can be deployed in aquatic environments for extended periods and easily generate several terabytes of underwater surveillance data which may need to be processed multiple times. Due to the large amount of data generated and need for flexibility in processing, a distributed pipeline was constructed for DIDSON data making use of the Hadoop ecosystem. The pipeline is capable of ingesting raw DIDSON data, transforming the acoustic data to images, filtering the images, detecting and extracting motion, and generating feature data for machine learning and classification. All of the tasks in the pipeline can be run in parallel and the framework allows for custom processing. Applications of the pipeline include monitoring migration times, determining the presence of a particular species, estimating population size and other fishery management tasks.
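Conceptually, each stage of the pipeline is an independent map step over frames, which is what makes a Hadoop deployment natural. The sketch below mimics two of those stages (acoustic frame to image, then frame-differencing motion features) with Python's multiprocessing as a stand-in for the Hadoop ecosystem; the frame data and transforms are simplified placeholders, not the authors' code.

```python
# Simplified two-stage pipeline sketch: frame -> image, image pair -> motion
# features, with each stage mapped in parallel over the data.
import numpy as np
from multiprocessing import Pool

def frame_to_image(frame):
    # Placeholder transform: normalize raw acoustic intensities to 0..255.
    lo, hi = frame.min(), frame.max()
    return ((frame - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)

def extract_motion_features(pair):
    prev_img, img = pair
    diff = np.abs(img.astype(int) - prev_img.astype(int))
    return {"mean_motion": diff.mean(), "moving_pixels": int((diff > 30).sum())}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((48, 96)) for _ in range(8)]  # fake DIDSON frames
    with Pool() as pool:
        images = pool.map(frame_to_image, frames)
        features = pool.map(extract_motion_features,
                            list(zip(images, images[1:])))
    print(features[0])
```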
Hydropower Modeling Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Brady; Andrade, Juan; Cohen, Stuart
Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involving incorporating these constraints when modeling hydropower operations in terms of production cost and capacity expansion. Many of these challenges involve a lack of data to adequately represent the constraints or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.
Tubular-Type Hydroturbine Performance for Variable Guide Vane Opening by CFD
NASA Astrophysics Data System (ADS)
Kim, Y. T.; Nam, S. H.; Cho, Y. J.; Hwang, Y. C.; Choi, Y. D.; Nam, C. D.; Lee, Y. H.
Micro hydropower generation, with outputs of 100 kW or less, is attracting considerable attention because such systems are small and simple and draw on a large amount of renewable energy resources. In a small hydropower generator whose main concept is based on using differential water pressures in pipelines, energy that would otherwise be wasted by a pressure-reducing valve at the end of the pipeline is instead collected by a turbine in the generator. A propeller-shaped hydroturbine has been used in order to make use of this renewable pressure energy. In this study, in order to acquire basic design data for a tubular-type hydroturbine, the output power, head, and efficiency characteristics as functions of the guide vane opening angle are examined in detail. Moreover, the influences of pressure, tangential, and axial velocity distributions on turbine performance are investigated by using a commercial CFD code.
NASA Astrophysics Data System (ADS)
Yang, Z.; Li, X.; Li, J.; Long, J. D.; Lan, C. H.; Wang, T.; Dong, P.; He, J. L.
2017-03-01
A large number of back-streaming electrons drains current from the power supply, causes sparking or high-voltage breakdown, and degrades the neutron yield and waveform of a compact sealed-tube pulsed neutron generator. A novel approach is introduced that uses a ZnO varistor to provide a constant self-bias voltage to suppress the secondary electrons. The I-V curve of the ZnO varistor was measured in the experiment. The effects of suppressing the secondary electrons were investigated using a ZnO varistor, linear resistors, and an independent power supply, respectively. The results show that the secondary electrons are suppressed effectively by the compact ZnO varistor, without increasing the size or component count of the device. It is a promising design for compact sealed-tube neutron generators.
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since images offer comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the Internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.
A Protein Chimera Strategy Supports Production of a Model "Difficult-to-Express" Recombinant Target.
Hussain, Hirra; Fisher, David I; Roth, Robert G; Abbott, W Mark; Carballo-Amador, Manuel Alejandro; Warwicker, Jim; Dickson, Alan J
2018-06-22
Due in part to the needs of the biopharmaceutical industry, there has been an increased drive to generate high quality recombinant proteins in large amounts. However, achieving high yields can be a challenge as the novelty and increased complexity of new targets often makes them 'difficult-to-express'. This study aimed to define the molecular features that restrict the production of a model 'difficult-to-express' recombinant protein, Tissue Inhibitor Metalloproteinase-3 (TIMP-3). Building from experimental data, computational approaches were used to rationalise the re-design of this recombinant target to generate a chimera with enhanced secretion. The results highlight the importance of early identification of unfavourable sequence attributes, enabling the generation of engineered protein forms that bypass 'secretory' bottlenecks and result in efficient recombinant protein production. This article is protected by copyright. All rights reserved.
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
Calderas produced by hydromagmatic eruptions through permafrost in northwest Alaska
NASA Technical Reports Server (NTRS)
Beget, J. E.
1993-01-01
Most hydromagmatic eruptions on Earth are generated by interactions of lava and ground or surface water. This eruptive process typically produces craters 0.1-1 km in diameter, although a few as large as 1-2 km have been described. In contrast, a series of Pleistocene hydromagmatic eruptions through 80-100-m-thick permafrost on the Seward Peninsula of Alaska produced four craters 3-8 km in diameter. These craters, called the Espenberg maars, are the four largest maars known on Earth. The thermodynamic properties of ground ice influence the rate and amount of water melted during the course of the eruption. Large quantities of water are present, but only small amounts can be melted at any time to interact with magma. This would tend to produce sustained and highly explosive eruptions at low water/magma (fuel-coolant) ratios. An area of 400 km² around the Alaskan maars shows strong reductions in the density of thaw lakes, ground ice, and other surface manifestations of permafrost because of deep burial by coeval tephra falls. The unusually large Espenberg maars are the first examples of calderas produced by hydromagmatic eruptions. These distinctive landforms can apparently be used as an indicator of the presence of permafrost at the time of eruption.
Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts
NASA Astrophysics Data System (ADS)
Ravi, S.; Lobell, D. B.; Field, C. B.
2012-12-01
Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with a combination of high solar radiation and the availability of large areas unusable for crop production, are ideal locations for large-scale solar installations. For efficient power generation, solar infrastructures require large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use these moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements of large solar infrastructures in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems, the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted a detailed life cycle analysis of these coupled agave biofuel and solar energy systems to explore the tradeoffs and synergies, in the context of energy input-output, water use, and carbon emissions.
Genes2WordCloud: a quick way to identify biological themes from gene lists and free text.
Baroukh, Caroline; Jenkins, Sherry L; Dannenfelser, Ruth; Ma'ayan, Avi
2011-10-13
Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with the daunting amount of new research data commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications. Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice. Genes2WordCloud is a word-cloud generator and a word-cloud viewer that is based on WordCram implemented using Java, Processing, AJAX, mySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms with their computed weights based on word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site along with supporting documentation at http://www.maayanlab.net/G2W. Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data or to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications.
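The term-weighting step described here (most relevant terms, with weights computed from word frequencies) can be sketched in a few lines; the stop-word list and sample text below are illustrative only, not Genes2WordCloud's actual implementation.

```python
# Minimal sketch of frequency-based term weighting for a word-cloud.
# Stop words and input text are hypothetical placeholders.
from collections import Counter
import re

STOP_WORDS = {"the", "of", "and", "in", "to", "a", "is", "for", "are", "by"}

def word_weights(text, top_n=10):
    """Return the top_n terms with normalized frequency weights."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.most_common(top_n)}

sample = ("Word-clouds summarize text by maximizing the display of the "
          "most relevant terms about a specific topic in the minimum "
          "amount of space.")
print(word_weights(sample))
```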
NASA Astrophysics Data System (ADS)
Sanyal, Prasanta; Basu, Sayak
2016-04-01
Robust future climate prediction depends on understanding past and present hydrological systems, which can be traced through the oxygen isotopic composition (δ18O) of rain. Compared to Peninsular and Southern India, explanations for the variability in δ18O values of monsoonal rain are sparse for Eastern India. Analysis (and published records) of Indian summer monsoon (ISM) rain at the entry point of Bay of Bengal (BoB) vapor into the continent showed that the gradual depletion of 18O in ISM rain is determined by surface run-off and the location of cyclone generation in the BoB. The timing and density of cyclones control the maxima in amount and minima in δ18O values of ISM rain and are possibly also responsible for the long-term (last 10 years) decrease in rain δ18O values (and amount). The large spatial variation and temporal robustness of the weak and insignificant amount effect suggest that reconstructed past climate records along the track of BoB vapor should be reconsidered. The memory effect of atmospheric vapor is found to lower the amount effect.
NASA Astrophysics Data System (ADS)
Ozturk, D.; Chaudhary, A.; Votava, P.; Kotfila, C.
2016-12-01
Jointly developed by Kitware and NASA Ames, GeoNotebook is an open source tool designed to give the maximum amount of flexibility to analysts, while dramatically simplifying the process of exploring geospatially indexed datasets. Packages like Fiona (backed by GDAL), Shapely, Descartes, Geopandas, and PySAL provide a stack of technologies for reading, transforming, and analyzing geospatial data. Combined with the Jupyter notebook and libraries like matplotlib/Basemap, it is possible to generate detailed geospatial visualizations. Unfortunately, the visualizations generated are either static or do not perform well for very large datasets. Also, this setup requires a great deal of boilerplate code to create and maintain. Other extensions exist to remedy these problems, but they provide a separate map for each input cell and do not support map interactions that feed back into the Python environment. To support interactive data exploration and visualization on large datasets, we have developed an extension to the Jupyter notebook that provides a single dynamic map that can be managed from the Python environment and that can communicate with a server that performs operations such as data subsetting on a cloud-based cluster.
NASA Astrophysics Data System (ADS)
Polsterer, K. L.; Gieseke, F.; Igel, C.
2015-09-01
In the last decades, more and more all-sky surveys have created an enormous amount of data which is publicly available on the Internet. Crowd-sourcing projects such as Galaxy-Zoo and Radio-Galaxy-Zoo encouraged users from all over the world to manually conduct various classification tasks. The combination of the pattern-recognition capabilities of thousands of volunteers enabled scientists to finish the data analysis within acceptable time. For upcoming surveys with billions of sources, however, this approach is not feasible anymore. In this work, we present an unsupervised method that can automatically process large amounts of galaxy data and generates a set of prototypes. The resulting model can be used both to visualize the given galaxy data and to classify so-far unseen images.
Towards Personalized Medicine Mediated by in Vitro Virus-Based Interactome Approaches
Ohashi, Hiroyuki; Miyamoto-Sato, Etsuko
2014-01-01
We have developed a simple in vitro virus (IVV) selection system based on cell-free co-translation, using a highly stable and efficient mRNA display method. The IVV system is applicable to the high-throughput and comprehensive analysis of proteins and protein–ligand interactions. Huge amounts of genomic sequence data have been generated over the last decade. The accumulated genetic alterations and the interactome networks identified within cells represent a universal feature of a disease, and knowledge of these aspects can help to determine the optimal therapy for the disease. The concept of the “integrome” has been developed as a means of integrating large amounts of data. We have developed an interactome analysis method aimed at providing individually-targeted health care. We also consider future prospects for this system. PMID:24756093
Disposal of hypergolic propellants, phase 6 task 4. Disposal pond products
NASA Technical Reports Server (NTRS)
Cohenour, B. C.; Wiederhold, C. N.
1977-01-01
Waste monomethyl hydrazine scrubber liquor, consisting of aqueous solutions containing small amounts of CH4, Cl2, CH3Cl, CH2Cl2, and CHCl3 as well as large amounts of CH3OH, is scheduled to be dumped in stabilization ponds along with nitrate and nitrite salt solutions obtained as waste liquors from the N2O4 scrubbers. The wastes were investigated to determine the hazardous materials generated by such combinations, as well as the finite lifetimes of such materials in the stabilization ponds. Gas-liquid chromatography was used in the investigation. A series of experiments designed to convert nitrate and nitrite salts to the environmentally innocuous N2O and N2 using solar energy is reported. Results indicate that this solar conversion is feasible.
Correlative Microscopy of Neutron-Irradiated Materials
Briggs, Samuel A.; Sridharan, Kumar; Field, Kevin G.
2016-12-31
A nuclear reactor core is a highly demanding environment that presents several unique challenges for materials performance. Materials in modern light water reactor (LWR) cores must survive several decades in high-temperature (300-350°C) aqueous corrosion conditions while being subject to large amounts of high-energy neutron irradiation. Next-generation reactor designs seek to use more corrosive coolants (e.g., molten salts) and even greater temperatures and neutron doses. The high amounts of disorder and the unique crystallographic defects and microchemical segregation effects induced by radiation inevitably lead to property degradation of materials. Maintaining structural integrity and safety margins over the course of the reactor's service life thus necessitates the ability to understand and predict these degradation phenomena in order to develop new, radiation-tolerant materials that can maintain the required performance in these extreme conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, P.D.
Intercontinental ballistic missiles based in silos have become relatively vulnerable, at least in theory, to counter-force attacks. This theoretical vulnerability may not, in fact, be a serious practical concern; it is nonetheless troubling both to policy-makers and to the public. Furthermore, the present generation of ICBMs is aging (the Minuteman II single-warhead missile will exceed its operational life-span early in the next decade) and significant restructuring of the ballistic missile force may well be necessary if a Strategic Arms Reduction Treaty (START) is signed. This paper compares several proposed schemes for modernizing the ICBM force. Because the rail-garrison MIRVed mobile system is the least costly alternative to secure a large number of strategic warheads, it receives a comparatively large amount of attention.
Programming secure mobile agents in healthcare environments using role-based permissions.
Georgiadis, C K; Baltatzis, J; Pangalos, G I
2003-01-01
The healthcare environment consists of vast amounts of dynamic and unstructured information, distributed over a large number of information systems. Mobile agent technology is having an ever-growing impact on the delivery of medical information. It supports acquiring and manipulating information distributed over a large number of information systems, and it is suitable for medical staff untrained in computing. However, the introduction of mobile agents generates advanced threats to sensitive healthcare information unless the proper countermeasures are taken. By applying the role-based approach to the authorization problem, we ease the sharing of information between hospital information systems and reduce the administrative burden. Different initiators of the agent's migration result in different methods of assigning roles to the agent.
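A toy sketch of the role-based check: the migrating agent carries a role, and each hospital system consults that role's permission set before granting an action. Role names and permissions here are hypothetical, not from the paper.

```python
# Toy role-based authorization check for a migrating agent.
# Roles and permissions are illustrative placeholders.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "nurse": {"read_record"},
    "billing_agent": {"read_invoice"},
}

def authorize(agent_role: str, action: str) -> bool:
    """Grant the action only if the agent's role includes it."""
    return action in ROLE_PERMISSIONS.get(agent_role, set())

assert authorize("physician", "write_record")
assert not authorize("billing_agent", "read_record")
print("role checks passed")
```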
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2013-01-01
The U.S. Department of Energy (DOE) National Renewable Energy Laboratory (NREL) held a Biogas and Fuel Cells Workshop June 11-13, 2012, in Golden, Colorado, to discuss biogas and waste-to-energy technologies for fuel cell applications. The overall objective was to identify opportunities for coupling renewable biomethane with highly efficient fuel cells to produce electricity; heat; combined heat and power (CHP); or combined heat, hydrogen and power (CHHP) for stationary or motive applications. The workshop focused on biogas sourced from wastewater treatment plants (WWTPs), landfills, and industrial facilities that generate or process large amounts of organic waste, including large biofuel production facilities (biorefineries).
Chemical microreactor and method thereof
Morse, Jeffrey D.; Jankowski, Alan
2005-11-01
A chemical microreactor suitable for generating hydrogen fuel, via steam reforming of liquid sources such as ammonia, methanol, and butane mixed with an appropriate amount of water, contains capillary microchannels with integrated resistive heaters to facilitate the catalytic steam reforming reactions. One such microreactor employs a packed-catalyst capillary microchannel and at least one porous membrane. Another employs a porous membrane with a large surface area, or a porous membrane support structure containing a plurality of porous membranes having a large surface area in the aggregate, i.e., greater than about 1 m²/cm³. The packed-catalyst capillary microchannels, porous membranes, and porous membrane support structures may be formed by a variety of methods.
Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.
Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro
2018-04-16
In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational cost increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.
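The tiling idea itself is easy to demonstrate: each rectangle of the hologram plane can be computed independently, so memory use stays bounded by the tile size. Below is a toy point-source fringe calculation done tile by tile; the geometry, wavelength, and sampling are illustrative, and the paper's ray-wavefront conversion is not reproduced here.

```python
# Toy tiled hologram calculation: split the plane into rectangles and
# compute each tile's interference pattern from a few point sources
# independently (tiles could therefore run in parallel). All parameters
# are hypothetical placeholders.
import numpy as np

WAVELENGTH = 532e-9          # m
PITCH = 1e-6                 # pixel pitch, m
N, TILE = 1024, 256          # N x N hologram, TILE x TILE tiles
points = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.06)]  # (x, y, z) sources, m

k = 2 * np.pi / WAVELENGTH
hologram = np.zeros((N, N))

for ty in range(0, N, TILE):
    for tx in range(0, N, TILE):
        ys = (np.arange(ty, ty + TILE) - N / 2) * PITCH
        xs = (np.arange(tx, tx + TILE) - N / 2) * PITCH
        X, Y = np.meshgrid(xs, ys)
        field = np.zeros((TILE, TILE), dtype=complex)
        for px, py, pz in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += np.exp(1j * k * r) / r  # spherical wave contribution
        # Interfere with a unit-amplitude reference wave and record intensity.
        hologram[ty:ty + TILE, tx:tx + TILE] = np.abs(field + 1.0) ** 2

print(hologram.shape, round(float(hologram.mean()), 3))
```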
Minimising hydrogen sulphide generation during steam assisted production of heavy oil
Montgomery, Wren; Sephton, Mark A.; Watson, Jonathan S.; Zeng, Huang; Rees, Andrew C.
2015-01-01
The majority of global petroleum is in the form of highly viscous heavy oil. Traditionally heavy oil in sands at shallow depths is accessed by large scale mining activities. Recently steam has been used to allow heavy oil extraction with greatly reduced surface disturbance. However, in situ thermal recovery processes can generate hydrogen sulphide, high levels of which are toxic to humans and corrosive to equipment. Avoiding hydrogen sulphide production is the best possible mitigation strategy. Here we use laboratory aquathermolysis to reproduce conditions that may be experienced during thermal extraction. The results indicate that hydrogen sulphide generation occurs within a specific temperature and pressure window and corresponds to chemical and physical changes in the oil. Asphaltenes are identified as the major source of sulphur. Our findings reveal that for high sulphur heavy oils, the generation of hydrogen sulphide during steam assisted thermal recovery is minimal if temperature and pressure are maintained within specific criteria. This strict pressure and temperature dependence of hydrogen sulphide release can allow access to the world's most voluminous oil deposits without generating excessive amounts of this unwanted gas product. PMID:25670085
Hiramatsu, Ai; Hara, Yuji; Sekiyama, Makiko; Honda, Ryo; Chiemchaisri, Chart
2009-12-01
In the urban-rural fringe of the Bangkok Metropolitan Region, rapid urbanization is creating a land-use mixture of agricultural fields and residential areas. To develop appropriate policies to enhance recycling of municipal solid waste (MSW), current MSW management was investigated in the oboto (local administrative district) of Bang Maenang in Nonthaburi Province, adjoining Bangkok. The authors conducted a structured interview survey with waste-related organizations and local residents, analysed household waste generation, and performed global positioning system (GPS) tracking of municipal garbage trucks. It was found that MSW was collected and treated by local government, private-sector entities, and the local community separately. The lack of integrated management among these entities complicated waste flow in the study area, and some residences were not served by MSW collection. Organic waste, such as kitchen garbage and yard waste, accounted for a large proportion of waste generation but was underutilized. Through GPS/GIS analysis, the waste collection rate was estimated at 45.5-51.1% of total waste generation.
Lee, Cholyoung; Kim, Kyehyun; Lee, Hyuk
2018-01-15
Impervious surfaces are mainly artificial structures, such as rooftops, roads, and parking lots, that are covered by impenetrable materials. These surfaces are becoming the major causes of nonpoint source (NPS) pollution in urban areas, and the rapid progress of urban development is increasing the total amount of impervious surfaces and NPS pollution. Therefore, many cities worldwide have adopted a stormwater utility fee (SUF) that generates funds needed to manage NPS pollution. The amount of SUF is estimated based on the impervious ratio, which is calculated by dividing the total impervious surface area by the net area of an individual land parcel. Hence, in order to identify the exact impervious ratio, large-scale impervious surface maps (ISMs) are necessary. This study proposes and assesses various methods for generating large-scale ISMs for urban areas by using existing GIS data. Bupyeong-gu, a district in the city of Incheon, South Korea, was selected as the study area, and spatial data freely offered by national and local governments in South Korea were collected. First, three types of ISMs were generated using the land-cover map, the digital topographic map, and orthophotographs, to validate three methods that had been proposed conceptually by the Korea Environment Corporation. Then, to generate an ISM of higher accuracy, an integration method using all of the data was proposed. Error matrices were constructed and Kappa statistics were calculated to evaluate the accuracy, and overlay analyses were performed to examine the distribution of misclassified areas. The integration method delivered the highest accuracy (Kappa statistic of 0.99) compared with the three methods that use a single type of spatial data; however, longer production time and higher cost were limiting factors. Among the three methods using a single type of data, the land-cover map showed the highest accuracy, with a Kappa statistic of 0.91, so the mapping method using the land-cover map was judged more appropriate than the others. In conclusion, it is desirable to apply the integration method when generating the ISM with the highest accuracy; if time and cost are constrained, it would be effective to primarily use the land-cover map.
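For reference, the Kappa statistics cited above are derived from an error (confusion) matrix; below is a minimal sketch of Cohen's kappa for a two-class impervious/pervious comparison, with made-up counts.

```python
import numpy as np

# Cohen's kappa from a 2x2 error matrix (rows: mapped class,
# columns: reference class). Counts are invented for illustration.
error_matrix = np.array([
    [950, 40],   # mapped impervious: correct, commission error
    [30, 980],   # mapped pervious:   omission error, correct
], dtype=float)

n = error_matrix.sum()
observed = np.trace(error_matrix) / n                      # overall accuracy
expected = (error_matrix.sum(axis=0) * error_matrix.sum(axis=1)).sum() / n**2
kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.3f}")
```

With these counts the observed agreement (0.965) far exceeds chance agreement (about 0.50), giving kappa of roughly 0.93, in the same range as the 0.91-0.99 values reported above.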
SHIPPING CONTAINER FOR RADIOACTIVE MATERIAL
Nachbar, H.D.; Biggs, B.B.; Tariello, P.J.; George, K.O.
1963-01-15
A shipping container is described for transporting a large number of radioactive nuclear fuel element modules which produce a substantial amount of heat. The container comprises a primary pressure vessel and shield, and a rotatable head having an access port that can be indexed with module holders in the container. In order to remove heat generated in the fuel elements, a heat exchanger is arranged within the container and in contact with a heat exchange fluid therein. The heat exchanger communicates with additional external heat exchangers, which dissipate heat to the atmosphere. (AEC)
Furuta, Etsuko; Ito, Takeshi
2018-02-01
A new apparatus for measuring tritiated water in expired air was developed using plastic scintillator (PS) pellets and a low-background liquid scintillation counter. The sensitivity of the apparatus was sufficient when a large adapted Teflon vial was used. The measurement method generated low amounts of organic waste, because the PS pellets were reusable after rinsing, and had adequate detection limits. The apparatus is useful for the safety management of workers who are exposed to radioactive materials.
The Design and Development of a Management Information System for the Monterey Navy Flying Club.
1986-03-27
This thesis describes the design and development of a Management Information System for the Monterey Navy Flying Club. It supplies the tools necessary to enable the club manager to maintain all club records and generate required administrative and financial reports. The Monterey Navy Flying Club has one of the largest memberships of the Navy-sponsored flying clubs. As a result of this large membership and the amount of manual paperwork required to properly maintain club records, the manager's ability to provide necessary services and reports is severely hampered. The implementation of an efficient
NASA Astrophysics Data System (ADS)
Chen, Ke-Jung
2014-03-01
Modern cosmological simulations predict that the first generation of stars formed with a mass scale around 100 M⊙, about 300-400 million years after the Big Bang. When the first stars reached the end of their lives, many of them may have died as energetic supernovae (SNe) that could have significantly affected the early Universe by injecting large amounts of energy and metals into the primordial intergalactic medium. In this paper, we review current models of the first SNe, discussing the relevant background physics, computational methods and the latest results.
NASA Astrophysics Data System (ADS)
Hogeboom, Rick J.; Knook, Luuk; Hoekstra, Arjen Y.
2018-03-01
For centuries, humans have resorted to building dams to gain control over freshwater available for human consumption. Although dams and their reservoirs have made many important contributions to human development, they receive negative attention as well because of the large amounts of water they can consume through evaporation. We estimate the blue water footprint of the world's artificial reservoirs and attribute it to the purposes of hydroelectricity generation, irrigation water supply, residential and industrial water supply, flood protection, fishing and recreation, based on their economic value. We estimate that economic benefits from the 2235 reservoirs included in this study amount to 265 × 10⁹ US$ per year, with residential and industrial water supply and hydroelectricity generation as major contributors. The water footprint associated with these benefits is the sum of the water footprint of dam construction (<1% contribution) and evaporation from the reservoir's surface area, and globally adds up to 66 × 10⁹ m³ y⁻¹. The largest share of this water footprint (57%) is located in non-water-scarce basins and only 1% in year-round scarce basins. The primary purposes of a reservoir change with increasing water scarcity, from mainly hydroelectricity generation in non-scarce basins to residential and industrial water supply, irrigation water supply and flood control in scarcer areas.
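A minimal sketch of the value-based attribution described above: the reservoir's evaporative footprint is divided among purposes in proportion to each purpose's share of economic benefits. All figures below are placeholders, not values from the study.

```python
# Attribute a reservoir's evaporation to its purposes in proportion
# to the economic value of each purpose. All figures are placeholders.

evaporation_m3 = 120e6  # annual evaporation from the reservoir surface

benefits_usd = {        # assumed annual economic benefit per purpose
    "hydroelectricity": 40e6,
    "irrigation": 25e6,
    "residential_industrial_supply": 30e6,
    "flood_protection": 5e6,
}

total_value = sum(benefits_usd.values())
footprint_by_purpose = {
    purpose: evaporation_m3 * value / total_value
    for purpose, value in benefits_usd.items()
}
for purpose, wf in footprint_by_purpose.items():
    print(f"{purpose}: {wf / 1e6:.1f} million m3/yr")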
Glass and Fiber Glass Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Computers, Electronics, and Appliances Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Textiles Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Aluminum Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Fabricated Metals Footprint, October 2012 (MECS 2006) (in Spanish)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-19
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Cement Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-01
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Chemicals Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Food and Beverage Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
All Manufacturing Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Forest Products Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Foundries Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Plastics and Rubber Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Transportation Equipment Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Iron and Steel Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Machinery Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Petroleum Refining Footprint, October 2012 (MECS 2006)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-10-17
Manufacturing energy and carbon footprints map energy consumption and losses, as well as greenhouse gas emissions from fuel consumption, for fifteen individual U.S. manufacturing sectors (representing 94% of all manufacturing energy use) and for the entire manufacturing sector. By providing energy consumption and emissions figures broken down by end use, the footprints allow for comparisons of energy use and emissions sources both within and across sectors. The footprints portray a large amount of information for each sector, including: * Comparison of the energy generated offsite and transferred to facilities versus that generated onsite * Nature and amount of energy consumed by end use within facilities * Magnitude of the energy lost both outside and inside facility boundaries * Magnitude of the greenhouse gas emissions released as a result of manufacturing energy use. Energy losses indicate opportunities to improve efficiency by implementing energy management best practices, upgrading energy systems, and developing new technologies. Footprints are available for each sector. Data is presented in two levels of detail. The first page provides a high-level snapshot of the offsite and onsite energy flow, and the second page shows the detail for onsite generation and end use of energy. The principal energy use data source is the U.S. Department of Energy (DOE) Energy Information Administration's (EIA's) Manufacturing Energy Consumption Survey (MECS), for consumption in the year 2006, when the survey was last completed.
Parallel generation of uniform fine droplets at hundreds of kilohertz in a flow-focusing module.
Bardin, David; Kendall, Michael R; Dayton, Paul A; Lee, Abraham P
2013-01-01
Droplet-based microfluidic systems enable a variety of biomedical applications, from point-of-care diagnostics with third-world implications, to targeted therapeutics alongside medical ultrasound, to molecular screening and genetic testing. Though these systems maintain the key advantage of precise control of droplet size and composition as compared to conventional production methods, the low rates at which droplets are produced limit translation beyond the laboratory setting. Additionally, previous attempts to scale up shear-based microfluidic systems focused on increasing the volumetric throughput and formed large droplets, negating many practical applications of emulsions such as site-specific therapeutics. We present the operation of a parallel module with eight flow-focusing orifices in the dripping regime of droplet formation for the generation of uniform fine droplets at rates in the hundreds of kilohertz. Elevating the capillary number to access dripping, generation of monodisperse droplets of liquid perfluoropentane in the parallel module exceeded 3.69 × 10⁵ droplets per second, or 1.33 × 10⁹ droplets per hour, at a mean diameter of 9.8 μm. Our microfluidic method offers a novel means to amass uniform fine droplets in practical amounts, for instance, to satisfy clinical needs, with the potential for modification to form massive amounts of more complex droplets.
Identification of p53 unbound to T-antigen in human cells transformed by simian virus 40 T-antigen.
O'Neill, F J; Hu, Y; Chen, T; Carney, H
1997-02-27
In several clones of SV40-transformed human cells, we investigated the relative amounts of large T-Antigen (T-Ag) and p53 proteins, both unbound and associated within complexes, with the goal of identifying changes associated with transformation and immortalization. Cells were transformed by wild type (wt) T-Ag, a functionally temperature sensitive T-Ag (tsA58) and other T-Ag variants. Western analysis showed that while most of the T-Ag was ultimately bound by p53, most of the p53 remained unbound to T-Ag. Unbound p53 remained in the supernatant after a T-Ag immunoprecipitation and p53 was present in two to fourfold excess of T-Ag. In one transformant there was five to tenfold more p53 than T-Ag. p53 was present in transformants in amounts at least 200-fold greater than in untransformed human cells. In wt and variant T-Ag transformants, including those generated with tsA58 T-Ag, large amounts of unbound p53 were present in both pre-crisis and immortal cells and when the cells were grown at permissive or non-permissive temperatures. We also found that in transformants produced by tsA58, an SV40/JCV chimeric T-Ag and other variants, T-Ag appeared to form a complex with p53 slowly perhaps because one or both proteins matured slowly. The presence in transformed human cells of large amounts of unbound p53 and in excess of T-Ag suggests that sequestration of p53 by T-Ag, resulting from complex formation, is required neither for morphological transformation nor immortalization of human cells. Rather, these results support the proposal that high levels of p53, the T-Ag/p53 complexes, or other biochemical event(s), lead to transformation and immortalization of human cells by T-Ag.
Household hazardous wastes as a potential source of pollution: a generation study.
Ojeda-Benítez, Sara; Aguilar-Virgen, Quetzalli; Taboada-González, Paul; Cruz-Sotelo, Samantha E
2013-12-01
Certain domestic wastes exhibit characteristics that render them dangerous, such as explosiveness, flammability, spontaneous combustion, reactivity, toxicity and corrosiveness. The lack of information about their generation and composition hinders the creation of special programs for their collection and treatment, making these wastes a potential threat to human health and the environment. We attempted to quantify the levels of hazardous household waste (HHW) generated in Mexicali, Mexico. The analysis considered three socioeconomic strata and eight categories. The sampling was undertaken on a house-by-house basis, and hypothesis testing was based on differences between two proportions for each of the eight categories. In this study, HHW comprised 3.49% of the total generated waste, which exceeded that reported in previous studies in Mexico. The greatest quantity of HHW was generated by the middle stratum; in the upper stratum, most packages were discarded with their contents remaining. Cleaning products represent 45.86% of the HHW generated. Statistical differences were not observed for only two categories among the three social strata. The scarcity of studies on HHW generation limits direct comparisons. Any decrease in waste generation within the middle social stratum will have a large effect on the total amount of waste generated, and decrease their impact on environmental and human health.
Building Simulation Modelers: are we big-data ready?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan
Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight from. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generate over 200 TB of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated, and techniques that perform analysis in situ with the simulations as they run are discussed. The paper concludes with an example and elaboration of the tipping point at which it becomes more expensive to store the output than to re-run a set of simulations.
Far from thunderstorm UV transient events in the atmosphere measured by Vernov satellite
NASA Astrophysics Data System (ADS)
Morozenko, Violetta; Klimov, Pavel; Khrenov, Boris; Gali, Garipov; Margarita, Kaznacheeva; Mikhail, Panasyuk; Sergei, Svertilov; Robert, Holzworth
2016-04-01
A steady, self-contained classification of events such as sprites, elves, and blue jets has emerged over the period of transient luminous event (TLE) observation. According to theories of TLE origin, a thunderstorm region in which lightning with large peak currents is generated must be present. However, some far-from-thunderstorm events were also detected, pointing to other TLE generation mechanisms. To investigate the nature of TLEs, the Universitetsky-Tatiana-2 and Vernov satellites were equipped with ultraviolet (240-400 nm) and red-infrared (>610 nm) detectors. Both detectors were triggered by flashes in the UV band, where lightning emission is quite faint, independently of lightning activity. The lowered threshold on the Vernov satellite allowed selection of a large number of TLEs, including numerous examples of far-from-thunderstorm events. Such events were not correlated with lightning activity measured by the World Wide Lightning Location Network (WWLLN) over a large area of approximately 10⁷ km² for 30 minutes before and after the time of registration. The characteristic features of this type of event are the absence of a significant signal in the red-infrared detector channel and a relatively small number of photons (less than 5 × 10²¹). A large number of these lightning-free flashes were detected at high latitudes over the ocean (30°S-60°S). Lightning activity at the magnetic conjugate point was also analyzed, and a relationship between far-from-thunderstorm events and specific lightning discharges was not confirmed. Far-from-thunderstorm events are thus a new type of transient phenomenon in the upper atmosphere that is not associated with thunderstorm activity. The mechanism of such discharges is not clear, though a substantial body of experimental evidence for the existence of such flashes has accumulated. Based on Vernov satellite data, the temporal profile, duration, geographic location and number of photons generated in far-from-thunderstorm atmospheric events have been analyzed, and discussion of the origin of these events is in progress.
NASA Astrophysics Data System (ADS)
Tanaka, Hidemi; Shimada, Koji; Toyoshima, Tsuyoshi; Obara, Tomohiro; Niizato, Tadafumi
2004-12-01
Lithological heterogeneity of low-P/T metamorphic rocks in the southern area of the Hidaka metamorphic belt (HMB) was formed through the historical development of the HMB while these rocks resided in the ductile lower crust. Many strain-localized mylonite zones (<100 m in thickness) are preferentially developed within S-type tonalite and pelitic gneiss, which are characterized by a large modal amount of phyllosilicates (biotite + muscovite + chlorite) and quartz compared with other lithofacies in the HMB. Mylonitic foliations become more conspicuous toward the center of each shear zone, associated with an increase in the amount of phyllosilicate minerals, indicating that fluid-enhanced weakening mechanisms operated in the plastic shear zones. Pseudotachylyte veins are observed exclusively in these mylonite zones and were generated during the exhumation stage of the HMB. We conclude that the seismic slip zones in the southern HMB were initiated in the ductile lower crust by concentration of localized plastic shear zones within the phyllosilicate- and quartz-rich lithofacies, which were heterogeneously formed by earlier metamorphic and magmatic events. These zones were then further weakened by fluid-enhanced plastic deformation, and seismic slip finally occurred at the bottom of the seismogenic upper crust during exhumation of the HMB.
Caron, Laurent; Nardello, Véronique; Mugge, José; Hoving, Erik; Alsters, Paul L; Aubry, Jean-Marie
2005-02-15
Chemically generated singlet oxygen (¹O₂, ¹Δg), produced by molybdate-catalyzed hydrogen peroxide decomposition, is able to oxidize a wide range of hydrophobic substrates, provided a suitable reaction medium such as a microemulsion system is used. However, high substrate concentrations or poorly reactive organics require large amounts of H2O2, which generate large amounts of water and thus destabilize the system. We report results obtained on combining dark singlet oxygenation of hydrophobic substrates in microemulsions with a pervaporation membrane process. To avoid composition alterations after addition of H2O2 during the peroxidation, the reaction mixture circulates through a ceramic membrane module that enables partial and selective dewatering of the microemulsion. Optimization phase diagrams of sodium molybdate/water/alcohol/anionic surfactant/organic solvent have been elaborated to maximize the catalyst concentration and therefore the reaction rate. The membrane selectivity towards the mixture constituents has been investigated, showing that high retention is observed for the catalyst, organic solvents and hydrophobic substrates, but not for n-propanol (the cosurfactant) and water. The efficiency of this process is illustrated with the peroxidation of a poorly reactive substrate, viz., beta-pinene.
Laser decontamination of the radioactive lightning rods
NASA Astrophysics Data System (ADS)
Potiens, A. J.; Dellamano, J. C.; Vicente, R.; Raele, M. P.; Wetter, N. U.; Landulfo, E.
2014-02-01
Between 1970 and 1980, Brazil had a significant market for radioactive lightning rods (RLR). The device consists of an air terminal with one or more sources of americium-241 attached to it. The sources were intended to ionize the surrounding air and thereby increase the attraction of atmospheric discharges. Because of their ineffectiveness, the nuclear regulatory authority in Brazil suspended the license for manufacturing, commerce and installation of RLR in 1989, and determined that replaced RLR were to be collected at a centralized radioactive waste management facility for treatment. The first step in RLR treatment is to remove the radioactive sources. Though the sources can be easily removed, some contamination is found over the remaining metal scrap, which must be decontaminated for release; otherwise, it must be treated as radioactive waste. Decontamination using various chemicals has proven inefficient and generates large amounts of secondary waste. This work shows preliminary results of the decontamination of 241Am-contaminated metal scrap, generated in the treatment of radioactive lightning rods, by laser ablation. A nanosecond Nd:YAG laser with 300 mJ pulse energy was used, leaving only a small amount of secondary waste to be treated.
Nutritional aspects of second generation soy foods.
Alezandro, Marcela Roquim; Granato, Daniel; Lajolo, Franco Maria; Genovese, Maria Inés
2011-05-25
Samples of 15 second-generation soy-based products (n = 3), commercially available, were analyzed for their protein and isoflavone contents and in vitro antioxidant activity by means of the Folin-Ciocalteu reducing ability, DPPH radical scavenging capacity, and oxygen radical absorbance capacity. Isoflavone identification and quantification were performed by high-performance liquid chromatography. Products containing soy and/or soy-based ingredients represent important sources of protein with low amounts of fat. However, a large variation in isoflavone content and in vitro antioxidant capacity was observed. The isoflavone content varied from 2.4 to 18.1 mg/100 g (FW), with soy kibe and soy sausage presenting the highest amounts. Chocolate had the highest antioxidant capacity, but this was probably associated with the addition of cocoa liquor, a well-known source of polyphenolics. This study showed that soy-based foods do not present a significant content of isoflavones compared with the grain, and that their in vitro antioxidant capacity is related not to these compounds but to the presence of other phenolics and synthetic antioxidants, such as sodium erythorbate. However, they may represent alternative sources, providing soy protein, isoflavones, and vegetable fat for those who are not ready to eat traditional soy foods.
Developing a Hybrid Solar/Wind Powered Drip Irrigation System for Dragon Fruit Yield
NASA Astrophysics Data System (ADS)
Widiastuti, I.; Wijayanto, D. S.
2017-03-01
Irrigation operations consume large amounts of water and energy, which affect the total cost of crop production. Developing an efficient irrigation system that supplies a precise amount of water and conserves energy can bring benefits not only by reducing operating costs but also by enhancing farmland productivity. This article presents an irrigation method that promotes sustainable use of water and energy appropriate for a developing tropical country. It proposes a drip irrigation system supported by a combined solar-wind electric power generation system for efficient use of water in dragon fruit cultivation. The electric power generated is used to drive a water pump filling a storage tank for irrigating a 3000 m² dragon fruit field in Nguntoronadi, Wonogiri, Indonesia. In designing the irrigation system, the plants' water requirement was identified based on the reference evapotranspiration of the area. A cost/benefit analysis was performed to evaluate the economic feasibility of the proposed scheme. The installation of this solar and wind drip irrigation system helps provide a sufficient quantity of water to each plant using renewable energy sources, reducing dependence on fossil fuel.
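A back-of-envelope sketch of the sizing logic implied above follows; the crop coefficient, reference evapotranspiration, pump head, and efficiency values are illustrative assumptions, not figures from the study.

```python
# Rough sizing for a drip irrigation system.
# All parameter values are illustrative assumptions.

ETO_MM_PER_DAY = 4.5      # assumed reference evapotranspiration
KC = 0.7                  # assumed crop coefficient for dragon fruit
AREA_M2 = 3000.0          # field area from the abstract

# Daily water requirement: 1 mm of water over 1 m^2 equals 1 litre.
water_l_per_day = ETO_MM_PER_DAY * KC * AREA_M2
water_m3_per_day = water_l_per_day / 1000.0

# Pump energy to lift that volume into the storage tank.
RHO_G = 9810.0            # water weight density, N/m^3
HEAD_M = 10.0             # assumed total dynamic head
PUMP_EFF = 0.45           # assumed wire-to-water efficiency

energy_j = RHO_G * HEAD_M * water_m3_per_day / PUMP_EFF
print(f"water demand: {water_m3_per_day:.1f} m3/day")
print(f"pump energy: {energy_j / 3.6e6:.2f} kWh/day")
```

With these assumed values the daily demand is about 9.5 m³ and the pumping load under 1 kWh/day, a scale a small solar-wind hybrid can plausibly cover.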
Augmenting Conceptual Design Trajectory Tradespace Exploration with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen
2016-01-01
Within conceptual design, changes occur rapidly due to a combination of uncertainty and shifting requirements. To stay relevant in this fluid period, trade studies must also be performed rapidly. In order to drive down analysis time while improving the information gained from these studies, surrogate models can be created to represent the complex output of a tool or tools within a specified tradespace. Creating such a model, however, requires collecting a large amount of data in a short time, so the historical approach of relying on subject matter experts to generate the required data is infeasible on schedule grounds. By implementing automation and distributed analysis, the required data can instead be generated in a fraction of the time. Previous work focused on setting up a tool called multiPOST, capable of orchestrating many simultaneous runs of an analysis tool and assessing these automated analyses using heuristics gleaned from the best practices of current subject matter experts. In this update to the previous work, elements of graph theory are included to further drive down analysis time by leveraging previously gathered data. The approach is shown to outperform the previous method in both the time required and the quantity and quality of data produced.
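As a minimal illustration of the surrogate-model idea mentioned above (not the multiPOST tool itself), one can sample an expensive analysis over the tradespace, fit a cheap approximation, and query the approximation during trade studies; the response function and sampling below are placeholders.

```python
import numpy as np

# Toy surrogate-model workflow: sample an expensive analysis tool,
# fit a cheap polynomial surrogate, query the surrogate thereafter.
# "expensive_tool" stands in for a real simulation code.

def expensive_tool(x):
    return np.sin(3 * x) + 0.5 * x**2   # placeholder response

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=50)          # sampled tradespace points
y_train = expensive_tool(x_train)              # would be distributed runs

coeffs = np.polyfit(x_train, y_train, deg=6)   # cheap surrogate fit
surrogate = np.poly1d(coeffs)

x_query = 0.7
print(surrogate(x_query), expensive_tool(x_query))  # fast vs. slow answer
```

The design choice is the same one the abstract describes: pay the data-collection cost once, up front and in parallel, so that each subsequent trade-study query costs essentially nothing.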
A fault constitutive relation accounting for thermal pressurization of pore fluid
Andrews, D.J.
2002-01-01
The heat generated in a slip zone during an earthquake can raise fluid pressure and thereby reduce frictional resistance to slip. The amount of fluid pressure rise depends on the associated fluid flow. The heat generated at a given time produces fluid pressure that decreases inversely with the square root of hydraulic diffusivity times the elapsed time. If the slip velocity function is crack-like, there is a prompt fluid pressure rise at the onset of slip, followed by a slower increase. The stress drop associated with the prompt fluid pressure rise increases with rupture propagation distance. The threshold propagation distance at which thermally induced stress drop starts to dominate over frictionally induced stress drop is proportional to hydraulic diffusivity. If hydraulic diffusivity is 0.02 m2/s, estimated from borehole samples of fault zone material, the threshold propagation distance is 300 m. The stress wave in an earthquake will induce an unknown amount of dilatancy and will increase hydraulic diffusivity, both of which will lessen the fluid pressure effect. Nevertheless, if hydraulic diffusivity is no more than two orders of magnitude larger than the laboratory value, then stress drop is complete in large earthquakes.
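Two of the stated scalings can be captured in a few lines (a sketch of the quoted relationships only, not Andrews' full constitutive relation), anchored by the quoted example that a diffusivity of 0.02 m2/s gives a 300 m threshold distance.

```python
import math

# Encodes two scalings stated in the abstract: pressure from heat
# released at time t' decays as 1/sqrt(alpha * (t - t')), and the
# threshold propagation distance is proportional to hydraulic
# diffusivity alpha (0.02 m^2/s -> 300 m per the quoted example).
ALPHA_REF, L_REF = 0.02, 300.0  # m^2/s, m

def threshold_distance(alpha_m2_s):
    return L_REF * (alpha_m2_s / ALPHA_REF)

def pressure_decay_factor(alpha_m2_s, elapsed_s):
    # relative decay of the pressure pulse; proportionality constant omitted
    return 1.0 / math.sqrt(alpha_m2_s * elapsed_s)

print(threshold_distance(2.0))  # diffusivity 100x larger -> 30 km threshold
```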
Learning to rank-based gene summary extraction.
Shang, Yue; Hao, Huihui; Wu, Jiajin; Lin, Hongfei
2014-01-01
In recent years, the biomedical literature has been growing rapidly. These articles provide a large amount of information about proteins, genes and their interactions. Reading such a huge amount of literature is a tedious task for researchers seeking knowledge about a gene. As a result, it is valuable for biomedical researchers to gain a quick understanding of a query concept by integrating its relevant resources. In the task of gene summary generation, we regard automatic summarization as a ranking problem and apply the method of learning to rank to solve it. This paper uses three features as the basis for sentence selection: gene ontology relevance, topic relevance and TextRank. From there, we obtain the feature weight vector using the learning to rank algorithm, predict the scores of candidate summary sentences, and select the top sentences to generate the summary. ROUGE (a toolkit for the automatic evaluation of summaries) was used to evaluate the summarization results, and the experiments showed that our method outperforms the baseline techniques. According to the experimental results, the combination of the three features improves the performance of the summary. The application of learning to rank also facilitates the further expansion of the feature set for measuring the significance of sentences.
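Once the weight vector has been learned, the scoring-and-selection step is straightforward; the sketch below uses placeholder feature values and weights, not values from the paper.

```python
# Sketch of the scoring step: a learned weight vector scores each
# candidate sentence on [gene-ontology relevance, topic relevance,
# TextRank], and the top sentences form the summary.
candidates = {
    "Gene X regulates apoptosis.":       [0.8, 0.6, 0.7],
    "Samples were stored at -80C.":      [0.1, 0.2, 0.3],
    "Gene X interacts with protein Y.":  [0.7, 0.8, 0.5],
}
weights = [0.5, 0.3, 0.2]  # placeholder weights, learned offline

def score(features):
    return sum(w * f for w, f in zip(weights, features))

summary = sorted(candidates, key=lambda s: score(candidates[s]), reverse=True)[:2]
print(summary)
```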
The effect of spin in swing bowling in cricket: model trajectories for spin alone
NASA Astrophysics Data System (ADS)
Robinson, Garry; Robinson, Ian
2015-02-01
In ‘swing’ bowling, as employed by fast and fast-medium bowlers in cricket, back-spin along the line of the seam is normally applied in order to keep the seam vertical and to provide stability against ‘wobble’ of the seam. Whilst spin is normally thought of as primarily being the slow bowler's domain, the spin applied by the swing bowler has the side-effect of generating a lift or Magnus force. This force, depending on the orientation of the seam and hence that of the back-spin, can have a side-ways component as well as the expected vertical ‘lift’ component. The effect of the spin itself, in influencing the trajectory of the fast bowler's delivery, is normally not considered, presumably being thought of as negligible. The purpose of this paper is to investigate, using calculated model trajectories, the amount of side-ways movement due to the spin and to see how this predicted movement compares with the total observed side-ways movement. The size of the vertical lift component is also estimated. It is found that, although the spin is an essential part of the successful swing bowler's delivery, the amount of side-ways movement due to the spin itself amounts to a few centimetres or so, and is therefore small, but perhaps not negligible, compared to the total amount of side-ways movement observed. The spin does, however, provide a considerable amount of lift compared to the equivalent delivery bowled without spin, altering the point of pitching by up to 3 m, a very large amount indeed. Thus, for example, bowling a ball with the seam pointing directly down the pitch and not designed to swing side-ways at all, but with the amount of back-spin varied, could provide a very powerful additional weapon in the fast bowler's arsenal. So-called ‘sling bowlers’, who use a very low arm action, can take advantage of spin since effectively they can apply side-spin to the ball, giving rise to a large side-ways movement, ~20 cm or more, which certainly is significant. For a given amount of spin the amount of side-ways movement increases as the bowler's delivery arm becomes more horizontal. This technique could also be exploited by normal spin bowlers as well as swing bowlers.
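A toy version of such a model trajectory (assumed round-number coefficients; forward speed held constant, drag and gravity ignored) reproduces the few-centimetre scale of spin-induced side-ways movement the paper reports.

```python
import math

# Toy side-ways trajectory under a Magnus force alone. The Magnus
# coefficient S is an assumed round number, not a measured value.
M = 0.156                # ball mass, kg
V = 38.0                 # release speed, m/s (about 137 km/h)
S = 2.0e-5               # assumed Magnus coefficient, N/((rad/s)(m/s))
OMEGA = 2 * math.pi * 10 # spin of 10 rev/s
PITCH = 17.0             # distance travelled before pitching, m

def sideways_deflection():
    dt, x, y, vy = 1e-3, 0.0, 0.0, 0.0  # x down the pitch, y side-ways
    while x < PITCH:
        ay = S * OMEGA * V / M          # Magnus acceleration component
        vy += ay * dt
        y += vy * dt
        x += V * dt
    return y

print(f"side-ways movement: {100 * sideways_deflection():.1f} cm")  # ~3 cm
```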
The Spatial Footprint of Natural Gas-Fired Electricity
NASA Astrophysics Data System (ADS)
Jordaan, S. M.; Heath, G.; Macknick, J.; Mohammadi, E.; Ben-Horin, D.; Urrea, V.; Marceau, D.
2015-12-01
Consistent comparisons of the amount of land required for different electricity generation technologies are challenging because land use associated with fossil fuel acquisition and delivery has not been well characterized or empirically grounded. This research focuses on improving estimates of the life cycle land use of natural gas-fired electricity (m2/MWh generated) through the novel combination of inventories of natural gas-related infrastructure, satellite imagery analysis and gas production estimates. We focus on seven counties that represent 98% of the total gas production in the Barnett Shale (Texas), evaluating over 500 sites across five life cycle stages (gas production, gathering, processing, transmission, and power generation as well as produced water disposal). We find that a large fraction of total life cycle land use is related to gathering (midstream) infrastructure, particularly pipelines; access roads related to all stages also contribute a large life cycle share. Results were sensitive to several inputs, including well lifetime, pipeline right of way, number of wells per site, variability of heat rate for electricity generation, and facility lifetime. Through this work, we have demonstrated a novel, highly-resolved and empirical method for estimating life cycle land use from natural gas infrastructure in an important production region. When replicated for other gas production regions and other fuels, the results can enable more empirically-grounded and robust comparisons of the land footprint of alternative energy choices.
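The normalization at the core of such an estimate is simple; the sketch below uses entirely hypothetical stage areas and lifetime generation to show how stage-level land areas combine into m2/MWh.

```python
# Back-of-envelope land-use intensity (m2/MWh), assumed numbers only.
stage_area_m2 = {  # hypothetical allocated land per life cycle stage
    "production_pads": 40000.0,
    "gathering_pipelines": 90000.0,
    "processing": 15000.0,
    "transmission": 25000.0,
    "power_plant": 30000.0,
}
lifetime_generation_mwh = 5.0e6  # assumed MWh over facility lifetimes

intensity = sum(stage_area_m2.values()) / lifetime_generation_mwh
print(f"{intensity:.3f} m2/MWh")  # 0.040 m2/MWh with these inputs
```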
Potential assessment of establishing a renewable energy plant in a rural agricultural area.
Su, Ming-Chien; Kao, Nien-Hsin; Huang, Wen-Jar
2012-06-01
An evaluation of the green energy potential generated from biogas and solar power, using agricultural manure waste and a photovoltaic (PV) system, was conducted in a large geographical area of a rural county with low population density and low pollution. The studied area, Shoufeng Township in Hualien County, is located in eastern Taiwan, where a large amount of manure waste is generated from pig farms that are scattered throughout the county. The objective of the study is to assess the possibility of establishing an integrated manure waste treatment plant by using the generated biogas, incorporated with the PV system, to produce renewable energy and then feed it back to the participating farms. A field investigation, geographic information system (GIS) application, empirical equation development, and RETScreen modeling were conducted in the study. The results indicate that Shoufeng Township has the highest priority for setting up an integrated treatment and renewable energy plant, based on GIS mapping within a 10-km transportation radius. Two scenarios were developed in assessing the renewable energy plant, and the estimated electricity generation and greenhouse gas (GHG) reduction were evaluated. Under the current governmental green energy scheme and from a long-term perspective, the assessment shows great potential for establishing the plant, especially in reducing environmental pollution problems, treating waste, and developing suitable renewable energy.
BATCH-GE: Batch analysis of Next-Generation Sequencing data for genome editing assessment
Boel, Annekatrien; Steyaert, Woutert; De Rocker, Nina; Menten, Björn; Callewaert, Bert; De Paepe, Anne; Coucke, Paul; Willaert, Andy
2016-01-01
Targeted mutagenesis by the CRISPR/Cas9 system is currently revolutionizing genetics. The ease of this technique has enabled genome engineering in-vitro and in a range of model organisms and has pushed experimental dimensions to unprecedented proportions. Due to its tremendous progress in terms of speed, read length, throughput and cost, Next-Generation Sequencing (NGS) has been increasingly used for the analysis of CRISPR/Cas9 genome editing experiments. However, the current tools for genome editing assessment lack flexibility and fall short in the analysis of large amounts of NGS data. Therefore, we designed BATCH-GE, an easy-to-use bioinformatics tool for batch analysis of NGS-generated genome editing data, available from https://github.com/WouterSteyaert/BATCH-GE.git. BATCH-GE detects and reports indel mutations and other precise genome editing events and calculates the corresponding mutagenesis efficiencies for a large number of samples in parallel. Furthermore, this new tool provides flexibility by allowing the user to adapt a number of input variables. The performance of BATCH-GE was evaluated in two genome editing experiments, aiming to generate knock-out and knock-in zebrafish mutants. This tool will not only contribute to the evaluation of CRISPR/Cas9-based experiments, but will be of use in any genome editing experiment and has the ability to analyze data from every organism with a sequenced genome. PMID:27461955
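BATCH-GE works from aligned NGS data; the sketch below only illustrates the per-sample efficiency bookkeeping, with the read classifier stubbed out and toy reads standing in for real alignments.

```python
# Sketch of the core bookkeeping such a tool performs per sample:
# mutagenesis efficiency as the fraction of reads carrying an edit at
# the target site. The has_edit predicate is assumed to be provided.
def mutagenesis_efficiency(reads, has_edit):
    total = edited = 0
    for r in reads:
        total += 1
        edited += bool(has_edit(r))
    return edited / total if total else 0.0

# Toy usage: reads are strings, and an "edit" is any deviation from a
# 10-bp reference window.
REF = "ACGTACGTAC"
reads = ["ACGTACGTAC", "ACGTAC-TAC", "ACGTACGTAC", "ACGTACGGTAC"]
print(mutagenesis_efficiency(reads, lambda r: r != REF))  # 0.5
```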
NASA Astrophysics Data System (ADS)
Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.
2017-09-01
We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from minerals and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.
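A minimal Monte Carlo sketch of the uncertainty propagation described, with a stand-in for the coupled compaction/chemistry model and an assumed distribution for the equilibrium constant.

```python
import random

# Sample uncertain equilibrium constants and push each draw through a
# (stand-in) chemical model to get a distribution of CO2 partial
# pressure. The model function is hypothetical, not the paper's.
def pco2_model(log_k):
    return 10 ** (0.8 * log_k - 1.5)  # bar, illustrative only

LOG_K_MEAN, LOG_K_SD = 2.0, 0.3  # assumed uncertainty in log K
draws = sorted(pco2_model(random.gauss(LOG_K_MEAN, LOG_K_SD))
               for _ in range(10000))
print(f"median pCO2 = {draws[5000]:.2f} bar, "
      f"90% interval = [{draws[500]:.2f}, {draws[9500]:.2f}] bar")
```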
Cochran, Kimberly; Townsend, Timothy; Reinhart, Debra; Heck, Howell
2007-01-01
Methodology for the accounting, generation, and composition of building-related construction and demolition (C&D) debris at a regional level was explored. Six specific categories of debris were examined: residential construction, nonresidential construction, residential demolition, nonresidential demolition, residential renovation, and nonresidential renovation. Debris produced from each activity was calculated as the product of the total area of activity and the waste generated per unit area of activity. Similarly, composition was estimated as the product of the total area of activity and the amount of each waste component generated per unit area. The area of activity was calculated using statistical data, and individual site studies were used to assess the average amount of waste generated per unit area. The application of the methodology was illustrated using Florida, US: approximately 3,750,000 metric tons of building-related C&D debris were estimated to have been generated in Florida in 2000. Of that amount, concrete represented 56%, wood 13%, drywall 11%, miscellaneous debris 8%, asphalt roofing materials 7%, metal 3%, cardboard 1%, and plastic 1%. This model differs from others because it accommodates regional construction styles and available data. The resulting per capita generation amount is less than the US estimate, attributable to the high construction and low demolition activity seen in Florida.
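The accounting identity the study describes translates directly into code; the generation rates and activity areas below are placeholders, not the study's values.

```python
# Debris mass = activity area x waste generated per unit area, summed
# over activity categories; composition scales the same way.
rates_kg_per_m2 = {  # hypothetical generation rates
    "residential_construction": 20.0,
    "nonresidential_construction": 18.0,
    "residential_demolition": 500.0,
    "nonresidential_demolition": 450.0,
    "residential_renovation": 35.0,
    "nonresidential_renovation": 30.0,
}
activity_m2 = {k: 1.0e6 for k in rates_kg_per_m2}  # assumed areas

total_tons = sum(activity_m2[k] * rates_kg_per_m2[k]
                 for k in rates_kg_per_m2) / 1000.0
print(f"{total_tons:,.0f} metric tons")
```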
Big biomedical data and cardiovascular disease research: opportunities and challenges.
Denaxas, Spiros C; Morley, Katherine I
2015-07-01
Electronic health records (EHRs), data generated and collected during normal clinical care, are increasingly being linked and used for translational cardiovascular disease research. Electronic health record data can be structured (e.g. coded diagnoses) or unstructured (e.g. clinical notes) and increasingly encapsulate medical imaging, genomic and patient-generated information. Large-scale EHR linkages enable researchers to conduct high-resolution observational and interventional clinical research at an unprecedented scale. A significant amount of preparatory work and research, however, is required to identify, obtain, and transform raw EHR data into research-ready variables that can be statistically analysed. This study critically reviews the opportunities and challenges that EHR data present in the field of cardiovascular disease clinical research and provides a series of recommendations for advancing and facilitating EHR research.
CARGO: effective format-free compressed storage of genomic information
Roguski, Łukasz; Ribeca, Paolo
2016-01-01
The recent super-exponential growth in the amount of sequencing data generated worldwide has put techniques for compressed storage into the focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here, we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors and scale well to multi-TB datasets. All CARGO software components can be freely downloaded for academic and non-commercial use from http://bio-cargo.sourceforge.net. PMID:27131376
Analysis of communication in the standard versus automated aircraft
NASA Technical Reports Server (NTRS)
Veinott, Elizabeth S.; Irwin, Cheryl M.
1993-01-01
Past research has shown crew communication patterns to be associated with overall crew performance, recent flight experience together, low- and high-error crew performance, and personality variables. However, differences in communication patterns as a function of aircraft type and level of aircraft automation have not been fully addressed. Crew communications from ten MD-88 and twelve DC-9 crews were obtained during a full-mission simulation. In addition to large differences in overall amount of communication during the normal and abnormal phases of flight (DC-9 crews generating less speech than MD-88 crews), differences in specific speech categories were also found. Log-linear analyses also generated speaker-response patterns related to each aircraft type, although future analyses of these patterns will need to account for variations due to crew performance.
Exploitation of Food Industry Waste for High-Value Products.
Ravindran, Rajeev; Jaiswal, Amit K
2016-01-01
A growing global population leads to an increasing demand for food production and the processing industry associated with it and consequently the generation of large amounts of food waste. This problem is intensified due to slow progress in the development of effective waste management strategies and measures for the proper treatment and disposal of waste. Food waste is a reservoir of complex carbohydrates, proteins, lipids, and nutraceuticals and can form the raw materials for commercially important metabolites. The current legislation on food waste treatment prioritises the prevention of waste generation and least emphasises disposal. Recent valorisation studies for food supply chain waste opens avenues to the production of biofuels, enzymes, bioactive compounds, biodegradable plastics, and nanoparticles among many other molecules. Copyright © 2015 Elsevier Ltd. All rights reserved.
New tool to assemble repetitive regions using next-generation sequencing data
NASA Astrophysics Data System (ADS)
Kuśmirek, Wiktor; Nowak, Robert M.; Neumann, Łukasz
2017-08-01
The next generation sequencing techniques produce a large amount of sequencing data. Some parts of the genome are composed of repetitive DNA sequences, which are very problematic for the existing genome assemblers. We propose a modification of the algorithm for DNA assembly, which uses the relative frequency of reads to properly reconstruct repetitive sequences. The new approach was implemented and tested; as a demonstration of the capability of our software we present some results for model organisms. The new implementation uses a three-layer software architecture, where the presentation layer, data processing layer, and data storage layer are kept separate. Source code as well as a demo application with web interface and the additional data are available at the project web-page: http://dnaasm.sourceforge.net.
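The dnaasm algorithm itself is more involved, but the intuition of using relative read frequency can be sketched with toy numbers: a region whose read coverage is about k times the genome-wide median is inferred to occur roughly k times in the genome.

```python
import statistics

# Toy copy-number inference from relative coverage. The coverage
# values are invented demo data, not output of the tool.
region_coverage = {"ctgA": 31, "ctgB": 29, "ctgC": 92, "ctgD": 30}
median_cov = statistics.median(region_coverage.values())

for name, cov in region_coverage.items():
    copies = max(1, round(cov / median_cov))
    print(f"{name}: ~{copies} copies (coverage {cov})")
```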
Metagenomics of Thermophiles with a Focus on Discovery of Novel Thermozymes
DeCastro, María-Eugenia; Rodríguez-Belmonte, Esther; González-Siso, María-Isabel
2016-01-01
Microbial populations living in environments with temperatures above 50°C (thermophiles) have been widely studied, increasing our knowledge in the composition and function of these ecological communities. Since these populations express a broad number of heat-resistant enzymes (thermozymes), they also represent an important source for novel biocatalysts that can be potentially used in industrial processes. The integrated study of the whole-community DNA from an environment, known as metagenomics, coupled with the development of next generation sequencing (NGS) technologies, has allowed the generation of large amounts of data from thermophiles. In this review, we summarize the main approaches commonly utilized for assessing the taxonomic and functional diversity of thermophiles through metagenomics, including several bioinformatics tools and some metagenome-derived methods to isolate their thermozymes. PMID:27729905
Baryon asymmetry from primordial black holes
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Iso, Satoshi
2017-03-01
We propose a new scenario of baryogenesis from primordial black holes (PBH). Assuming the presence of microscopic baryon (or lepton) number violation, and the presence of an effective CP-violating operator such as ∂_α F(R, …) J^α, where F(R, …) is a scalar function of the Riemann tensor and J^α is a baryonic (leptonic) current, the time evolution of an evaporating black hole generates a baryonic (leptonic) chemical potential at the horizon; consequently the PBH emanates asymmetric Hawking radiation between baryons (leptons) and antibaryons (antileptons). Though the operator is higher-dimensional and largely suppressed by a high mass scale M_*, we show that a sufficient amount of asymmetry can be generated for a wide range of parameters of the PBH mass M_PBH, its abundance Ω_PBH, and the scale M_*.
Super short term forecasting of photovoltaic power generation output in micro grid
NASA Astrophysics Data System (ADS)
Gong, Cheng; Ma, Longfei; Chi, Zhongjun; Zhang, Baoqun; Jiao, Ran; Yang, Bing; Chen, Jianshu; Zeng, Shuang
2017-01-01
A prediction model combining data mining and support vector machines (SVM) was built. It provides information on photovoltaic (PV) power generation output for the economic operation and optimal control of a microgrid, and reduces the influence of PV fluctuation on the power system. Because PV output depends on radiation intensity, ambient temperature, cloudiness, etc., data mining was brought in. This technology can process large amounts of historical data and eliminate superfluous data, using a fuzzy classifier of daily type and grey relational degree. The SVM model was built to dock with the information from data mining. The prediction model was tested on measured data from a small PV station. The numerical example shows that the prediction model is fast and accurate.
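A hedged sketch of the two-stage scheme, with placeholder data: historical days are grouped by day type (a simple grouping standing in for the fuzzy classifier and grey relational degree selection), and an SVM regressor is fitted per group on weather features.

```python
import numpy as np
from sklearn.svm import SVR

# columns: [radiation intensity, ambient temperature, cloudiness]
X_hist = np.array([[800, 25, 0.1], [650, 22, 0.4], [820, 26, 0.1],
                   [300, 18, 0.9], [640, 23, 0.4], [310, 17, 0.8]])
y_hist = np.array([95.0, 70.0, 97.0, 22.0, 68.0, 24.0])  # kW output
day_type = np.array([0, 1, 0, 2, 1, 2])  # sunny / partly cloudy / overcast

models = {}
for t in np.unique(day_type):
    m = SVR(kernel="rbf", C=10.0)
    m.fit(X_hist[day_type == t], y_hist[day_type == t])
    models[t] = m

tomorrow = np.array([[700, 24, 0.3]])
print(models[1].predict(tomorrow))  # forecast using the matching day type
```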
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
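As a rough illustration of the simulation module's role (not the CLS code itself), the sketch below draws load components from distributions that a knowledge base might supply and accumulates an ensemble; all component names and distributions are made up.

```python
import random

KNOWLEDGE_BASE = {  # hypothetical load components: (mean, std dev)
    "chamber_pressure_MPa": (20.0, 1.5),
    "mixture_ratio": (6.0, 0.2),
    "vibration_g": (5.0, 1.0),
}

def simulate_loads(n):
    # one dict of sampled load values per simulated run
    return [{k: random.gauss(mu, sd)
             for k, (mu, sd) in KNOWLEDGE_BASE.items()}
            for _ in range(n)]

spectra = simulate_loads(1000)
worst = max(r["chamber_pressure_MPa"] for r in spectra)
print(f"max sampled chamber pressure: {worst:.1f} MPa")
```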
Sampling populations of humans across the world: ELSI issues.
Knoppers, Bartha Maria; Zawati, Ma'n H; Kirby, Emily S
2012-01-01
There are an increasing number of population studies collecting data and samples to illuminate gene-environment contributions to disease risk and health. The rising affordability of innovative technologies capable of generating large amounts of data helps achieve statistical power and has paved the way for new international research collaborations. Most data and sample collections can be grouped into longitudinal, disease-specific, or residual tissue biobanks, with accompanying ethical, legal, and social issues (ELSI). Issues pertaining to consent, confidentiality, and oversight cannot be examined using a one-size-fits-all approach-the particularities of each biobank must be taken into account. It remains to be seen whether current governance approaches will be adequate to handle the impact of next-generation sequencing technologies on communication with participants in population biobanking studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajamani, S.
The leather industry is an important export-oriented industry in India, with more than 3,000 tanneries located in different clusters. Sodium sulfide, a toxic chemical, is used in large quantities to remove hair and excess flesh from hides and skins. Most of the sodium sulfide used in the process is discharged as waste in the effluent, which causes serious environmental problems. Reduction of sulfide in the effluent is generally achieved by means of chemicals in the pretreatment system, which involves aerobic mixing using large amounts of chemicals and high energy, and generating large volumes of sludge. A simple biotechnological system that uses the residual biosludge from the secondary settling tank was developed, and the commercial-scale application established that more than 90% of the sulfide could be reduced in the primary treatment system. In addition to the reduction of sulfide, foul smells, BOD and COD are reduced to a considerable level. 3 refs., 2 figs., 1 tab.
Mohanty, Sanjay K; Saiers, James E; Ryan, Joseph N
2015-08-04
In subsurface soils, colloids are mobilized by infiltrating rainwater, but the source of colloids and the process by which colloids are generated between rainfalls are not clear. We examined the effect of drying duration and the spatial variation of soil permeability on the mobilization of in situ colloids in intact soil cores (fractured and heavily weathered saprolite) during dry-wet cycles. Measuring water flux at multiple sampling ports at the core base, we found that water drained through flow paths of different permeability. The duration of antecedent drying cycles affected the amount of mobilized colloids, particularly in high-flux ports that received water from soil regions with a large number of macro- and mesopores. In these ports, the amount of mobilized colloids increased with increased drying duration up to 2.5 days. For drying durations greater than 2.5 days, the amount of mobilized colloids decreased. In contrast, increasing drying duration had a limited effect on colloid mobilization in low-flux ports, which presumably received water from soil regions with fewer macro- and mesopores. On the basis of these results, we attribute this dependence of colloid mobilization upon drying duration to colloid generation from dry pore walls and distribution of colloids in flow paths, which appear to be sensitive to the moisture content of soil after drying and flow path permeability. The results are useful for improving the understanding of colloid mobilization during fluctuating weather conditions.
Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach
Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.
2016-01-01
Even though microalgal biomass is leading third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focused on establishing high-performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the biomass quantity needed for industrial-scale microalgal biofuel production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells create significant processing costs during dewatering, raising major concerns about the economic viability of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075
2013-01-01
Background Next generation sequencing technologies have greatly advanced many research areas of the biomedical sciences through their capability to generate massive amounts of genetic information at unprecedented rates. The advent of next generation sequencing has led to the development of numerous computational tools to analyze and assemble the millions to billions of short sequencing reads produced by these technologies. While these tools filled an important gap, current approaches for storing, processing, and analyzing short read datasets generally have remained simple and lack the complexity needed to efficiently model the produced reads and assemble them correctly. Results Previously, we presented an overlap graph coarsening scheme for modeling read overlap relationships on multiple levels. Most current read assembly and analysis approaches use a single graph or set of clusters to represent the relationships among a read dataset. Instead, we use a series of graphs to represent the reads and their overlap relationships across a spectrum of information granularity. At each information level our algorithm is capable of generating clusters of reads from the reduced graph, forming an integrated graph modeling and clustering approach for read analysis and assembly. Previously we applied our algorithm to simulated and real 454 datasets to assess its ability to efficiently model and cluster next generation sequencing data. In this paper we extend our algorithm to large simulated and real Illumina datasets to demonstrate that our algorithm is practical for both sequencing technologies. Conclusions Our overlap graph theoretic algorithm is able to model next generation sequencing reads at various levels of granularity through the process of graph coarsening. Additionally, our model allows for efficient representation of the read overlap relationships, is scalable for large datasets, and is practical for both Illumina and 454 sequencing technologies. PMID:24564333
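As a toy illustration of a single coarsening step (invented overlap weights, networkx for brevity), the heaviest-overlap pair of reads is merged into one node; iterating this yields the series of progressively coarser graphs the authors describe.

```python
import networkx as nx

# Toy overlap graph: nodes are reads, edge weights are overlap lengths
# (invented demo values).
G = nx.Graph()
G.add_weighted_edges_from([
    ("r1", "r2", 90), ("r2", "r3", 85), ("r3", "r4", 20), ("r4", "r5", 88),
])

def coarsen_once(g):
    # merge the pair of nodes joined by the heaviest overlap edge
    u, v, _ = max(g.edges(data="weight"), key=lambda e: e[2])
    merged = nx.contracted_nodes(g, u, v, self_loops=False)
    return nx.relabel_nodes(merged, {u: f"{u}+{v}"})

level1 = coarsen_once(G)
print(level1.nodes())  # the r1+r2 cluster formed at the first level
```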
Bubble accumulation and its role in the evolution of magma reservoirs in the upper crust.
Parmigiani, A; Faroughi, S; Huber, C; Bachmann, O; Su, Y
2016-04-28
Volcanic eruptions transfer huge amounts of gas to the atmosphere. In particular, the sulfur released during large silicic explosive eruptions can induce global cooling. A fundamental goal in volcanology, therefore, is to assess the potential for eruption of the large volumes of crystal-poor, silicic magma that are stored at shallow depths in the crust, and to obtain theoretical bounds for the amount of volatiles that can be released during these eruptions. It is puzzling that highly evolved, crystal-poor silicic magmas are more likely to generate volcanic rocks than plutonic rocks. This observation suggests that such magmas are more prone to erupting than are their crystal-rich counterparts. Moreover, well studied examples of largely crystal-poor eruptions (for example, Katmai, Taupo and Minoan) often exhibit a release of sulfur that is 10 to 20 times higher than the amount of sulfur estimated to be stored in the melt. Here we argue that these two observations rest on how the magmatic volatile phase (MVP) behaves as it rises buoyantly in zoned magma reservoirs. By investigating the fluid dynamics that controls the transport of the MVP in crystal-rich and crystal-poor magmas, we show how the interplay between capillary stresses and the viscosity contrast between the MVP and the host melt results in a counterintuitive dynamics, whereby the MVP tends to migrate efficiently in crystal-rich parts of a magma reservoir and accumulate in crystal-poor regions. The accumulation of low-density bubbles of MVP in crystal-poor magmas has implications for the eruptive potential of such magmas, and is the likely source of the excess sulfur released during explosive eruptions.
Wang, Michael T M; Bolland, Mark J; Gamble, Greg; Grey, Andrew
2015-01-01
Publication of clinical research findings in prominent journals influences health beliefs and medical practice, in part by engendering news coverage. Randomized controlled trials (RCTs) should be most influential in guiding clinical practice. We determined whether study design of clinical research published in high-impact journals influences media coverage. We compared the incidence and amount of media coverage of RCTs with that of observational studies published in the top 7 medical journals between 1 January 2013 and 31 March 2013. We specifically assessed media coverage of the most rigorous RCTs, those with >1000 participants that reported 'hard' outcomes. There was no difference between RCTs and observational studies in coverage by major newspapers or news agencies, or in total number of news stories generated (all P>0.63). Large RCTs reporting 'hard' outcomes did not generate more news coverage than small RCTs that reported surrogate outcomes and observational studies (all P>0.32). RCTs were more likely than observational studies to attract a journal editorial (70% vs 46%, P = 0.003), but less likely to be the subject of a journal press release (17% vs 50%, P<0.001). Large RCTs that reported 'hard' outcomes did not attract an editorial more frequently than other studies (61% vs 58%, P>0.99), nor were they more likely to be the subject of a journal press release (14% vs 38%, P = 0.14). The design of clinical studies whose results are published in high-impact medical journals is not associated with the likelihood or amount of ensuing news coverage.
Heterogeneous recombination among Hepatitis B virus genotypes.
Castelhano, Nadine; Araujo, Natalia M; Arenas, Miguel
2017-10-01
The rapid evolution of Hepatitis B virus (HBV) through both evolutionary forces, mutation and recombination, allows this virus to generate a large variety of adapted variants at both intra- and inter-host levels. It can, for instance, generate drug resistance or the diverse viral genotypes that currently exist in the HBV epidemics. Concerning the latter, it is known that recombination played a major role in the emergence and genetic diversification of novel genotypes. In this regard, the quantification of viral recombination in each genotype can provide relevant information to devise expectations about the evolutionary trends of the epidemic. Here we measured the amount of this evolutionary force by estimating global and local recombination rates in >4700 HBV complete genome sequences corresponding to nine (A to I) HBV genotypes. Counterintuitively, we found that genotype E presents extremely high levels of recombination, followed by genotypes B and C. On the other hand, genotype G presents the lowest level, where recombination is almost negligible. We discuss these findings in the light of known characteristics of these genotypes. Additionally, we present a phylogenetic network to depict the evolutionary history of the studied HBV genotypes. This network clearly classified all genotypes into specific groups and indicated that diverse pairs of genotypes are derived from a common ancestor (i.e., C-I, D-E and F-H), although the origin of this virus remains highly uncertain. Altogether we conclude that the amount of observed recombination is heterogeneous among HBV genotypes and that this heterogeneity can influence the future expansion of the epidemic. Copyright © 2017 Elsevier B.V. All rights reserved.
Destruction of Navy Hazardous Wastes by Supercritical Water Oxidation
1994-08-01
Navy hazardous waste streams include those from cleaning and derusting (nitrite and citric acid solutions), electroplating (acids and metal-bearing solutions), and electronics and refrigeration operations. Waste streams that contain a large amount of mineral-acid-forming chemical species or that contain a large amount of dissolved solids present a challenge to current SCWO technology. Approved for public release.
Empirical relationships between tree fall and landscape-level amounts of logging and fire.
Lindenmayer, David B; Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C
2018-01-01
Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape.
NASA Astrophysics Data System (ADS)
Watanabe, Sho; Furuichi, Toru; Ishii, Kazuei
This study proposed a method for estimating the collectable amount of food waste that considers both the generators' cooperation ratio and the amount of food waste generated, and clarified the factors influencing the collectable amount. In our method, the cooperation ratio was calculated using a binary logit model, which is often used for multiple-choice problems in traffic research. To develop a more precise binary logit model, the factors influencing the cooperation ratio were first extracted through a questionnaire survey of food waste generators' intentions, and a preference investigation was then conducted as a second step. As a result, the collectable amount of food waste in the Ishikari Bay New Port area was estimated to be 72 t/day under the current collection system. In addition, the most critical factor influencing the collectable amount of food waste was the treatment fee for households, and the permitted degree of mixture of improper materials for the retail trade and restaurant businesses.
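A minimal sketch of the estimation chain described, with assumed (not the study's) coefficients and data: a binary logit model converts each generator's attributes into a cooperation probability, and the collectable amount is the cooperation-weighted sum of generated waste.

```python
import math

BETA = {"intercept": 0.5, "fee_per_kg": -30.0, "impurity_allowance": 2.0}

def cooperation_prob(fee_per_kg, impurity_allowance):
    u = (BETA["intercept"] + BETA["fee_per_kg"] * fee_per_kg
         + BETA["impurity_allowance"] * impurity_allowance)
    return 1.0 / (1.0 + math.exp(-u))  # binary logit choice probability

generators = [  # (waste generated t/day, fee $/kg, allowed impurity fraction)
    (1.2, 0.02, 0.05), (0.8, 0.02, 0.05), (2.5, 0.03, 0.10),
]
collectable = sum(w * cooperation_prob(f, a) for w, f, a in generators)
print(f"collectable: {collectable:.2f} t/day")
```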
NASA Astrophysics Data System (ADS)
Toftum, J.; Freund, S.; Salthammer, T.; Weschler, C. J.
This study examined the formation and growth of secondary organic aerosols (SOA) generated when ozone was added to a 1 m3 glass chamber that contained either pine shelving, oriented strand board (OSB), beech boards, or beech boards painted with an "eco" paint. The experiments were conducted at close to real-world conditions; the chamber was ventilated at ~0.5 air changes/h; the loadings (exposed surface of building materials to chamber volume) were in the range of 1-2.5 m2 m-3; and the initial O3 concentrations were between 15 and 40 ppb. Throughout each experiment particles were measured with both a condensation nuclei counter and an optical counter, while terpenes were measured before and after the ozone exposure period using sorbent tubes. The pine boards emitted primarily α-pinene and 3-carene and lesser amounts of 5 other terpenes; when O3 was introduced, the particle counts increased dramatically; the mass concentration reached ~15 μg m-3 at ~20 ppb O3, and ~95 μg m-3 at ~40 ppb O3. The OSB emitted primarily limonene and α-pinene. Although the particle counts increased when O3 was introduced, the increase was not as large as anticipated based on the terpene concentrations. The beech boards emitted negligible quantities of terpenes, and the introduction of O3 resulted in almost no increase in the particle concentration. Beech boards painted with an "eco" paint emitted large amounts of limonene and lesser amounts of carvone; upon introduction of O3 the particle counts increased sharply, with the mass concentration reaching ~20 μg m-3 at ~15 ppb O3 and ~160 μg m-3 at ~35 ppb O3. These experiments demonstrate that the emission of terpenes and the potential generation of SOA vary greatly among different types of wood and pressed wood materials. In the case of the pine boards and painted beech boards, the SOA concentrations generated at modest O3 concentrations approach or exceed current guideline levels for PM2.5 established by the US EPA and the World Health Organization.
Sagace: A web-based search engine for biomedical databases in Japan
2012-01-01
Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the number of life science databases is now too large for researchers to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
A stochastic simulator of birth-death master equations with application to phylodynamics.
Vaughan, Timothy G; Drummond, Alexei J
2013-06-01
In this article, we present a versatile new software tool for the simulation and analysis of stochastic models of population phylodynamics and chemical kinetics. Models are specified via an expressive and human-readable XML format and can be used as the basis for generating either single population histories or large ensembles of such histories. Importantly, phylogenetic trees or networks can be generated alongside the histories they correspond to, enabling investigations into the interplay between genealogies and population dynamics. Summary statistics such as means and variances can be recorded in place of the full ensemble, allowing for a reduction in the amount of memory used--an important consideration for models including large numbers of individual subpopulations or demes. In the case of population size histories, the resulting simulation output is written to disk in the flexible JSON format, which is easily read into numerical analysis environments such as R for visualization or further processing. Simulated phylogenetic trees can be recorded using the standard Newick or NEXUS formats, with extensions to these formats used for non-tree-like inheritance relationships.
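The tool itself is driven by XML model specifications; as a minimal illustration of the kind of model it simulates, the sketch below runs the textbook stochastic simulation (Gillespie) algorithm on a linear birth-death process, with arbitrary demo rates.

```python
import random

def birth_death(n0, birth, death, t_max):
    """Exact stochastic simulation of a linear birth-death process."""
    t, n, history = 0.0, n0, [(0.0, n0)]
    while t < t_max and n > 0:
        total_rate = (birth + death) * n
        t += random.expovariate(total_rate)  # waiting time to next event
        if t > t_max:
            break
        n += 1 if random.random() < birth / (birth + death) else -1
        history.append((t, n))
    return history

traj = birth_death(n0=10, birth=1.0, death=0.9, t_max=5.0)
print(f"{len(traj) - 1} events, final population {traj[-1][1]}")
```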
A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web
de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández
2014-01-01
Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678
Big data analytics as a service infrastructure: challenges, desired properties and solutions
NASA Astrophysics Data System (ADS)
Martín-Márquez, Manuel
2015-12-01
CERN's accelerator complex generates a very large amount of data. A large volume of heterogeneous data is constantly generated from control equipment and monitoring agents. These data must be stored and analysed. Over the decades, CERN's research and engineering teams have applied different approaches, techniques and technologies for this purpose. This situation has limited the necessary collaboration and, more relevantly, cross-domain data analytics. These two factors are essential to unlock hidden insights and correlations between the underlying processes, which enable better and more efficient day-to-day accelerator operations and more informed decisions. The proposed Big Data Analytics as a Service Infrastructure aims to: (1) integrate the existing developments; (2) centralise and standardise the complex data analytics needs of CERN's research and engineering community; (3) deliver real-time and batch data analytics and information discovery capabilities; and (4) provide transparent access and Extract, Transform and Load (ETL) mechanisms to the various mission-critical existing data repositories. This paper presents the desired objectives and properties resulting from the analysis of CERN's data analytics requirements; the main challenges, which are technological, collaborative and educational; and potential solutions.
A new generation scanning system for the high-speed analysis of nuclear emulsions
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Buonaura, A.; Consiglio, L.; D'Ambrosio, N.; De Lellis, G.; Di Crescenzo, A.; Galati, G.; Lauria, A.; Montesi, M. C.; Tioukov, V.; Vladymyrov, M.
2016-06-01
The development of automatic scanning systems was a fundamental issue for large-scale neutrino detectors exploiting nuclear emulsions as particle trackers. Such systems speed up the event analysis in emulsion significantly, making experiments with unprecedented statistics feasible. In the early 1990s, R&D programs carried out by Japanese and European laboratories led to increasingly efficient automatic scanning systems. Recent progress in the technology of digital signal processing and image acquisition allows the development of new systems with higher performance. In this paper we report the description and performance of a new-generation scanning system able to operate at the record speed of 84 cm2/hour, based on the Large Angle Scanning System for OPERA (LASSO) software infrastructure developed by the Naples scanning group. This improvement reduces the scanning time by a factor of 4 with respect to the available systems, allowing the readout of huge amounts of nuclear emulsions in a reasonable time. This opens new perspectives for the employment of such detectors in a wider variety of applications.
Grandjean, Geoffrey; Graham, Ryan; Bartholomeusz, Geoffrey
2011-11-01
In recent years high throughput screening operations have become a critical application in functional and translational research. Although a seemingly unmanageable amount of data is generated by these high-throughput, large-scale techniques, through careful planning, an effective Laboratory Information Management System (LIMS) can be developed and implemented in order to streamline all phases of a workflow. Just as important as data mining and analysis procedures at the end of complex processes is the tracking of individual steps of applications that generate such data. Ultimately, the use of a customized LIMS will enable users to extract meaningful results from large datasets while trusting the robustness of their assays. To illustrate the design of a custom LIMS, this practical example is provided to highlight the important aspects of the design of a LIMS to effectively modulate all aspects of an siRNA screening service. This system incorporates inventory management, control of workflow, data handling and interaction with investigators, statisticians and administrators. All these modules are regulated in a synchronous manner within the LIMS. © 2011 Bentham Science Publishers
Responses to Oxidative and Heavy Metal Stresses in Cyanobacteria: Recent Advances
Cassier-Chauvat, Corinne; Chauvat, Franck
2014-01-01
Cyanobacteria, the only known prokaryotes that perform oxygen-evolving photosynthesis, are receiving strong attention in basic and applied research. In using solar energy, water, CO2 and mineral salts to produce a large amount of biomass for the food chain, cyanobacteria constitute the first biological barrier against the entry of toxics into the food chain. In addition, cyanobacteria have the potential for the solar-driven carbon-neutral production of biofuels. However, cyanobacteria are often challenged by toxic reactive oxygen species generated under intense illumination, i.e., when their production of photosynthetic electrons exceeds what they need for the assimilation of inorganic nutrients. Furthermore, in requiring high amounts of various metals for growth, cyanobacteria are also frequently affected by drastic changes in metal availabilities. They are often challenged by heavy metals, which are increasingly spread out in the environment through human activities, and constitute persistent pollutants because they cannot be degraded. Consequently, it is important to analyze the protection against oxidative and metal stresses in cyanobacteria because these ancient organisms have developed most of these processes, a large number of which have been conserved during evolution. This review summarizes what is known regarding these mechanisms, emphasizing their crosstalk. PMID:25561236
NASA Technical Reports Server (NTRS)
1986-01-01
Emerging satellite designs require increasing amounts of electrical power to operate spacecraft instruments and to provide environments suitable for human habitation. In the past, electrical power was generated by covering rigid honeycomb panels with solar cells. This technology results in unacceptable weight and volume penalties when large amounts of power are required. To fill the need for large-area, lightweight solar arrays, a fabrication technique in which solar cells are attached to a copper printed circuit laminated to a plastic sheet was developed. The result is a flexible solar array with one-tenth the stowed volume and one-third the weight of comparably sized rigid arrays. An automated welding process developed to attach the cells to the printed circuit guarantees repeatable welds that are more tolerant of severe environments than conventional soldered connections. To demonstrate the flight readiness of this technology, the Solar Array Flight Experiment (SAFE) was developed and flown on the space shuttle Discovery in September 1984. The tests showed the modes and frequencies of the array to be very close to preflight predictions. Structural damping, however, was higher than anticipated. Electrical performance of the active solar panel was also tested. The flight performance and postflight data evaluation are described.
Tripropellant combustion process
NASA Technical Reports Server (NTRS)
Kmiec, T. D.; Carroll, R. G.
1988-01-01
The addition of small amounts of hydrogen to the combustion of LOX/hydrocarbon propellants in large rocket booster engines has the potential to enhance the system stability. Programs being conducted to evaluate the effects of hydrogen on the combustion of LOX/hydrocarbon propellants at supercritical pressures are described. Combustion instability has been a problem during the development of large hydrocarbon fueled rocket engines. At the higher combustion chamber pressures expected for the next generation of booster engines, the effect of unstable combustion could be even more destructive. The tripropellant engine cycle takes advantage of the superior cooling characteristics of hydrogen to cool the combustion chamber and a small amount of the hydrogen coolant can be used in the combustion process to enhance the system stability. Three aspects of work that will be accomplished to evaluate tripropellant combustion are described. The first is laboratory demonstration of the benefits through the evaluation of drop size, ignition delay and burning rate. The second is analytical modeling of the combustion process using the empirical relationship determined in the laboratory. The third is a subscale demonstration in which the system stability will be evaluated. The approach for each aspect is described and the analytical models that will be used are presented.
Distributed memory parallel Markov random fields using graph partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinemann, C.; Perciano, T.; Ushizima, D.
Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
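For readers unfamiliar with the distribution strategy, the following minimal sketch shows the general pattern of splitting an image across MPI ranks for local MRF-style updates. It is an illustration of the approach, not the MPI-PMRF code; the toy smoothing update is an assumption, and a real implementation would also exchange halo rows between neighboring ranks.

```python
# Illustrative sketch (not MPI-PMRF): distribute an image across MPI ranks
# for a local MRF-style smoothing pass, assuming mpi4py and numpy.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Root loads the image and splits it into horizontal strips, one per rank.
strips = None
if rank == 0:
    image = np.random.rand(512, 512)          # stand-in for experimental data
    strips = np.array_split(image, size, axis=0)
strip = comm.scatter(strips, root=0)

# Each rank runs a simple iterated-conditional-modes style update locally:
# pull each pixel toward the mean of its 4-neighborhood (a toy MRF prior).
for _ in range(10):
    padded = np.pad(strip, 1, mode="edge")
    neighbor_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    strip = 0.5 * strip + 0.5 * neighbor_mean  # data term + smoothness term

# Gather the processed strips back on the root rank.
result = comm.gather(strip, root=0)
if rank == 0:
    segmented = np.vstack(result)
```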
Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.
2009-12-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
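The benefit of tiled serving comes down to simple index arithmetic: only the tiles intersecting the requested viewport need to be decoded and shipped. The sketch below illustrates this idea with an assumed tile size; it is not Helioviewer's actual API.

```python
# Minimal sketch of viewport -> tile requests for a tiled image server, in
# the spirit of JPEG 2000 tiling (tile size and interface are assumptions).
from math import floor

def tiles_for_viewport(x, y, width, height, tile_size=512):
    """Return (col, row) indices of every tile intersecting the viewport."""
    first_col, first_row = floor(x / tile_size), floor(y / tile_size)
    last_col = floor((x + width - 1) / tile_size)
    last_row = floor((y + height - 1) / tile_size)
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 1000x800 viewport at offset (1400, 300) touches only 9 of the 64 tiles
# of a 4096x4096 full-disk image, so only those need to be decoded and sent.
print(tiles_for_viewport(1400, 300, 1000, 800))
```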
Procedures of determining organic trace compounds in municipal sewage sludge-a review.
Lindholm-Lehto, Petra C; Ahkola, Heidi S J; Knuutinen, Juha S
2017-02-01
Sewage sludge is the largest by-product generated during the wastewater treatment process. Since large amounts of sludge are being produced, different ways of disposal have been introduced. One tempting option is to use it as fertilizer in agricultural fields due to its high content of inorganic nutrients. This, however, can be limited by the amount of trace contaminants in the sewage sludge, which contains a variety of microbiological pollutants and pathogens as well as inorganic and organic contaminants. The bioavailability and the effects of trace contaminants on soil microorganisms are still largely unknown, as are their mixture effects. Therefore, there is a need to analyze the sludge to test its suitability before further use. In this article, a variety of sampling, pretreatment, extraction, and analysis methods have been reviewed. Additionally, different organic trace compounds often found in sewage sludge and their methods of analysis have been compiled. In addition to traditional Soxhlet extraction, the most common extraction methods for organic contaminants in sludge include ultrasonic extraction (USE), supercritical fluid extraction (SFE), microwave-assisted extraction (MAE), and pressurized liquid extraction (PLE), followed by instrumental analysis based on gas or liquid chromatography and mass spectrometry.
Modeling tsunamis induced by retrogressive submarine landslides
NASA Astrophysics Data System (ADS)
Løvholt, F.; Kim, J.; Harbitz, C. B.
2015-12-01
Enormous submarine landslides having volumes up to thousands of km3 and long run-out may cause tsunamis with widespread effects. Clay-rich landslides, such as Trænadjupet and Storegga offshore Norway, commonly involve retrogressive mass and momentum release mechanisms that affect the tsunami generation. Therefore, such landslides may involve a large number of smaller blocks. As a consequence, the failure mechanisms and release rate of the individual blocks are of importance for the tsunami generation. Previous attempts to model the tsunami generation due to retrogressive landslides are few, and limited to idealized conditions. Here, we review the basic effects of retrogression on tsunamigenesis in simple geometries. To this end, two different methods are employed for the landslide motion: a series of blocks with prescribed time lags and kinematics, and a dynamic retrogressive model in which the inter-block time lag is determined by the model itself. The effect of parameters such as time lag on wave-height, wave-length, and dispersion are discussed. Finally, we discuss how the retrogressive effects may have influenced the tsunamis due to large landslides such as the Storegga slide. The research leading to these results has received funding from the Research Council of Norway under grant number 231252 (Project TsunamiLand) and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE).
Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H
2015-02-06
Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.
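The iterative level-removal idea can be sketched as follows; this is an illustration of the general workflow, not the Qualis-SIS implementation, and the acceptance criterion, function names and data are invented.

```python
# Sketch of iterative standard-curve fitting with level removal: fit a line,
# back-calculate each calibration level, and drop the worst level until all
# remaining levels meet the (assumed) accuracy criterion.
import numpy as np

def fit_standard_curve(conc, response, max_bias_pct=20.0):
    """Fit response = a*conc + b, dropping the worst level until every
    remaining level back-calculates within the bias criterion."""
    conc, response = np.asarray(conc, float), np.asarray(response, float)
    keep = np.ones(conc.size, dtype=bool)
    while keep.sum() > 2:
        a, b = np.polyfit(conc[keep], response[keep], 1)
        back = (response - b) / a                   # back-calculated conc
        bias = 100.0 * np.abs(back - conc) / conc   # % deviation per level
        worst = np.argmax(np.where(keep, bias, -np.inf))
        if bias[worst] <= max_bias_pct:
            return a, b, keep                       # all kept levels pass
        keep[worst] = False                         # drop worst level, refit
    raise ValueError("too few acceptable concentration levels")

conc = [0.5, 1, 5, 10, 50, 100]
resp = [0.9, 2.1, 10.2, 19.8, 101.0, 260.0]  # highest level deviates
a, b, kept = fit_standard_curve(conc, resp)
print(a, b, kept)  # slope, intercept, and which levels survived
```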
CFD modeling of thermoelectric generators in automotive EGR-coolers
NASA Astrophysics Data System (ADS)
Högblom, Olle; Andersson, Ronnie
2012-06-01
A large amount of the waste heat in the exhaust gases from diesel engines is removed in the exhaust gas recirculation (EGR) cooler. Introducing a thermoelectric generator (TEG) in an EGR cooler requires a completely new design of the heat exchanger. To accomplish that, a model of the TEG-EGR system is required. In this work, a transient 3D CFD model for simulation of gas flow, heat transfer and power generation has been developed. This model allows critical design parameters in the TEG-EGR to be identified and design requirements for the system to be specified. Besides the prediction of Seebeck, Peltier, Thomson and Joule effects, the simulations also give detailed insight into the temperature gradients in the gas-phase and inside the thermoelectric (TE) elements. The model is a very valuable tool to identify bottlenecks, improve design, and select optimal TE materials and operating conditions. The results show that the greatest heat transfer resistance is located in the gas phase, and it is critical to reduce this in order to achieve a large temperature difference over the thermoelectric elements without compromising on the maximum allowable pressure drop in the system. Further, results from an investigation of the thermoelectric performance during a vehicle test cycle are presented.
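For orientation, the power a TEG module can deliver follows from the Seebeck voltage and a matched load; the numbers below are generic textbook assumptions, not values from the study.

```python
# Back-of-envelope estimate of thermoelectric power from a temperature
# difference, using the standard matched-load formula. All numbers are
# illustrative assumptions.
S = 200e-6     # Seebeck coefficient per couple, V/K (typical Bi2Te3)
n_couples = 127
R_int = 1.5    # internal resistance of the module, ohm
dT = 150.0     # hot-side minus cold-side temperature, K

V_oc = n_couples * S * dT          # open-circuit voltage
P_max = V_oc**2 / (4.0 * R_int)    # power at matched load, R_L = R_int
print(f"V_oc = {V_oc:.2f} V, P_max = {P_max:.2f} W")
```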
Integrating publicly-available data to generate computationally ...
The adverse outcome pathway (AOP) framework provides a way of organizing knowledge related to the key biological events that result in a particular health outcome. For the majority of environmental chemicals, the availability of curated pathways characterizing potential toxicity is limited. Methods are needed to assimilate large amounts of available molecular data and quickly generate putative AOPs for further testing and use in hazard assessment. A graph-based workflow was used to facilitate the integration of multiple data types to generate computationally-predicted (cp) AOPs. Edges between graph entities were identified through direct experimental or literature information or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways measured by differential gene expression and high-throughput screening targets. Sub-networks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (hepatic steatosis) were extracted using the network topology. Comparison of the cpAOP subnetworks to published mechanistic descriptions for both CCl4 toxicity and hepatic steatosis demonstrates that computational approaches can be used to replicate manually curated AOPs and identify pathway targets that lack genomic markers.
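The computational inference step can be illustrated with a toy co-occurrence (frequent itemset) miner over treatment-to-pathway "transactions"; the data and support threshold below are invented for clarity.

```python
# Toy illustration of the inference step: treat each chemical treatment as a
# "transaction" of perturbed pathways/targets, and add a graph edge between
# items that co-occur frequently. Items and threshold are invented.
from itertools import combinations
from collections import Counter

treatments = {
    "CCl4_high": {"oxidative_stress", "lipid_accumulation", "PPARa"},
    "CCl4_low":  {"oxidative_stress", "PPARa"},
    "chem_B":    {"lipid_accumulation", "PPARa"},
    "chem_C":    {"oxidative_stress", "lipid_accumulation"},
}

min_support = 2  # pair must co-occur in at least 2 treatments
pair_counts = Counter()
for items in treatments.values():
    pair_counts.update(combinations(sorted(items), 2))

edges = [(a, b, n) for (a, b), n in pair_counts.items() if n >= min_support]
for a, b, n in edges:
    print(f"{a} -- {b}  (support={n})")
```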
Implementation of New System for Oxygen Generation and Carbon Dioxide Removal =
NASA Astrophysics Data System (ADS)
Karavolos, Angelo Peter
This research effort develops an integrated system for CO2 removal and O2 production. A unique material, dodeca-tungsto-phosphoric acid (H3PW12O40; henceforth referred to as DTPA), is mixed with tetra-ethyl-ortho-silicate, Si(OC2H5)4 or TEOS. This mixture exhibits unique properties of heat absorption and high electrical conductivity. In the system described herein, the DTPA resides within a cross-linked arrangement of TEOS. The DTPA furnishes a source of O2, while the TEOS furnishes structural support for the large DTPA crystals. In addition, the large amount of H2O within the crystal also adsorbs CO2. The material can also be cross-linked with other polymers, such as polycarbonate, for different applications and properties such as flexible textiles. A set of isolated bench experiments was conducted to test the hypothesis that DTPA can provide CO2 adsorption, O2 production, heat generation and electrical generation. Five experiments with this apparatus were conducted: (1) a mass balance experiment; (2) an X-ray diffraction experiment; (3) a photo spectroscopic experiment; (4) a calorimetric experiment; and (5) a dielectric experiment. Results illustrate that approximately 2880 grams of this material produces 576 grams of O2 and removes 1760 grams of CO2. The reaction also produces approximately 844 kJ/mole of heat and can supply a 12.2 V potential over a period of 4.5 hours. The amount of unused material and the recycling ability suggest the usefulness of the technique for achieving a 50-75% closed system. In addition, an experiment using an 18O tracer demonstrated that approximately 20% of the O2 produced comes from processed CO2 adsorbed by the crystal, while the remaining 80% comes from replaced O2 within the crystal itself. The device has multiple applications including environmental control and life support for aircraft cabins, space vehicle interiors, submarine pressure vessels, sealed armored vehicles, and personal protective equipment for individuals working in confined spaces such as mines.
Genomics Portals: integrative web-platform for mining genomics data.
Shinde, Kaustubh; Phatak, Mukta; Johannes, Freudenberg M; Chen, Jing; Li, Qian; Vineet, Joshi K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario
2010-01-13
A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s -ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
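The trajectory-ensemble biasing that underlies the method can be demonstrated in miniature: for a Poisson growth process, reweighting unbiased trajectories by their event count recovers the known scaled cumulant generating function. This brute-force sketch illustrates the idea only; the paper's umbrella sampling is a far more efficient variant, and all parameters here are invented.

```python
# Brute-force illustration of biasing a trajectory ensemble by its event
# count (the idea behind the s-ensemble). Model and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
rate, T, n_traj = 1.0, 20.0, 100_000

# Unbiased trajectories of a Poisson growth process: K = number of events.
K = rng.poisson(rate * T, size=n_traj)

# Scaled cumulant generating function theta(s) = (1/T) log <exp(-s K)>,
# estimated by reweighting the unbiased sample.
for s in (0.0, 0.2, 0.5):
    theta = np.log(np.mean(np.exp(-s * K))) / T
    exact = rate * (np.exp(-s) - 1.0)   # known result for a Poisson process
    print(f"s={s:.1f}  theta~{theta:+.4f}  exact={exact:+.4f}")
```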
Learning from Massive Distributed Data Sets (Invited)
NASA Astrophysics Data System (ADS)
Kang, E. L.; Braverman, A. J.
2013-12-01
Technologies for remote sensing and ever-expanding computer experiments in climate science are generating massive data sets. Meanwhile, it has become common in all areas of large-scale science to have these 'big data' distributed over multiple different physical locations, and moving large amounts of data can be impractical. In this talk, we will discuss efficient ways to summarize and learn from distributed data. We formulate a graphical model to mimic the main characteristics of a distributed-data network, including the size of the data sets and the speed of moving data. With this nominal model, we investigate the trade-off between prediction accuracy and the cost of data movement, theoretically and through simulation experiments. We will also discuss new implementations of spatial and spatio-temporal statistical methods optimized for distributed data.
Method for forming a chemical microreactor
Morse, Jeffrey D [Martinez, CA; Jankowski, Alan [Livermore, CA
2009-05-19
Disclosed is a chemical microreactor that provides a means to generate hydrogen fuel from liquid sources such as ammonia, methanol, and butane through steam reforming processes when mixed with an appropriate amount of water. The microreactor contains capillary microchannels with integrated resistive heaters to facilitate the occurrence of catalytic steam reforming reactions. Two distinct embodiment styles are discussed. One embodiment style employs a packed catalyst capillary microchannel and at least one porous membrane. Another embodiment style employs a porous membrane with a large surface area or a porous membrane support structure containing a plurality of porous membranes having a large surface area in the aggregate, i.e., greater than about 1 m²/cm³. Various methods to form packed catalyst capillary microchannels, porous membranes and porous membrane support structures are also disclosed.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
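The core sorting idea, ranking input parameters by how differently they are distributed in failed versus successful runs, can be sketched as below; the column names, failure criterion and data are invented, and the actual tool is a parallel GPU implementation.

```python
# Sketch: given many Monte Carlo runs, rank input parameters by how
# differently they distribute in failed vs. successful runs (KS statistic).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n = 5000
params = {
    "mass_error":  rng.normal(0, 1, n),
    "wind_gust":   rng.normal(0, 1, n),
    "sensor_bias": rng.normal(0, 1, n),
}
# Failures here are driven mostly by wind_gust (by construction).
failed = (params["wind_gust"] + 0.1 * rng.normal(0, 1, n)) > 2.0

ranking = sorted(
    ((ks_2samp(v[failed], v[~failed]).statistic, name)
     for name, v in params.items()),
    reverse=True)
for score, name in ranking:
    print(f"{name:12s} KS separation = {score:.3f}")
```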
NASA Technical Reports Server (NTRS)
1996-01-01
Soot, sometimes referred to as smoke, is made up primarily of the carbon particles generated by most combustion processes. For example, large quantities of soot can be seen issuing from the exhaust pipes of diesel-powered vehicles. Heated soot also is responsible for the warm orange color of candle flames, though that soot is generally consumed before it can exit the flame. Research has suggested that heavy atmospheric soot concentrations aggravate conditions such as pneumonia and asthma, causing many deaths each year. To understand the formation and oxidation of soot, NASA Lewis Research Center scientists, together with several university investigators, are investigating the properties of soot generated in reduced gravity, where the absence of buoyancy allows more time for the particles to grow. The increased time allows researchers to better study the life cycle of these particles, with the hope that increased understanding will lead to better control strategies. To quantify the amount of soot present in a flame, Lewis scientists developed a unique imaging technique that provides quantitative and qualitative soot data over a large field of view. This is a significant improvement over the single-point methods normally used. The technique is shown in the sketch, where light from a laser is expanded with a microscope objective, rendered parallel, and passed through a flame where soot particles reduce the amount of light transmitted to the camera. A filter only allows light at the wavelength of the laser to pass to the camera, preventing any extraneous signals. When images of the laser light with and without the flame are compared, a quantitative map of the soot concentration is produced. In addition to that data, a qualitative image of the soot in the flame is also generated, an example of which is displayed in the photo. This technique has the potential to be adapted to real-time process control in industrial power plants.
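The comparison of the two laser images reduces to Beer-Lambert attenuation, from which a per-pixel soot volume fraction can be derived; the wavelength, extinction constant, path length and images below are illustrative assumptions, not the experiment's values.

```python
# Light-extinction reduces to Beer-Lambert attenuation:
# I/I0 = exp(-Ke * fv * L / lam), so a soot volume-fraction map follows
# from the flame and no-flame images. All constants are assumed.
import numpy as np

lam = 632.8e-9        # laser wavelength, m (HeNe assumed)
Ke = 8.6              # dimensionless soot extinction coefficient (assumed)
L = 0.01              # path length through the flame, m (assumed)

I0 = np.full((64, 64), 1000.0)              # laser-only image
I = I0 * np.exp(-np.random.rand(64, 64))    # stand-in for laser+flame image

fv = -lam * np.log(I / I0) / (Ke * L)       # soot volume fraction per pixel
print(f"peak soot volume fraction ~ {fv.max():.2e}")
```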
Tank, David C.
2016-01-01
Advances in high-throughput sequencing (HTS) have allowed researchers to obtain large amounts of biological sequence information at speeds and costs unimaginable only a decade ago. Phylogenetics, and the study of evolution in general, is quickly migrating towards using HTS to generate larger and more complex molecular datasets. In this paper, we present a method that utilizes microfluidic PCR and HTS to generate large amounts of sequence data suitable for phylogenetic analyses. The approach uses the Fluidigm Access Array System (Fluidigm, San Francisco, CA, USA) and two sets of PCR primers to simultaneously amplify 48 target regions across 48 samples, incorporating sample-specific barcodes and HTS adapters (2,304 unique amplicons per Access Array). The final product is a pooled set of amplicons ready to be sequenced, and thus, there is no need to construct separate, costly genomic libraries for each sample. Further, we present a bioinformatics pipeline to process the raw HTS reads to either generate consensus sequences (with or without ambiguities) for every locus in every sample or—more importantly—recover the separate alleles from heterozygous target regions in each sample. This is important because it adds allelic information that is well suited for coalescent-based phylogenetic analyses that are becoming very common in conservation and evolutionary biology. To test our approach and bioinformatics pipeline, we sequenced 576 samples across 96 target regions belonging to the South American clade of the genus Bartsia L. in the plant family Orobanchaceae. After sequencing cleanup and alignment, the experiment resulted in ~25,300 bp across 486 samples for a set of 48 primer pairs targeting the plastome, and ~13,500 bp for 363 samples for a set of primers targeting regions in the nuclear genome. Finally, we constructed a combined concatenated matrix from all 96 primer combinations, resulting in a combined aligned length of ~40,500 bp for 349 samples. PMID:26828929
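The consensus-calling step of such a pipeline can be illustrated in a toy form: majority base per alignment column, with an IUPAC ambiguity code when two alleles are nearly tied. The threshold is invented, and real pipelines also handle base quality and indels.

```python
# Toy consensus caller: majority base per alignment column, with IUPAC
# ambiguity codes when two bases are near-tied. Threshold is illustrative.
from collections import Counter

IUPAC = {frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("AC"): "M",
         frozenset("GT"): "K", frozenset("AT"): "W", frozenset("CG"): "S"}

def consensus(reads, min_frac=0.7):
    out = []
    for column in zip(*reads):
        counts = Counter(b for b in column if b != "-")
        (b1, n1), *rest = counts.most_common(2) + [(None, 0)]
        total = sum(counts.values())
        if n1 / total >= min_frac:
            out.append(b1)                     # clear majority base
        else:
            b2 = rest[0][0]
            out.append(IUPAC.get(frozenset((b1, b2)), "N"))
    return "".join(out)

reads = ["ACGTA", "ACGTA", "ATGTA", "ACG-A"]
print(consensus(reads))  # -> "ACGTA" (position 2 would get "Y" if C/T tied)
```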
Goikoetxea, Estibalitz; Murgia, Xabier; Serna-Grande, Pablo; Valls-i-Soler, Adolf; Rey-Santano, Carmen; Rivas, Alejandro; Antón, Raúl; Basterretxea, Francisco J.; Miñambres, Lorena; Méndez, Estíbaliz; Lopez-Arraiza, Alberto; Larrabe-Barrena, Juan Luis; Gomez-Solaetxe, Miguel Angel
2014-01-01
Objective: Aerosol delivery holds potential to release surfactant or perfluorocarbon (PFC) to the lungs of neonates with respiratory distress syndrome with minimal airway manipulation. Nevertheless, lung deposition in neonates tends to be very low due to extremely low lung volumes, narrow airways and high respiratory rates. In the present study, the feasibility of enhancing lung deposition by intracorporeal delivery of aerosols was investigated using a physical model of neonatal conducting airways. Methods: The main characteristics of the surfactant and PFC aerosols produced by a nebulization system, including the distal air pressure and air flow rate, liquid flow rate and mass median aerodynamic diameter (MMAD), were measured at different driving pressures (4–7 bar). Then, a three-dimensional model of the upper conducting airways of a neonate was manufactured by rapid prototyping and a deposition study was conducted. Results: The nebulization system produced relatively large amounts of aerosol ranging between 0.3±0.0 ml/min for surfactant at a driving pressure of 4 bar, and 2.0±0.1 ml/min for distilled water (H2Od) at 6 bar, with MMADs between 2.61±0.1 µm for PFD at 7 bar and 10.18±0.4 µm for FC-75 at 6 bar. The deposition study showed that for surfactant and H2Od aerosols, the highest percentage of the aerosolized mass (∼65%) was collected beyond the third generation of branching in the airway model. The use of this delivery system in combination with continuous positive airway pressure set at 5 cmH2O only increased total airway pressure by 1.59 cmH2O at the highest driving pressure (7 bar). Conclusion: This aerosol generating system has the potential to deliver relatively large amounts of surfactant and PFC beyond the third generation of branching in a neonatal airway model with minimal alteration of pre-set respiratory support. PMID:25211475
Hosseini-Nasab, S M; Zitha, P L J
2017-10-19
Strong foam can be generated in porous media containing oil, resulting in incremental oil recovery; however, the oil recovery factor is restricted. A large fraction of oil recovered by foam flooding forms an oil-in-water emulsion, so that costly methods may need to be used to separate the oil. Moreover, strong foam could create a large pressure gradient, which may cause fractures in the reservoir. This study presents a novel chemical-foam flooding process for enhanced oil recovery (EOR) from water-flooded reservoirs. The presented method involved the use of chemically designed foam to mobilize the remaining oil after water flooding and then to displace the mobilized oil to the production well. A blend of two anionic surfactant formulations was developed for this method: (a) IOS, for achieving ultralow interfacial tension (IFT), and (b) AOS, for generating a strong foam. Experiments were performed using Bentheimer sandstone cores, where X-ray CT images were taken during foam generation to find the stability of the advancing front of foam propagation and to map the gas saturation for both the transient and the steady-state flow regimes. Then the proposed chemical-foam strategy for incremental oil recovery was tested through the coinjection of immiscible nitrogen gas and surfactant solutions with three different formulation properties in terms of IFT reduction and foaming strength capability. The discovered optimal formulation contains a foaming agent surfactant, a low-IFT surfactant, and a cosolvent, which has a high foam stability and a considerably low IFT (1.6 × 10⁻² mN/m). Coinjection resulted in higher oil recovery and a much lower MRF than the same process using only a foaming agent. The oil displacement experiment revealed that coinjection of gas with a blend of surfactants, containing a cosolvent, can recover a significant amount of oil (33% OIIP) over water flooding, with a larger amount of clean oil and less emulsion.
NASA Astrophysics Data System (ADS)
Pascale, C.; Guillevic, M.; Ackermann, A.; Leuenberger, D.; Niederhauser, B.
2017-12-01
To answer the needs of air quality and climate monitoring networks, two new gas generators were developed and manufactured at METAS in order to dynamically generate SI-traceable reference gas mixtures for reactive compounds at atmospheric concentrations. The technical features of the transportable generators allow for the realization of such gas standards for reactive compounds (e.g. NO2, volatile organic compounds) in the nmol·mol⁻¹ range (ReGaS2), and fluorinated gases in the pmol·mol⁻¹ range (ReGaS3). The generation method is based on permeation and dynamic dilution. The transportable generators have multiple individual permeation chambers allowing for the generation of mixtures containing up to five different compounds. This mixture is then diluted using mass flow controllers, thus making the production process adaptable to generate the required amount of substance fraction. All parts of ReGaS2 in contact with the gas mixture are coated to reduce adsorption/desorption processes. Each input parameter required to calculate the generated amount of substance fraction is calibrated with SI-primary standards. The stability and reproducibility of the generated amount of substance fractions were tested with NO2 for ReGaS2 and HFC-125 for ReGaS3. They demonstrate stability over 1-4 d better than 0.4% and 0.8%, respectively, and reproducibility better than 0.7% and 1%, respectively. Finally, the relative expanded uncertainty of the generated amount of substance fraction is smaller than 3%, with the major contributions coming from the uncertainty of the permeation rate and/or of the purity of the matrix gas. These relative expanded uncertainties thus meet the needs of the data quality objectives set by the World Meteorological Organization.
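The permeation-plus-dilution principle reduces to a ratio of molar flows, as the worked example below shows; the permeation rate, flow and reference conditions are illustrative assumptions, not METAS values.

```python
# Worked example of the permeation + dynamic dilution principle: the amount
# fraction is the molar flow from the permeation tube divided by the total
# molar flow of dilution gas. All numbers are illustrative.
q_perm = 250e-9 / 60.0      # permeation rate: 250 ng/min of NO2, in g/s
M_NO2 = 46.006              # molar mass of NO2, g/mol
Q_dil = 5.0 / 60000.0       # dilution flow: 5 L/min, in m3/s
Vm = 0.022414               # molar volume at 0 degC, 101.325 kPa, m3/mol

n_analyte = q_perm / M_NO2          # mol/s of NO2
n_total = Q_dil / Vm                # mol/s of matrix gas (>> n_analyte)
x = n_analyte / n_total             # amount-of-substance fraction
print(f"x = {x*1e9:.1f} nmol/mol")  # ~24 nmol/mol
```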
Improvements in nanoscale zero-valent iron production by milling through the addition of alumina
NASA Astrophysics Data System (ADS)
Ribas, D.; Cernik, M.; Martí, V.; Benito, J. A.
2016-07-01
A new milling procedure for the cost-effective production of nanoscale zero-valent iron for environmental remediation is presented. Conventional ball milling of iron in an organic solvent such as monoethylene glycol produces flattened iron particles that are unlikely to break even after very long milling times. With the aim of breaking down these iron flakes, in this new procedure further milling is carried out after adding an amount of fine alumina powder to the previously milled solution. As the amount of added alumina increases from 9 to 54 g l-1, a progressive decrease in the presence of flakes is observed. In the latter case, the appearance of the particles formed by fragments of former flakes is rather homogeneous, with most of the final nanoparticles having an equivalent diameter well below 1 µm and an average particle size in solution of around 400 nm. An additional increase in alumina content results in a highly viscous solution showing a worse particle size distribution. Milled particles, in the case of alumina concentrations of 54 g l-1, have a fairly large specific surface area and high Fe(0) content. These new particles show very good Cr(VI) removal efficiency compared with other commercial products available. This good reactivity is related to the absence of an oxide layer, the large amount of superficial irregularities generated by the repetitive fracture process during milling and the presence of a fine nanostructure within the iron nanoparticles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BLEJWAS,THOMAS E.; SANDERS,THOMAS L.; EAGAN,ROBERT J.
2000-01-01
Nuclear power is an important and, the authors believe, essential component of a secure nuclear future. Although nuclear fuel cycles create materials that have some potential for use in nuclear weapons, with appropriate fuel cycles, nuclear power could reduce rather than increase real proliferation risk worldwide. Future fuel cycles could be designed to avoid plutonium production, generate minimal amounts of plutonium in proliferation-resistant amounts or configurations, and/or transparently and efficiently consume plutonium already created. Furthermore, a strong and viable US nuclear infrastructure, of which nuclear power is a large element, is essential if the US is to maintain a leadership or even participatory role in defining the global nuclear infrastructure and controlling the proliferation of nuclear weapons. By focusing on new fuel cycles and new reactor technologies, it is possible to advantageously burn and reduce nuclear materials that could be used for nuclear weapons rather than increase and/or dispose of these materials. Thus, the authors suggest that planners for a secure nuclear future use technology to design an ideal future. In this future, nuclear power creates large amounts of virtually atmospherically clean energy while significantly lowering the threat of proliferation through the thoughtful use, physical security, and agreed-upon transparency of nuclear materials. The authors must develop options for policy makers that bring them as close as practical to this ideal. Just as Atoms for Peace became the ideal for the first nuclear century, they see a potential nuclear future that contributes significantly to power for peace and prosperity.
Rotor compound concept for designing an industrial HTS synchronous motor
NASA Astrophysics Data System (ADS)
Kashani, M.; Hosseina, M.; Sarrafan, K.; Darabi, A.
2013-06-01
Recently, producing power with lower losses has become a common goal. Today, a large amount of energy is wasted in power networks all around the world, mainly in the resistive electric equipment of those networks. Since the early 1980s, in parallel with the development of high-temperature superconductive (HTS) technology, superconductors have steadily attracted attention. Using superconductive equipment instead of conventional resistive devices results in a substantial reduction of electric losses in power systems. In particular, superconductive industrial rotating machines can potentially play a significant role in reducing network losses. The first generation of HTS rotating machines appeared early in this century, but they still have a long way to go to penetrate commercial markets. In HTS rotating machines the conventional copper windings are replaced with HTS superconductors. In this paper an industrial HTS synchronous motor with YBCO coated-conductor field windings was designed. As a new approach, the model was equipped with a compound rotor that includes both magnetic and non-magnetic materials, so that a large, heavy iron part was replaced by a light non-magnetic material such as G-10 fiberglass. Furthermore, with this structure the iron loss in the rotor could be reduced to its lowest value. Lower weight and higher air-gap energy density were additional advantages. Given the zero electric loss in the field windings and the lower iron loss in the rotor, this model is potentially more effective than other iron-made HTS motors.
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have small marginal effects in the population and are unlikely to be retained after univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. The more recent random forest method provides a more robust way of screening SNPs at the scale of thousands. However, for larger-scale data, e.g., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required for whole-genome, large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling.
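A minimal version of such a screen can be sketched with a gradient-boosted classifier whose feature importances rank the SNPs; this is an illustration of the idea on simulated genotypes, not the authors' implementation.

```python
# Illustrative boosting-based SNP screen: fit a gradient-boosted classifier
# on genotypes coded 0/1/2 and keep the SNPs with the largest importances.
# Data are simulated; a heterogeneous trait has two alternative causes.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n_samples, n_snps = 500, 1000
X = rng.integers(0, 3, size=(n_samples, n_snps))   # genotypes 0/1/2

# Heterogeneous trait: either SNP 10 or SNP 20 can drive case status.
y = ((X[:, 10] >= 2) | (X[:, 20] >= 2)).astype(int)

model = GradientBoostingClassifier(n_estimators=100, max_depth=2)
model.fit(X, y)

top = np.argsort(model.feature_importances_)[::-1][:5]
print("top candidate SNPs:", top)   # should surface SNPs 10 and 20
```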
Stable Short-Term Frequency Support Using Adaptive Gains for a DFIG-Based Wind Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jinsik; Jang, Gilsoo; Muljadi, Eduard
For the fixed-gain inertial control of wind power plants (WPPs), a large gain setting provides a large contribution to supporting system frequency control, but it may cause over-deceleration for a wind turbine generator that has a small amount of kinetic energy (KE). Further, if the wind speed decreases during inertial control, even a small gain may cause over-deceleration. This paper proposes a stable inertial control scheme using adaptive gains for a doubly fed induction generator (DFIG)-based WPP. The scheme aims to improve the frequency nadir (FN) while ensuring stable operation of all DFIGs, particularly when the wind speed decreases during inertial control. In this scheme, adaptive gains are set to be proportional to the KE stored in DFIGs, which is spatially and temporally dependent. To improve the FN, upon detecting an event, large gains are set to be proportional to the KE of DFIGs; to ensure stable operation, the gains decrease with the declining KE. The simulation results demonstrate that the scheme improves the FN while ensuring stable operation of all DFIGs in various wind and system conditions. Further, it prevents over-deceleration even when the wind speed decreases during inertial control.
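The essence of the scheme is to scale the inertial-control gain with the kinetic energy currently stored in the rotor, so that slow turbines contribute less and avoid over-deceleration. The sketch below uses an invented control law and invented numbers; it is not the paper's exact formulation.

```python
# Sketch of the adaptive-gain idea for DFIG inertial control: the gain is
# proportional to the kinetic energy currently stored in the rotor, so a
# slow (low-KE) turbine injects less power. All values are illustrative.
def kinetic_energy(J, omega):
    """Rotor kinetic energy in joules."""
    return 0.5 * J * omega**2

def inertial_power(J, omega, df_dt, k0=2.0):
    """Extra power command in response to the frequency derivative df_dt:
    a grid-frequency drop (df_dt < 0) yields positive support, weighted
    by the stored kinetic energy."""
    gain = k0 * kinetic_energy(J, omega)   # adaptive, KE-proportional gain
    return -gain * df_dt

J = 8.0e5                                  # equivalent inertia, kg*m^2
for omega in (1.6, 1.2, 0.8):              # rotor speeds, rad/s (toy values)
    print(f"omega={omega}: dP = {inertial_power(J, omega, -0.2):,.0f} W")
```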
Sources of material for 'loess' deposits at 15°N in North Africa
NASA Astrophysics Data System (ADS)
McLaren, Sue; Smalley, Ian; O'Hara-Dhand, Ken
2014-05-01
Africa is not a loess-rich continent. Lacking are the large expanses of glacial terrain and the high cold mountain regions which would have provided the material and processes for loess deposits. African geomorphology and climatic history did not favour the formation of major loess deposits. However, within the African setting there are situations which could lead to particle formation and loess deposition. Loess deposits are made from 'large' dust (i.e. particles around 30 µm). Small dust (around 3 µm) is generated in large amounts in Africa and distributed over large distances. Large dust is not generated in significant amounts in Africa, and this accounts for the relative lack of loess deposits. It is a relative lack; examination of the map of loess distribution in the world by Scheidig 1934 (still the best world loess map) shows some possible loess in Africa. In particular, there is a band across the continent at around 15°N. We propose some possible sources for this material, fit these sources into a recently revised deterministic model of loess deposit formation, and look at some exotic but possible indicators of the loessic nature of the 15°N band. Three possible material sources are: (1) the Fonta-Djalon highlands to the west of the loess band; (2) the Bodélé Depression, towards the centre of the loess band; and (3) the Ethiopian highlands to the east. There is a convenient river associated with the loess band; the Niger rises in the Fonta-Djalon region and carries material through the loess zone. The catchment of the Niger is well placed to receive large dust material from the Bodélé Depression. Most Bodélé material is small dust carried away in high suspension, but small amounts of large dust could be transported to the Niger catchment. Material from the Ethiopian highlands makes up the Nile silt, but again some could be transported to the west to contribute to the loess band, which is a modest loess deposit. The deposit can be examined with respect to the deterministic model of loess deposit formation, which sets out four event aspects that must be accommodated. PTDC: provenance (of material), transportation, deposition and change all need to be considered when loess deposit genesis is examined. In the case of the 15°N loess the P actions are speculative, and probably not very effective. In fact the deposits they deliver, as mapped by Scheidig, still have to be established as genuine loess. There are indicators of loessic nature; one is that these deposits are favoured by tunnel-nesting birds, in particular bee-eaters. Merops apiaster (the European bee-eater) travels large distances to nest in the European loess, and bee-eater nesting zones map nicely onto the European loess distribution. In the 15°N zone there is a concentration of nesting activity by the northern carmine bee-eater (Merops nubicus), and we take this as an indicator of ground nature.
Measurements of jet-related observables at the LHC
NASA Astrophysics Data System (ADS)
Kokkas, P.
2015-11-01
During the first years of the LHC operation a large amount of jet data was recorded by the ATLAS and CMS experiments. In this review several measurements of jet-related observables are presented, such as multi-jet rates and cross sections, ratios of jet cross sections, jet shapes and event shape observables. All results presented here are based on jet data collected at a centre-of-mass energy of 7 TeV. Data are compared to various Monte Carlo generators, as well as to theoretical next-to-leading-order calculations allowing a test of perturbative Quantum Chromodynamics in a previously unexplored energy region.
NASA Technical Reports Server (NTRS)
Banerian, G.
1977-01-01
The fundamentals of jet noise generation and suppression have been studied in great detail over the past twenty-five years. Considerable progress has been made recently in our understanding of this subject, though some aspects of it remain perplexing. The importance of accounting for the influence of the jet's mean flow in shrouding acoustic sources is now recognized, and the large amount of information obtained on jet noise reduction schemes, e.g., the internal mixer nozzle, the inverted profile nozzle and multi-element suppressors, has helped clarify trends and identify remaining issues. Current understanding of in-flight effects is limited and in need of much more attention.
Safeguarding structural data repositories against bad apples
Minor, Wladek; Dauter, Zbigniew; Helliwell, John R.; Jaskolski, Mariusz; Wlodawer, Alexander
2016-01-01
Structural biology research generates large amounts of data, some deposited in public databases/repositories, but a substantial remainder never becoming available to the scientific community. Additionally, some of the deposited data contain less or more serious errors that may bias the results of data mining. Thorough analysis and discussion of these problems is needed in order to ameliorate this situation. This note is an attempt to propose some solutions and encourage both further discussion and action on the part of the relevant organizations, in particular the Protein Data Bank and various bodies of the International Union of Crystallography. PMID:26840827
NASA Technical Reports Server (NTRS)
1993-01-01
C Language Integrated Production System (CLIPS), a NASA-developed expert systems program, has enabled a security systems manufacturer to design a new generation of hardware. C.CURESystem 1 Plus, manufactured by Software House, is a software-based system that is used with a variety of access control hardware at installations around the world. Users can manage large amounts of information, solve unique security problems and control entry and time scheduling. CLIPS acts as an information management tool when accessed by C.CURESystem 1 Plus. It asks questions about the hardware and, when given the answers, recommends possible solutions that can be implemented quickly by non-experts.
Realizing the financial benefits of capitation arbitrage.
Sussman, A J; Fairchild, D G; Colling, M C; Brennan, T A
1999-11-01
By anticipating the arbitrage potential of cash flow under budgeted capitation, healthcare organizations can make the best use of cash flow as a revenue-generating resource. Factors that determine the magnitude of the benefits for providers and insurers include settlement interval, withhold amount, which party controls the withhold, and incurred-but-not-reported expenses. In choosing how to structure these factors in their contract negotiations, providers and insurers should carefully assess whether capitation surpluses or deficits can be expected from the provider. In both instances, the recipient and magnitude of capitation arbitrage benefits are dictated largely by the performance of the provider.
2010 Program of Study: Swirling and Swimming in Turbulence
2011-06-01
Directed module detection in a large-scale expression compendium.
Fu, Qiang; Lemmens, Karen; Sanchez-Rodriguez, Aminael; Thijs, Inge M; Meysman, Pieter; Sun, Hong; Fierro, Ana Carolina; Engelen, Kristof; Marchal, Kathleen
2012-01-01
Public online microarray databases contain tremendous amounts of expression data. Mining these data sources can provide a wealth of information on the underlying transcriptional networks. In this chapter, we illustrate how the web services COLOMBOS and DISTILLER can be used to identify condition-dependent coexpression modules by exploring compendia of public expression data. COLOMBOS is designed for user-specified query-driven analysis, whereas DISTILLER generates a global regulatory network overview. The user is guided through both web services by means of a case study in which condition-dependent coexpression modules comprising a gene of interest (i.e., "directed") are identified.
Characterizing Marine Soundscapes.
Erbe, Christine; McCauley, Robert; Gavrilov, Alexander
2016-01-01
The study of marine soundscapes is becoming widespread and the amount of data collected is increasing rapidly. Data owners (typically academia, industry, government, and defense) are negotiating data sharing and generating potential for data syntheses, comparative studies, analyses of trends, and large-scale and long-term acoustic ecology research. A problem is the lack of standards and commonly agreed protocols for the recording of marine soundscapes, data analysis, and reporting that make a synthesis and comparison of results difficult. We provide a brief overview of the components in a marine soundscape, the hard- and software tools for recording and analyzing marine soundscapes, and common reporting formats.
Climate change and the vulnerability of electricity generation to water stress in the European Union
NASA Astrophysics Data System (ADS)
Behrens, Paul; van Vliet, Michelle T. H.; Nanninga, Tijmen; Walsh, Brid; Rodrigues, João F. D.
2017-08-01
Thermoelectric generation requires large amounts of water for cooling. Recent warm periods have led to curtailments in generation, highlighting concerns about security of supply. Here we assess EU-wide climate impacts for 1,326 individual thermoelectric plants and 818 water basins in 2020 and 2030. We show that, despite policy goals and a decrease in electricity-related water withdrawal, the number of regions experiencing some reduction in power availability due to water stress rises from 47 basins to 54 basins between 2014 and 2030, with further plants planned for construction in stressed basins. We examine the reasons for these pressures by including water demand for other uses. The majority of vulnerable basins lie in the Mediterranean region, with further basins in France, Germany and Poland. We investigate four adaptations, finding that increased future seawater cooling eases some pressures. This highlights the need for an integrated, basin-level approach in energy and water policy.
Critical review: Uncharted waters? The future of the electricity-water nexus.
Sanders, Kelly T
2015-01-06
Electricity generation often requires large amounts of water, most notably for cooling thermoelectric power generators and moving hydroelectric turbines. This so-called "electricity-water nexus" has received increasing attention in recent years by governments, nongovernmental organizations, industry, and academics, especially in light of increasing water stress in many regions around the world. Although many analyses have attempted to project the future water requirements of electricity generation, projections vary considerably due to differences in temporal and spatial boundaries, modeling frameworks, and scenario definitions. This manuscript is intended to provide a critical review of recent publications that address the future water requirements of electricity production and define the factors that will moderate the water requirements of the electric grid moving forward to inform future research. The five variables identified include changes in (1) fuel consumption patterns, (2) cooling technology preferences, (3) environmental regulations, (4) ambient climate conditions, and (5) electric grid characteristics. These five factors are analyzed to provide guidance for future research related to the electricity-water nexus.
Use of Advanced Solar Cells for Commercial Communication Satellites
NASA Technical Reports Server (NTRS)
Bailey, Sheila G.; Landis, Geoffrey A.
1995-01-01
The current generation of communications satellites are located primarily in geosynchronous Earth orbit (GEO). Over the next decade, however, a new generation of communications satellites will be built and launched, designed to provide a world-wide interconnection of portable telephones. For this mission, the satellites must be positioned in lower polar and near-polar orbits. To provide complete coverage, large numbers of satellites will be required. Because the required number of satellites decreases as the orbital altitude is increased, fewer satellites would be required if the orbit chosen were raised from low to intermediate orbit. However, in intermediate orbits, satellites encounter significant radiation due to trapped electrons and protons. Radiation tolerant solar cells may be necessary to make such satellites feasible. We analyze the amount of radiation encountered in low and intermediate polar orbits at altitudes of interest to next-generation communication satellites, calculate the expected degradation for silicon, GaAs, and InP solar cells, and show that the lifetimes can be significantly increased by use of advanced solar cells.
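The degradation calculation typically uses a semi-empirical fit of remaining power versus 1 MeV-equivalent electron fluence; the sketch below uses illustrative constants chosen only to show the expected ordering (InP most tolerant), not the paper's values.

```python
# Hedged sketch of a common semi-empirical fit for solar-cell degradation
# versus 1 MeV-equivalent electron fluence:
#   P/P0 = 1 - C * log10(1 + phi/phi_x).  Constants are illustrative.
import math

def remaining_power(phi, C, phi_x):
    """Fraction of beginning-of-life power after fluence phi (e-/cm^2)."""
    return 1.0 - C * math.log10(1.0 + phi / phi_x)

cells = {"Si": (0.22, 1e13), "GaAs": (0.18, 5e13), "InP": (0.12, 1e14)}
phi_mission = 3e14   # assumed mission fluence for an intermediate orbit
for name, (C, phi_x) in cells.items():
    print(f"{name:4s}: P/P0 = {remaining_power(phi_mission, C, phi_x):.2f}")
```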
Emission of bacteria and fungi in the air from wastewater treatment plants - a review.
Korzeniewska, Ewa
2011-01-01
An increase in global population, coupled with intensive development of industry and agriculture, has resulted in the generation and accumulation of large amounts of waste around the world. The spread of pathogenic microorganisms, endotoxins, odours and dust particles in the air is an inevitable consequence of waste production and waste management. Thus, the risk of infections associated with wastewater treatment plants (WWTPs) has become of particular importance in recent decades. Sewage and unstable sludge contain various pathogens such as viruses, bacteria, and human and animal parasites. These microorganisms can be transmitted to the ambient air in wastewater droplets, which are generated during aeration or mechanical agitation of the sewage. Bioaerosols generated during wastewater treatment may therefore pose a potential health hazard to workers at these plants or to inhabitants of the surrounding areas. The degree of human exposure to airborne bacteria, fungi, endotoxin and other allergens may vary significantly depending upon the type and capacity of a plant, the kind of facilities, the activities performed and the meteorological conditions.
Nuclear Thermal Rocket (NTR) Propulsion and Power Systems for Outer Planetary Exploration Missions
NASA Technical Reports Server (NTRS)
Borowski, S. K.; Cataldo, R. L.
2001-01-01
The high specific impulse (Isp) and engine thrust generated using liquid hydrogen (LH2)-cooled Nuclear Thermal Rocket (NTR) propulsion make NTR engines attractive for upper-stage applications on difficult robotic science missions to the outer planets. Besides high Isp and thrust, NTR engines can also be designed for "bimodal" operation, allowing substantial amounts of electrical power (tens of kWe) to be generated for onboard spacecraft systems and high-data-rate communications with Earth during the course of the mission. Two possible options for using the NTR are examined here. A high-performance injection stage utilizing a single 15 klbf thrust engine can inject large payloads to the outer planets using a 20 t-class launch vehicle when operated in an "expendable mode". A smaller bimodal NTR stage generating approx. 1 klbf of thrust and 20 to 40 kWe for electric propulsion can deliver payloads of approx. 100 kg using lower-cost launch vehicles. Additional information is contained in the original extended abstract.
Watanabe, Rikiya; Noji, Hiroyuki
2014-01-01
F1-ATPase (F1) is a rotary motor protein that couples ATP hydrolysis to mechanical rotation with high efficiency. In our recent study, we observed a highly temperature-sensitive (TS) step in the reaction catalyzed by a thermophilic F1 that was characterized by a rate constant remarkably sensitive to temperature and had a Q10 factor of 6–19. Since reactions with high Q10 values are considered to involve large conformational changes, we speculated that the TS reaction plays a key role in the rotation of F1. To clarify the role of the TS reaction, in this study, we conducted a stall and release experiment using magnetic tweezers, and assessed the torque generated during the TS reaction. The results indicate that the TS reaction generates the same amount of rotational torque as does ATP binding, but more than that generated during ATP hydrolysis. Thus, we confirmed that the TS reaction contributes significantly to the rotation of F1. PMID:24825532
Carvalho, A B; Sampaio, M C; Varandas, F R; Klaczko, L B
1998-01-01
Most sexually reproducing species have sexual proportions around 1:1. This major biological phenomenon remained unexplained until 1930, when Fisher proposed that it results from a mechanism of natural selection. Here we report the first experimental test of his model that obeys all its assumptions. We used a naturally occurring X-Y meiotic drive system--the sex-ratio trait of Drosophila mediopunctata--to generate female-biased experimental populations. As predicted by Fisher, these populations evolved toward equal sex proportions due to natural selection, by accumulation of autosomal alleles that direct the parental reproductive effort toward the rare sex. Classical Fisherian evolution is a rather slow mechanism: despite a very large amount of genetic variability, the experimental populations evolved from 16% males to 32% males in 49 generations and would take 330 generations (29 years) to reach 49%. This slowness has important implications for species potentially endangered by skewed sexual proportions, such as reptiles with temperature-dependent sex determination. PMID:9504919
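As a toy illustration of Fisher's mechanism (not part of the study: the strategy values, frequencies and the Shaw-Mohler fitness form are assumptions), a minimal haploid selection model in Python shows the slow, decelerating approach to 1:1:

    import numpy as np

    def shaw_mohler_fitness(m, p):
        # relative number of grandoffspring of a parent producing a
        # fraction m of sons when the population fraction of males is p
        return m / p + (1.0 - m) / (1.0 - p)

    m = np.array([0.16, 0.50])     # fraction of sons made by each strategy (assumed)
    freq = np.array([0.99, 0.01])  # initial strategy frequencies (assumed)

    for gen in range(201):
        p = float(freq @ m)                # population-wide fraction of males
        w = shaw_mohler_fitness(m, p)
        freq = freq * w / float(freq @ w)  # one generation of selection
        if gen % 50 == 0:
            print(f"generation {gen:3d}: fraction of males = {p:.3f}")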
The role of oxidative stress in the metabolic syndrome.
Whaley-Connell, Adam; McCullough, Peter A; Sowers, James R
2011-01-01
Loss of reduction-oxidation (redox) homeostasis and generation of excess free oxygen radicals play an important role in the pathogenesis of diabetes, hypertension, and consequent cardiovascular disease. Reactive oxygen species are integral to routine physiologic mechanisms. However, loss of redox homeostasis contributes to proinflammatory and profibrotic pathways that promote impairments in insulin metabolic signaling, reduced endothelial-mediated vasorelaxation, and associated cardiovascular and renal structural and functional abnormalities. Redox control of metabolic function is a dynamic process with reversible pro- and anti-free radical processes. Labile iron is necessary for the catalysis of superoxide anion and hydrogen peroxide into the damaging hydroxyl radical. Acute hypoxia and cellular damage in cardiovascular tissue liberate larger amounts of cytosolic and extracellular iron that is poorly liganded; thus, large increases in the generation of oxygen free radicals are possible, causing tissue damage. An understanding of iron and the imbalance of redox homeostasis within the vasculature is integral to understanding hypertension and the progression of metabolic dysregulation that contributes to insulin resistance, endothelial dysfunction, and cardiovascular and kidney disease.
ENGINES: exploring single nucleotide variation in entire human genomes.
Amigo, Jorge; Salas, Antonio; Phillips, Christopher
2011-04-19
Next generation ultra-sequencing technologies are starting to produce extensive quantities of data from entire human genome or exome sequences, and therefore new software is needed to present and analyse this vast amount of information. The 1000 Genomes project has recently released raw data for 629 complete genomes representing several human populations through their Phase I interim analysis and, although there are certain public tools available that allow exploration of these genomes, to date there is no tool that permits comprehensive population analysis of the variation catalogued by such data. We have developed a genetic variant site explorer able to retrieve data for Single Nucleotide Variation (SNVs), population by population, from entire genomes without compromising future scalability and agility. ENGINES (ENtire Genome INterface for Exploring SNVs) uses data from the 1000 Genomes Phase I to demonstrate its capacity to handle large amounts of genetic variation (>7.3 billion genotypes and 28 million SNVs), as well as deriving summary statistics of interest for medical and population genetics applications. The whole dataset is pre-processed and summarized into a data mart accessible through a web interface. The query system allows the combination and comparison of each available population sample, while searching by rs-number list, chromosome region, or genes of interest. Frequency and FST filters are available to further refine queries, while results can be visually compared with other large-scale Single Nucleotide Polymorphism (SNP) repositories such as HapMap or Perlegen. ENGINES is capable of accessing large-scale variation data repositories in a fast and comprehensive manner. It allows quick browsing of whole genome variation, while providing statistical information for each variant site such as allele frequency, heterozygosity or FST values for genetic differentiation. Access to the data mart generating scripts and to the web interface is granted from http://spsmart.cesga.es/engines.php. © 2011 Amigo et al; licensee BioMed Central Ltd.
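As a sketch of the per-site statistics such a browser reports (allele frequency, heterozygosity, FST), the following minimal Python computation derives Wright's FST from per-population allele counts; the counts are invented for illustration and this is not the ENGINES code:

    import numpy as np

    alt = np.array([120, 340, 55])   # alternate-allele counts per population (invented)
    n = np.array([400, 800, 200])    # sampled chromosomes per population (invented)

    p_pop = alt / n                  # per-population allele frequencies
    p_tot = alt.sum() / n.sum()      # pooled allele frequency

    h_s = np.average(2 * p_pop * (1 - p_pop), weights=n)  # mean within-population heterozygosity
    h_t = 2 * p_tot * (1 - p_tot)                         # total expected heterozygosity
    fst = (h_t - h_s) / h_t

    print(f"frequencies {p_pop.round(3)}, H_T {h_t:.3f}, FST {fst:.3f}")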
Worden, R P
1995-09-07
An upper bound on the speed of evolution is derived. The bound concerns the amount of genetic information which is expressed in observable ways in various aspects of the phenotype. The genetic information expressed in some part of the phenotype of a species cannot increase faster than a given rate, determined by the selection pressure on that part. This rate is typically a small fraction of a bit per generation. Total expressed genetic information cannot increase faster than a species-specific rate--typically a few bits per generation. These bounds apply to all aspects of the phenotype, but are particularly relevant to cognition. As brains are highly complex, we expect large amounts of expressed genetic information in the brain--of the order of 100 kilobytes--yet evolutionary changes in brain genetic information are only a fraction of a bit per generation. This has important consequences for cognitive evolution. The limit implies that the human brain differs from the chimpanzee brain by at most 5 kilobytes of genetic design information. This is not enough to define a Language Acquisition Device, unless it depends heavily on pre-existing primate symbolic cognition. Subject to the evolutionary speed limit, in changing environments a simple, modular brain architecture is fitter than more complex ones. This encourages us to look for simplicity in brain design, rather than expecting the brain to be a patchwork of ad hoc adaptations. The limit implies that pure species selection is not an important mechanism of evolutionary change.
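The scale of the bound can be checked with back-of-envelope arithmetic; the rate, divergence time and generation time below are assumed round numbers, not Worden's exact figures:

    rate_bits_per_gen = 0.1      # assumed expressed-information rate (bits/generation)
    years_since_split = 6e6      # assumed human-chimpanzee divergence time (years)
    generation_time = 15.0       # assumed generation time (years)

    generations = years_since_split / generation_time
    kilobytes = rate_bits_per_gen * generations / 8 / 1024
    print(f"{generations:.0f} generations -> about {kilobytes:.1f} KB")
    # ~400000 generations -> about 4.9 KB, the order of the 5 KB figure above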
Modeling of oxidation of aluminum nanoparticles by using Cabrera Mott Model
NASA Astrophysics Data System (ADS)
Ramazanova, Zamart; Zyskin, Maxim; Martirosyan, Karen
2012-10-01
Our research focuses on modeling new Nanoenergetic Gas-Generator (NGG) formulations that rapidly release a large amount of gaseous products and generate shock and pressure waves. Nanoenergetic thermite reagents include mixtures of Al and metal oxides such as bismuth trioxide and iodine pentoxide. We consider a spherically symmetric case and use the Cabrera-Mott oxidation model to describe the kinetics of oxide growth on spherical Al nanoparticles and to evaluate the reaction time, during which aluminum ions reaching the outer part of the oxide layer come into contact with the oxidizing agent and react. We assume that a ball of Al of radius 20 to 50 nm is covered by a thin oxide layer of 2-4 nm and is surrounded by an abundant amount of oxygen stored by the oxidizers. The ball is rapidly heated to the ignition temperature to initiate a self-sustaining, highly exothermic oxidation reaction. In the oxide layer, the excess concentrations of electrons and ions depend on the electric field potential through the corresponding Gibbs factors, which leads to a nonlinear Poisson equation for the electric field potential in a moving-boundary domain. The motion of the boundary is determined by the gradient of the solution on the boundary. We investigated the oxidation model numerically using the COMSOL finite element analysis software. The computed results demonstrate that the oxidation rate increases with decreasing particle radius.
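A drastically simplified, one-dimensional version of the Cabrera-Mott kinetics (growth rate proportional to exp(X1/X), where X is the oxide thickness and X1 a characteristic Mott length) conveys the qualitative behaviour; this sketch is not the paper's spherical moving-boundary model, and all parameter values are assumed:

    import numpy as np
    from scipy.integrate import solve_ivp

    u = 1e-2    # kinetic pre-factor (nm/ns), assumed
    X1 = 4.0    # characteristic Mott thickness (nm), assumed

    def growth(t, X):
        # inverse-logarithmic Cabrera-Mott law: fast growth while the
        # shell is thin, self-limiting as it thickens
        return [u * np.exp(X1 / X[0])]

    sol = solve_ivp(growth, (0.0, 50.0), [2.0], max_step=0.1)  # initial 2 nm shell
    print(f"oxide thickness after {sol.t[-1]:.0f} ns: {sol.y[0, -1]:.2f} nm")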
Aerodynamics and flow features of a damselfly in takeoff flight.
Bode-Oke, Ayodeji T; Zeyghami, Samane; Dong, Haibo
2017-09-26
Flight initiation is fundamental for survival, escape from predators and lifting payload from one place to another in biological fliers, and can be broadly classified into jumping and non-jumping takeoffs. During jumping takeoffs, the legs generate most of the initial impulse, whereas in non-jumping takeoffs, which are usually voluntary, slow, and stable, the wings generate most of the forces. It is of great interest to understand how these non-jumping takeoffs occur and what strategies insects use to generate the large forces required for this highly demanding flight initiation mode. Here, for the first time, we report accurate wing and body kinematics measurements of a damselfly during a non-jumping takeoff. Furthermore, using a high-fidelity computational fluid dynamics simulation, we identify the 3D flow features and compute the wing aerodynamic forces to unravel the key mechanisms responsible for generating large flight forces. Our numerical results show that a damselfly generates about three times its body weight during the first half-stroke for liftoff. In generating these forces, the wings flap through a steeply inclined stroke plane with respect to the horizon, slicing through the air at high angles of attack (45°-50°). Consequently, a leading edge vortex (LEV) is formed during both the downstroke and upstroke on all four wings. The formation of the LEV, however, is inhibited in the subsequent upstrokes following takeoff. Accordingly, we observe a drastic reduction in the magnitude of the aerodynamic force, signifying the importance of the LEV in augmenting force production. Our analysis also shows that forewing-hindwing interaction plays a favorable role in enhancing both lift and thrust production during takeoff.
Software framework for prognostic health monitoring of ocean-based power generation
NASA Astrophysics Data System (ADS)
Bowren, Mark
On August 5, 2010, the U.S. Department of Energy (DOE) designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean energy research and the development of prototypes for open-ocean power generation. Maintenance on ocean-based machinery can be very costly. To avoid unnecessary maintenance, it is necessary to monitor the condition of each machine in order to predict problems. This kind of prognostic health monitoring (PHM) requires a condition-based maintenance (CBM) system that supports diagnostic and prognostic analysis of large amounts of data. Research in this field led to the creation of ISO 13374 and the development of a standard open architecture for machine condition monitoring. This thesis explores an implementation of such a system for ocean-based machinery using this framework and current open-standard technologies.
A Framework for the Design of Effective Graphics for Scientific Visualization
NASA Technical Reports Server (NTRS)
Miceli, Kristina D.
1992-01-01
This proposal presents a visualization framework, based on a data model, that supports the production of effective graphics for scientific visualization. Visual representations are effective only if they augment comprehension of the increasing amounts of data being generated by modern computer simulations. These representations are created by taking into account the goals and capabilities of the scientist, the type of data to be displayed, and software and hardware considerations. This framework is embodied in an assistant-based visualization system to guide the scientist through the visualization process. This will improve the quality of the visualizations and decrease the time the scientist is required to spend in generating them. I intend to prove that such a framework will create a more productive environment for the analysis and interpretation of large, complex data sets.
Quantum-locked key distribution at nearly the classical capacity rate.
Lupo, Cosmo; Lloyd, Seth
2014-10-17
Quantum data locking is a protocol that allows for a small secret key to (un)lock an exponentially larger amount of information, hence yielding the strongest violation of the classical one-time pad encryption in the quantum setting. This violation mirrors a large gap existing between two security criteria for quantum cryptography quantified by two entropic quantities: the Holevo information and the accessible information. We show that the latter becomes a sensible security criterion if an upper bound on the coherence time of the eavesdropper's quantum memory is known. Under this condition, we introduce a protocol for secret key generation through a memoryless qudit channel. For channels with enough symmetry, such as the d-dimensional erasure and depolarizing channels, this protocol allows secret key generation at an asymptotic rate as high as the classical capacity minus one bit.
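The headline rate claim is easy to evaluate for a channel whose classical capacity has a closed form, e.g. the d-dimensional erasure channel with capacity (1 - p) log2 d; a small sketch with assumed d and p (not figures from the paper):

    import math

    d, p = 1024, 0.1                    # assumed qudit dimension and erasure probability
    capacity = (1 - p) * math.log2(d)   # classical capacity of the qudit erasure channel
    key_rate = capacity - 1             # "classical capacity minus one bit"
    print(f"capacity = {capacity:.2f} bits/use, locking key rate ~ {key_rate:.2f} bits/use")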
Data Prospecting Framework - a new approach to explore "big data" in Earth Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Rushing, J.; Lin, A.; Kuo, K.
2012-12-01
Due to advances in sensors, computation and storage, the cost and effort required to produce large datasets have been significantly reduced. As a result, we are seeing a proliferation of large-scale datasets being assembled in almost every science field, especially in the geosciences. Opportunities to exploit this "big data" are enormous, as new hypotheses can be generated by combining and analyzing large amounts of data. However, such a data-driven approach to science discovery assumes that scientists can find and isolate relevant subsets from the vast amounts of available data. Current Earth Science data systems only provide data discovery through simple metadata and keyword-based searches and are not designed to support data exploration capabilities based on the actual content. Consequently, scientists often find themselves downloading large volumes of data, struggling with large amounts of storage and learning new analysis technologies that will help them separate the wheat from the chaff. New mechanisms of data exploration are needed to help scientists discover the relevant subsets. We present data prospecting, a new content-based data analysis paradigm to support data-intensive science. Data prospecting allows researchers to explore big data when determining and isolating data subsets for further analysis. This is akin to geo-prospecting, in which mineral sites of interest are determined over the landscape through screening methods. The resulting "data prospects" only provide an interaction with and feel for the data through first-look analytics; researchers would still have to download the relevant datasets and analyze them deeply using their favorite analytical tools to determine whether the datasets will yield new hypotheses. Data prospecting combines two traditional categories of data analysis, data exploration and data mining, within the discovery step. Data exploration utilizes manual/interactive methods for data analysis, such as standard statistical analysis and visualization, usually on small datasets. Data mining, on the other hand, utilizes automated algorithms to extract useful information; humans guide these automated algorithms and specify algorithm parameters (training samples, clustering size, etc.). Data prospecting combines these two approaches using high-performance computing and new techniques for efficient distributed file access.
The Origin of Life in a Terrestrial Hydrothermal Pool? The Importance of a Catalytic Surface
NASA Astrophysics Data System (ADS)
Sydow, L. A.; Bennett, P.
2013-12-01
A premise of one chemoautotrophic theory for the origin of life is that a recurring reaction catalyzed on the charged surfaces of pyrite served as the first metabolism and was later enveloped by a primitive cellular membrane. This proposed 'surface metabolism' is analogous to the reductive acetyl-CoA pathway (Wächtershäuser 1988) and requires the abiotic formation of methanethiol (CH3SH), the simplest of the alkyl thiols, which would serve the role of coenzyme-A in the surface metabolism. Abiogenic CH3SH has not previously been identified in terrestrial hot springs, but it has been produced in the laboratory under hydrothermal conditions in the presence of a catalyst, usually FeS. Its formation would occur via the following reactions, with reaction 2 requiring catalysis: CO2 + 2H2S --> CS2 + 2H2O (1); CS2 + 3H2 --> CH3SH + H2S (2). We have identified CH3SH in Cinder Pool, an acid-sulfate-chloride hot spring in Yellowstone National Park. This spring is unusual in that it contains a subaqueous molten sulfur layer (~18 m depth) and thousands of iron-sulfur spherules floating on the surface, which are created by gas bubbling through the molten floor of the spring. Analysis with EDS has shown that the cinder material, largely composed of elemental sulfur, also contains trace iron sulfide minerals, meaning it could serve as a reactive and catalytic surface for abiogenic CH3SH formation in Cinder Pool. Furthermore, the cinders themselves are highly porous, and these void spaces could trap the necessary reactants near the catalytic surface. Gas samples were collected from Cinder Pool in the fall of 2011 using the bubble strip method. One sample contained measurable quantities of CH3SH, and all samples contained related reactant sulfur gases: large amounts of H2S and smaller amounts of CS2 and dimethyl disulfide. Laboratory microcosm experiments were conducted to replicate these findings in a sterile environment to ensure that CH3SH generation was abiotic. Analog Cinder Pool water and either FeS, FeS2, or cinders collected from the pool itself were incubated with H2, CO2, and CS2 as reaction gases for over a week at pool temperatures. An experiment was also conducted without CS2 to see if the solid materials could act as the sole source of sulfur. All of the experimental solids were capable of catalyzing CH3SH production when CS2 was added. Without added CS2, however, only the cinders produced CH3SH, while the FeS and FeS2 bottles could only make small amounts of H2S. Presumably, the large amount of elemental sulfur in the cinders creates enough H2S for the subsequent generation of CS2 and then CH3SH, which is catalyzed by the trace iron sulfides they also contain. Cinders act as an excellent reaction surface for CH3SH generation, and a similar material may have played a significant part in the advent of life on Earth.
Flood triggering in Switzerland: the role of daily to monthly preceding precipitation
NASA Astrophysics Data System (ADS)
Froidevaux, P.; Schwanbeck, J.; Weingartner, R.; Chevalier, C.; Martius, O.
2015-03-01
Determining the role of different precipitation periods in peak discharge generation is crucial both for projecting future changes in flood probability and for short- and medium-range flood forecasting. We analyze catchment-averaged daily precipitation time series prior to annual peak discharge events (floods) in Switzerland. The large number of floods considered - more than 4000 events from 101 catchments - allows us to derive significant information about the role of antecedent precipitation in peak discharge generation. Based on the analysis of the precipitation time series, we propose a new separation of flood-related precipitation periods: (i) the period 0 to 1 day before flood days, when the maximum flood-triggering precipitation rates are generally observed; (ii) the period 2 to 3 days before flood days, when longer-lasting synoptic situations generate "significantly higher than normal" precipitation amounts; and (iii) the period from 4 days to one month before flood days, when previous wet episodes may have already preconditioned the catchment. The novelty of this study lies in the separation of antecedent precipitation into precursor antecedent precipitation (4 days before floods or earlier, called PRE-AP) and short-range precipitation (0 to 3 days before floods, a period when precipitation is often driven by one persistent weather situation, e.g. a stationary low-pressure system). Because we consider a high number of events and because we work with daily precipitation values, we do not separate the "antecedent" and "peak-triggering" precipitation; the whole precipitation recorded during the flood day is included in the short-range antecedent precipitation. The precipitation accumulating 0 to 3 days before an event is the most relevant for floods in Switzerland. PRE-AP precipitation has only a weak and region-specific influence on flood probability. Floods were significantly more frequent after wet PRE-AP periods only in the Jura Mountains, in the western and eastern Swiss Plateau, and at the exit of large lakes. As a general rule, wet PRE-AP periods enhance the flood probability in catchments with gentle topography, high infiltration rates, and large storage capacity (karstic cavities, deep soils, large reservoirs). In contrast, floods were significantly less frequent after wet PRE-AP periods in glacial catchments because of reduced melt. For the majority of catchments, however, no significant correlation between precipitation amounts and flood occurrence is found when the last three days before floods are omitted from the precipitation amounts. Moreover, PRE-AP was not higher for extreme floods than for annual floods with a high frequency, and was very close to climatology for all floods. The weak influence of PRE-AP is a clear indicator of the short discharge memory of Prealpine, Alpine and Southalpine Swiss catchments. Our study nevertheless raises the question of whether the impact of long-term precursory precipitation on floods in such catchments is overestimated in the general perception. We conclude that the consideration of a 3-4 day precipitation period should be sufficient to represent (understand, reconstruct, model, project) Swiss Alpine floods.
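The three precipitation windows are straightforward to compute from a catchment-averaged daily series; below is a hedged sketch with synthetic data and hypothetical flood dates, not the study's catchments:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    days = pd.date_range("1980-01-01", "2010-12-31", freq="D")
    precip = pd.Series(rng.gamma(shape=0.4, scale=8.0, size=len(days)), index=days)

    flood_days = pd.to_datetime(["1999-05-12", "2005-08-22", "2007-08-09"])  # hypothetical

    def window_sum(series, day, first, last):
        # total precipitation from `last` down to `first` days before `day`, inclusive
        return series.loc[day - pd.Timedelta(days=last): day - pd.Timedelta(days=first)].sum()

    for day in flood_days:
        triggering = window_sum(precip, day, 0, 1)   # short-range, flood-triggering
        synoptic = window_sum(precip, day, 2, 3)     # persistent weather situation
        pre_ap = window_sum(precip, day, 4, 30)      # precursor antecedent (PRE-AP)
        print(day.date(), f"0-1d={triggering:5.1f}  2-3d={synoptic:5.1f}  4-30d={pre_ap:6.1f} mm")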
On Feature Extraction from Large Scale Linear LiDAR Data
NASA Astrophysics Data System (ADS)
Acharjee, Partha Pratim
Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and demonstrated their use in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation and updating, and the hydro-flattening of LiDAR data for many other applications, are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human intervention. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features, such as the flatness of the water surface and the larger elevation change at the water-land interface, and optical properties, such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated through automated and intelligent windowing, by resolving boundary issues and integrating all results into a single output. The whole algorithm is implemented as an ArcGIS toolbox using Python libraries. Testing and validation were performed on large datasets to determine the effectiveness of the toolbox, and the results are presented. Significant power demand is located in urban areas, where, theoretically, a large amount of building surface area is also available for solar panel installation. Therefore, property owners and power generation companies can benefit from a citywide solar potential map, which can provide the estimated available annual solar energy at a given location. An efficient solar potential measurement is a prerequisite for an effective solar energy system in an urban area. In addition, solar potential calculation from rooftops and building facades could open up a wide variety of options for solar panel installations. However, complex urban scenes make it hard to estimate the solar potential, partly because of shadows cast by buildings. LiDAR-based 3D city models could be the right technology for solar potential mapping. However, most current LiDAR-based local solar potential assessment algorithms mainly address rooftop potential calculation, whereas building facades can contribute a significant amount of viable surface area for solar panel installation. Here, we introduce a new algorithm to calculate the solar potential of both rooftops and building facades. The solar potential received by the rooftops and facades over the year is also investigated in the test area.
Generation of Complex Karstic Conduit Networks with a Hydro-chemical Model
NASA Astrophysics Data System (ADS)
De Rooij, R.; Graham, W. D.
2016-12-01
The discrete-continuum approach is very well suited to simulating flow and solute transport within karst aquifers. In this approach, discrete one-dimensional conduits are embedded within a three-dimensional continuum representative of the porous limestone matrix. Typically, however, little is known about the geometry of the karstic conduit network, and as such the discrete-continuum approach is rarely used for practical applications. It may be argued, however, that the uncertainty associated with the geometry of the network could be handled by modeling an ensemble of possible karst conduit networks within a stochastic framework. We propose to generate stochastically realistic karst conduit networks by simulating the widening of conduits caused by the dissolution of limestone over geologically relevant timescales. We illustrate that advanced numerical techniques permit solving the non-linear, coupled hydro-chemical processes efficiently, such that relatively large and complex networks can be generated in acceptable time frames. Instead of specifying flow boundary conditions on conduit cells to recharge the network, as is typically done in classical speleogenesis models, we specify an effective rainfall rate over the land surface and let the model physics determine the amount of water entering the network. This is advantageous since the amount of water entering the network is extremely difficult to reconstruct, whereas the effective rainfall rate may be quantified using paleoclimatic data. Furthermore, we show that poorly known flow conditions may be constrained by requiring a realistic flow field. Using our speleogenesis model, we have investigated the factors that influence the geometry of simulated conduit networks, and we illustrate that the model generates typical branchwork, network and anastomotic conduit systems. Flow, solute transport and water ages in karst aquifers are then simulated using a few illustrative networks.
Context-sensitive trace inlining for Java.
Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter
2013-12-01
Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for Java by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still kept reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding and null check elimination.
New by-products rich in bioactive substances from the olive oil mill processing.
Romero, Concepción; Medina, Eduardo; Mateo, Maria Antonia; Brenes, Manuel
2018-01-01
Olive oil extraction generates a large amount of residue, consisting mainly of pomace and leaves when a two-phase centrifugation system is used. The aim of this study was to assess the content of phenolic and triterpene compounds in the by-products produced in Spanish olive oil mills. Olive pomace had concentrations of phenolic and triterpene substances lower than 2 and 3 g kg-1, respectively. The leaves contained a high concentration of these substances, although those collected from ground-picked olives had lost most of their phenolic compounds. Moreover, the sediment from the bottom of the olive oil storage tanks did not have a significant amount of these substances. By contrast, a new by-product called olive pomace skin has been revealed as a very rich source of triterpenic acids, the content of which can reach up to 120 g kg-1 in this waste product, with maslinic acid comprising around 70% of total triterpenics. Among the by-products generated during extraction of olive oil, olive pomace skin has thus been discovered to be a very rich source of triterpenic acids. These results will contribute to the valorization of olive oil by-products. © 2017 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
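A minimal sketch of this kind of lossy wavelet compression applied to one pixel's spectral vector, assuming the PyWavelets package and synthetic data; the wavelet, decomposition level and threshold are arbitrary choices, not the paper's settings:

    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    bands = 224                                  # e.g. an AVIRIS-like channel count (assumed)
    pixel = np.cumsum(rng.normal(size=bands))    # smooth synthetic spectrum

    coeffs = pywt.wavedec(pixel, "db4", level=4)  # forward wavelet transform
    arr, slices = pywt.coeffs_to_array(coeffs)

    keep = 0.10                                   # keep only the largest 10% of coefficients
    threshold = np.quantile(np.abs(arr), 1 - keep)
    arr[np.abs(arr) < threshold] = 0.0            # discard small coefficients (lossy step)

    recon = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format="wavedec"), "db4")
    err = np.linalg.norm(recon[:bands] - pixel) / np.linalg.norm(pixel)
    print(f"nonzero coefficients: {(arr != 0).sum()}/{arr.size}, relative error: {err:.3%}")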
de Lourdes Mora-García, María; García-Rocha, Rosario; Morales-Ramírez, Omar; Montesinos, Juan José; Weiss-Steider, Benny; Hernández-Montes, Jorge; Ávila-Ibarra, Luis Roberto; Don-López, Christian Azucena; Velasco-Velázquez, Marco Antonio; Gutiérrez-Serrano, Vianey; Monroy-García, Alberto
2016-10-26
In recent years, immunomodulatory mechanisms of mesenchymal stem/stromal cells (MSCs) from bone marrow and other "classic" sources have been described. However, the phenotypic and functional properties of tumor MSCs are poorly understood. The aim of this study was to analyze the immunosuppressive capacity of cervical cancer-derived MSCs (CeCa-MSCs) on effector T lymphocytes through the purinergic pathway. We determined the expression and functional activity of the membrane-associated ectonucleotidases CD39 and CD73 on CeCa-MSCs and normal cervical tissue-derived MSCs (NCx-MSCs). We also analyzed their immunosuppressive capacity to decrease proliferation, activation and effector cytotoxic T (CD8+) lymphocyte function through the generation of adenosine (Ado). We detected that CeCa-MSCs express higher levels of CD39 and CD73 ectonucleotidases in cell membranes compared to NCx-MSCs, and that this feature was associated with the ability to strongly suppress the proliferation, activation and effector functions of cytotoxic T-cells through the generation of large amounts of Ado from the hydrolysis of ATP, ADP and AMP nucleotides. This study suggests that CeCa-MSCs play an important role in the suppression of the anti-tumor immune response in CeCa through the purinergic pathway.
NASA Astrophysics Data System (ADS)
Takeda, Keigo; Ishikawa, Kenji; Tanaka, Hiromasa; Kano, Hiroyuki; Sekine, Makoto; Hori, Masaru
2013-09-01
The non-equilibrium atmospheric pressure plasma jet (NEAPPJ) is a very attractive tool for biological and medical applications. In plasma treatments, samples are typically located in a region far from the main discharge and are treated in open air without purge gases. The influence of air engulfment on the generation of activated species in the NEAPPJ in open air is a major issue for these applications. In this study, an AC-excited argon NEAPPJ with a gas flow rate of 2 slm was generated under open-air conditions. The densities of ground-state nitrogen monoxide (NO) and ground-state O atoms generated by the NEAPPJ were measured by laser-induced fluorescence spectroscopy and vacuum ultraviolet absorption spectroscopy. The length of the plasma jet was around 10 mm. Up to 10 mm, the NO density increased with increasing distance from the plasma head and then saturated in the remote region of the plasma. On the other hand, the O atom density decreased from 10^14 to 10^13 cm^-3 with increasing distance; notably, the decrease in O atom density was largest at the plasma edge. We discuss the generation and loss processes of the activated species in the NEAPPJ on the basis of these spectroscopic measurements.
NASA Astrophysics Data System (ADS)
Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.
2015-12-01
As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate change, the combination of Earth observation data and global climate model projections is crucial not only to scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyberinfrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data, with limited capabilities for server-side processing. Since the amount of data for future ARs is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point for high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capability. This presentation will highlight the WPS API and its capabilities, provide implementation details, and discuss future developments.
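A hypothetical client-side call illustrating the standard WPS key-value-pair pattern; the endpoint URL, operation identifier and datainputs syntax below are placeholders, not the actual ESGF CWT API (consult the CWT service description for the real names):

    import requests

    WPS_URL = "https://esgf-node.example.gov/wps"    # placeholder endpoint

    params = {
        "service": "WPS",              # standard WPS key-value-pair encoding
        "request": "Execute",
        "version": "1.0.0",
        "identifier": "average",       # hypothetical server-side operation name
        "datainputs": "variable=tas;domain=...",  # elided, server-specific syntax
    }

    response = requests.get(WPS_URL, params=params, timeout=60)
    print(response.status_code)        # the reply is an XML WPS status document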
NASA Astrophysics Data System (ADS)
Sun, Qiliang; Alves, Tiago M.; Lu, Xiangyang; Chen, Chuanxu; Xie, Xinong
2018-03-01
Submarine slope failure can mobilize large amounts of seafloor sediment, as shown in varied offshore locations around the world. Submarine landslide volumes are usually estimated by mapping their tops and bases on seismic data. However, two essential components of the total volume of failed sediments are overlooked in most estimates: (a) the volume of subseismic turbidites generated during slope failure and (b) the volume of shear compaction occurring during the emplacement of failed sediment. In this study, the true volume of a large submarine landslide in the northern South China Sea is estimated using seismic, multibeam bathymetry and Ocean Drilling Program/Integrated Ocean Drilling Program well data. The submarine landslide was evacuated on the continental slope and deposited in an ocean basin connected to the slope through a narrow moat. This particular character of the sea floor provides an opportunity to estimate the amount of strata remobilized by slope instability. The imaged volume of the studied landslide is 1035 ± 64 km3, 406 ± 28 km3 on the slope and 629 ± 36 km3 in the ocean basin. The volume of subseismic turbidites is 86 km3 (median value), and the volume of shear compaction is 100 km3, which are 8.6% and 9.7% of the landslide volume imaged on seismic data, respectively. This study highlights that the original volume of the failed sediments is significantly larger than that estimated using seismic and bathymetric data. Volume loss related to the generation of landslide-related turbidites and shear compaction must be considered when estimating the total volume of failed strata in the submarine realm.
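The volume bookkeeping reduces to simple arithmetic; a few lines restating the reported figures (median values, uncertainties omitted):

    imaged = 1035.0    # km^3 imaged on seismic data (406 on the slope + 629 in the basin)
    turbidites = 86.0  # km^3 of subseismic turbidites (median estimate)
    compaction = 100.0 # km^3 lost to shear compaction during emplacement

    total = imaged + turbidites + compaction
    print(f"total failed volume ~ {total:.0f} km^3, "
          f"{total / imaged - 1:.0%} larger than the seismically imaged volume")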
NASA Astrophysics Data System (ADS)
Drost, Edwin J. F.; Lowe, Ryan J.; Ivey, Greg N.; Jones, Nicole L.; Péquignet, Christine A.
2017-05-01
The numerical wave model SWAN (Simulating WAves Nearshore) and historical wave buoy observations were used to investigate the response of surface wave fields to tropical cyclone (TC) wind forcing on the Australian North West Shelf (NWS). Analysis of historical wave data during TC events at a key location on the NWS showed that an average of 1.7 large TCs impacted the region each year, albeit with high variability in TC track, intensity and size, and also in the surface wave field response. An accurately modeled TC wind field resulted in a good prediction of the observed extreme wave conditions by SWAN. Results showed that the presence of strong background winds during a TC and a long TC lifetime (with large variations in translation speed) can provide additional energy input. This potentially enhances the generated swell waves and increases the spatial extent of the TC generated surface wave fields. For the TC translation speeds in this study, a positive relationship between TC translation speed and the resulting maximum significant wave height and wave field asymmetry was observed. Bottom friction across the wide NWS limited the amount of wave energy reaching the coastal region; consistently reducing wave energy in depths below 50 m, and in the case of the most extreme conditions, in depths up to 100 m that comprise much of the shelf. Nevertheless, whitecapping was still the dominant dissipation mechanism on the broader shelf region. Shelf-scale refraction had little effect on the amount of wave energy reaching the nearshore zone; however, refraction locally enhanced or reduced wave energy depending on the orientation of the isobaths with respect to the dominant wave direction during the TC.
Wang, Michael T. M.; Bolland, Mark J.; Gamble, Greg; Grey, Andrew
2015-01-01
Background Publication of clinical research findings in prominent journals influences health beliefs and medical practice, in part by engendering news coverage. Randomized controlled trials (RCTs) should be most influential in guiding clinical practice. We determined whether study design of clinical research published in high-impact journals influences media coverage. Methods and Findings We compared the incidence and amount of media coverage of RCTs with that of observational studies published in the top 7 medical journals between 1 January 2013 and 31 March 2013. We specifically assessed media coverage of the most rigorous RCTs, those with >1000 participants that reported ‘hard’ outcomes. There was no difference between RCTs and observational studies in coverage by major newspapers or news agencies, or in total number of news stories generated (all P>0.63). Large RCTs reporting ‘hard’ outcomes did not generate more news coverage than small RCTs that reported surrogate outcomes and observational studies (all P>0.32). RCTs were more likely than observational studies to attract a journal editorial (70% vs 46%, P = 0.003), but less likely to be the subject of a journal press release (17% vs 50%, P<0.001). Large RCTs that reported ‘hard’ outcomes did not attract an editorial more frequently than other studies (61% vs 58%, P>0.99), nor were they more likely to be the subject of a journal press release (14% vs 38%, P = 0.14). Conclusions The design of clinical studies whose results are published in high-impact medical journals is not associated with the likelihood or amount of ensuing news coverage. PMID:26701758
Kumar, Vinod; Goel, Rajeev; Chawla, Raman; Silambarasan, M.; Sharma, Rakesh Kumar
2010-01-01
Chemical, biological, radiological, and nuclear (CBRN) decontamination is the removal of CBRN material from equipment or humans. The objective of decontamination is to reduce the radiation burden, salvage equipment and materials, remove loose CBRN contaminants, and fix the remainder in place in preparation for protective storage or permanent disposal work activities. Decontamination may be carried out using chemical, electrochemical, and mechanical means. Like materials, humans may also be contaminated with CBRN agents. Changes in cellular function can occur at low radiation doses and upon exposure to chemicals; at high doses, cell death may take place. Therefore, decontamination of humans at the time of an emergency, while generating the bare minimum of waste, is an enormous task requiring the dedication of a large number of personnel and a large amount of time. General principles of CBRN decontamination are discussed in this review, with emphasis on radiodecontamination. PMID:21829318
Facilitating access to information in large documents with an intelligent hypertext system
NASA Technical Reports Server (NTRS)
Mathe, Nathalie
1993-01-01
Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.
Recycling of metal bearing electronic scrap in a plasma furnace
NASA Astrophysics Data System (ADS)
Jarosz, Piotr; Małecki, Stanisław; Gargul, Krzysztof
2011-12-01
The recycling of electronic waste and the recovery of valuable components are major problems in the modern world economy. This paper presents the effects of melting sorted electronic scrap in a plasma furnace. Printed circuit boards, cables, and windings were processed separately. The characteristics of the obtained products (i.e., metal alloy, slag, dust, and gases) are presented, and a method for their further processing to obtain commercial products is proposed. Because of its chemical composition and physical properties, the waste slag is environmentally inert and can be used for the production of abrasives. Process dusts containing large amounts of carbon and its compounds have a high calorific value, which makes it possible to use them for energy generation. The gas also has a high calorific value, and its afterburning combined with energy recovery is necessary.
Uvf - Unified Volume Format: A General System for Efficient Handling of Large Volumetric Datasets.
Krüger, Jens; Potter, Kristin; Macleod, Rob S; Johnson, Christopher
2008-01-01
With the continual increase in computing power, volumetric datasets with sizes ranging from only a few megabytes to petascale are generated thousands of times per day. Such data may come from an ordinary source such as simple everyday medical imaging procedures, while larger datasets may be generated from cluster-based scientific simulations or measurements of large scale experiments. In computer science an incredible amount of work worldwide is put into the efficient visualization of these datasets. As researchers in the field of scientific visualization, we often have to face the task of handling very large data from various sources. This data usually comes in many different data formats. In medical imaging, the DICOM standard is well established, however, most research labs use their own data formats to store and process data. To simplify the task of reading the many different formats used with all of the different visualization programs, we present a system for the efficient handling of many types of large scientific datasets (see Figure 1 for just a few examples). While primarily targeted at structured volumetric data, UVF can store just about any type of structured and unstructured data. The system is composed of a file format specification with a reference implementation of a reader. It is not only a common, easy to implement format but also allows for efficient rendering of most datasets without the need to convert the data in memory.
Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J
2018-01-01
There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Design of capacity incentive and energy compensation for demand response programs
NASA Astrophysics Data System (ADS)
Liu, Zhoubin; Cui, Wenqi; Shen, Ran; Hu, Yishuang; Wu, Hui; Ye, Chengjin
2018-02-01
Variability and uncertainty caused by renewable energy sources have called for a large amount of balancing services. Demand side resources (DSRs) can be a good alternative to traditional generating units for providing balancing services. In areas where the electricity market has not been fully established, e.g., China, DSRs can help balance the power system through incentive-based demand response programs. However, there is a lack of information about the interruption costs of consumers in these areas, making it hard to determine the rational amount of capacity incentive and energy compensation for the participants of demand response programs. This paper proposes an algorithm to calculate the amount of capacity incentive and energy compensation for demand response programs when information about interruption costs is lacking. Available statistical information on interruption costs in referenced areas is selected as the reference data. The interruption cost of the targeted area is converted from that of the referenced area on the basis of product per unit of electricity consumption. On this basis, the capacity incentive and energy compensation are chosen so as to minimize the payment to consumers, while the losses of consumers are guaranteed to be covered by the revenue they earn from load serving entities.
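One plausible reading of this design problem is a small linear program: choose a capacity rate and an energy rate that minimize the total payment while covering each participant's converted interruption cost. The sketch below is an assumption-laden toy with invented participants, not the paper's algorithm:

    import numpy as np
    from scipy.optimize import linprog

    cap = np.array([2.0, 5.0, 1.5])       # MW of curtailable capacity per participant (invented)
    en = np.array([4.0, 12.0, 2.0])       # MWh actually curtailed per participant (invented)
    cost = np.array([900., 2600., 500.])  # converted interruption cost per participant (invented)

    # decision variables x = [capacity rate ($/MW), energy rate ($/MWh)]
    c = np.array([cap.sum(), en.sum()])   # total payment to minimize
    A_ub = -np.column_stack([cap, en])    # -(cap_i*x0 + en_i*x1) <= -cost_i (coverage)
    b_ub = -cost

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(f"capacity rate {res.x[0]:.1f} $/MW, energy rate {res.x[1]:.1f} $/MWh, "
          f"total payment {res.fun:.0f} $")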
Estimation of construction and demolition waste using waste generation rates in Chennai, India.
Ram, V G; Kalidindi, Satyanarayana N
2017-06-01
A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.
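A minimal sketch of fitting a waste generation rate by regression through the origin and scaling it city-wide; the data points and the city-wide demolished area are synthetic stand-ins, not the study's 45 case studies:

    import numpy as np

    floor_area = np.array([120., 340., 560., 800., 1500.])   # m^2 of demolished buildings (invented)
    waste = np.array([150., 410., 700., 980., 1850.])        # tonnes of debris measured (invented)

    # least-squares fit of waste = rate * floor_area (regression through the origin)
    rate = np.linalg.lstsq(floor_area[:, None], waste, rcond=None)[0][0]

    city_demolished_area = 0.9e6   # m^2 demolished in the city in a year (assumed)
    print(f"rate = {rate:.2f} t/m^2 -> city-wide estimate "
          f"{rate * city_demolished_area / 1e6:.2f} million tonnes")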
Characterization of food waste generators: a Hawaii case study.
Okazaki, W K; Turn, S Q; Flachsbart, P G
2008-12-01
Information on food waste disposal and on recycling methods and recycled amounts is reported. Data were obtained from a mail and phone survey of all licensed food establishments in Hawaii conducted in 2004 and 2005. Of 8253 licensed food establishments, 5033 completed surveys. It was found that relationships exist between food establishment size (measured by the number of meals served per day or the number of employees) and the amount of food an establishment recycled; establishment type and recycling behavior; and establishment type and amount recycled. The amount of food waste recycled in the state of Hawaii was estimated to be 264,000 L/day and annual food waste generation was estimated to be 336,000 tonnes.
Microphysics of Pyrocumulonimbus Clouds
NASA Technical Reports Server (NTRS)
Jensen, Eric; Ackerman, Andrew S.; Fridlind, Ann
2004-01-01
The intense heat from forest fires can generate explosive deep convective cloud systems that inject pollutants to high altitudes. Both satellite and high-altitude aircraft measurements have documented cases in which these pyrocumulonimbus clouds inject large amounts of smoke well into the stratosphere (Fromm and Servranckx 2003; Jost et al. 2004). This smoke can remain in the stratosphere, be transported large distances, and affect lower stratospheric chemistry. In addition, recent in situ measurements in pyrocumulus updrafts have shown that the high concentrations of smoke particles have significant impacts on cloud microphysical properties. Very high droplet number densities result in delayed precipitation and may enhance lightning (Andrew et al. 2004). Presumably, the smoke particles will also lead to changes in the properties of anvil cirrus produced by the deep convection, with resulting influences on cloud radiative forcing. In situ sampling near the tops of mature pyrocumulonimbus is difficult due to the high altitude and violence of the storms. In this study, we use large eddy simulations (LES) with size-resolved microphysics to elucidate physical processes in pyrocumulonimbus clouds.
Semantics-based distributed I/O with the ParaMEDIC framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balaji, P.; Feng, W.; Lin, H.
2008-01-01
Many large-scale applications simultaneously rely on multiple resources for efficient execution. For example, such applications may require both large compute and storage resources; however, very few supercomputing centers can provide large quantities of both. Thus, data generated at the compute site oftentimes has to be moved to a remote storage site for either storage or visualization and analysis. Clearly, this is not an efficient model, especially when the two sites are distributed over a wide-area network. Thus, we present a framework called 'ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing' which uses application-specific semantic information to convert the generated data to orders-of-magnitude smaller metadata at the compute site, transfer the metadata to the storage site, and re-process the metadata at the storage site to regenerate the output. Specifically, ParaMEDIC trades a small amount of additional computation (in the form of data post-processing) for a potentially significant reduction in data that needs to be transferred in distributed environments.
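The following toy sketch (illustrative only, not ParaMEDIC's code; its real metadata is application-specific) shows the trade the framework makes: when both sites hold the same input, the compute site can ship compact semantic metadata instead of the bulky output, and the storage site regenerates the output by cheap re-processing:

    # Toy illustration of the ParaMEDIC idea (not its actual implementation):
    # ship tiny semantic metadata (here, match positions) over the WAN instead
    # of the full output, and regenerate the output at the storage site.

    def compute_site(shared_input, pattern):
        # Expensive search happens here; the full output could be huge.
        hits = [i for i in range(len(shared_input)) if shared_input.startswith(pattern, i)]
        return hits                     # orders of magnitude smaller than full records

    def storage_site(shared_input, pattern, metadata):
        # Cheap post-processing regenerates the full output records locally.
        return [(i, shared_input[i:i + len(pattern)]) for i in metadata]

    text = "abracadabra" * 1000         # stand-in for a large shared dataset
    meta = compute_site(text, "abra")   # only `meta` is transferred
    records = storage_site(text, "abra", meta)
    print(len(meta), records[:2])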
Why large porphyry Cu deposits like high Sr/Y magmas?
Chiaradia, Massimo; Ulianov, Alexey; Kouzmanov, Kalin; Beate, Bernardo
2012-01-01
Porphyry systems supply most copper and significant gold to our economy. Recent studies indicate that they are frequently associated with high Sr/Y magmatic rocks, but the meaning of this association remains elusive. Understanding the association between high Sr/Y magmatic rocks and porphyry-type deposits is essential to develop genetic models that can be used for exploration purposes. Here we present results on a Pleistocene volcano of Ecuador that highlight the behaviour of copper in magmas with variable (but generally high) Sr/Y values. We provide indirect evidence for Cu partitioning into a fluid phase exsolved at depths of ~15 km from high Sr/Y (>70) andesitic magmas before sulphide saturation. This lends support to the hypothesis that large amounts of Cu- and S-bearing fluids can be accumulated into and released from a long-lived high Sr/Y deep andesitic reservoir to a shallower magmatic-hydrothermal system with the potential of generating large porphyry-type deposits. PMID:23008750
The Strasbourg Large Refractor and Dome: Significant Improvements and Failed Attempts
NASA Astrophysics Data System (ADS)
Heck, Andre
2009-01-01
Founded by the German Empire in the late 19th century, Strasbourg Astronomical Observatory featured several novelties from the start. According to Mueller (1978), the separation of observing buildings from the study area and from the astronomers' residence was a revolution in observatory construction. The instruments were, as much as possible, isolated from the vibrations of the buildings themselves. "Gas flames" and water were used to reduce temperature effects. Thus the Large Dome (ca. 11 m diameter), housing the Large Refractor (ca. 49 cm, then the largest in Germany) and covered by zinc over wood, could be cooled down by water running from the top. Reports (including by the French, who took over the observatory after World War I) are, however, essentially nonexistent on the effective usage and actual efficiency of such a system (which must have generated a significant amount of humidity locally). The paper will detail these technical attempts as well as the specificities of the instruments installed in this new observatory, intended as a showcase of German astronomy.
Conservation status of polar bears (Ursus maritimus) in relation to projected sea-ice declines
NASA Astrophysics Data System (ADS)
Laidre, K. L.; Regehr, E. V.; Akcakaya, H. R.; Amstrup, S. C.; Atwood, T.; Lunn, N.; Obbard, M.; Stern, H. L., III; Thiemann, G.; Wiig, O.
2016-12-01
Loss of Arctic sea ice due to climate change is the most serious threat to polar bears (Ursus maritimus) throughout their circumpolar range. We performed a data-based sensitivity analysis with respect to this threat by evaluating the potential response of the global polar bear population to projected sea-ice conditions. We conducted 1) an assessment of generation length for polar bears; 2) development of a standardized sea-ice metric representing important habitat characteristics for the species; and 3) population projections over three generations, using computer simulation and statistical models representing alternative relationships between sea ice and polar bear abundance. Using three separate approaches, the median percent change in mean global population size for polar bears between 2015 and 2050 ranged from -4% (95% CI = -62%, 50%) to -43% (95% CI = -76%, -20%). Results highlight the potential for large reductions in the global population if sea-ice loss continues. They also highlight the large amount of uncertainty in statistical projections of polar bear abundance and the sensitivity of projections to plausible alternative assumptions. The median probability of a reduction in the mean global population size of polar bears greater than 30% over three generations was approximately 0.71 (range 0.20-0.95). The median probability of a reduction greater than 50% was approximately 0.07 (range 0-0.35), and the probability of a reduction greater than 80% was negligible.
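A minimal sketch of the projection step, under an assumed proportional relationship between fractional sea-ice loss and abundance change (all parameters hypothetical, chosen only to mirror the magnitudes reported above):

    # Minimal Monte Carlo sketch of a three-generation projection (hypothetical
    # parameters): sample an assumed linear relationship between a sea-ice
    # metric and abundance, project over three generations, and summarize the
    # distribution of percent change.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sims = 10_000
    generation_years = 11.5                          # approx. generation length (assumed)
    ice_trend = -0.01                                # fractional ice loss per year (assumed)
    slope = rng.normal(1.0, 0.4, n_sims).clip(0)     # % abundance change per % ice change
    years = 3 * generation_years

    pct_change = 100 * ((1 + ice_trend) ** years - 1) * slope
    print(f"median change: {np.median(pct_change):.0f}%")
    print(f"P(decline > 30%): {(pct_change < -30).mean():.2f}")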
Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce
2018-01-01
Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions.
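A compact sketch of the two-stage analysis on synthetic data (using a plain random forest importance ranking as a stand-in for Boruta's all-relevant selection, and a single least-squares fit in place of full model selection and averaging):

    # Sketch of the two-stage approach on synthetic data: rank candidate drivers
    # with a random forest, keep the strongest features, then fit a linear
    # model on the selected set. Not the paper's code; data are simulated.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 500
    X = rng.normal(size=(n, 6))       # e.g. household size, fussy eaters, tenure, ...
    y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n)  # avoidable waste

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-2:]   # retain the strongest drivers
    lm = LinearRegression().fit(X[:, keep], y)
    print("selected columns:", keep, "coefficients:", lm.coef_.round(2))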
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems is proposed; it considers a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition alone.
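A toy example of the bounding phase's logic (a one-dimensional stochastic program, not the paper's transmission and generation planning model): Jensen's inequality makes the expected-value problem a lower bound for a cost that is convex in the random parameter, and sampled evaluation of any feasible plan gives an upper bound; the gap between the two is what the decomposition phase then tightens:

    # Toy illustration of the two-phase bounding idea. For min_x E[f(x, demand)]
    # with f convex in the random demand, Jensen's inequality says the
    # expected-value problem yields a lower bound; evaluating a feasible plan
    # against sampled scenarios yields an upper bound. Parameters are assumed.
    import numpy as np

    c, p = 1.0, 4.0                                   # build cost, shortfall penalty
    f = lambda x, d: c * x + p * np.maximum(d - x, 0.0)

    rng = np.random.default_rng(2)
    demand = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)   # demand scenarios

    xs = np.linspace(0.0, 60.0, 601)
    lower = min(f(x, demand.mean()) for x in xs)      # Jensen / expected-value problem
    x_hat = min(xs, key=lambda x: f(x, demand.mean()))
    upper = f(x_hat, demand).mean()                   # sampled cost of a feasible plan
    print(f"lower = {lower:.2f}, upper = {upper:.2f}")  # gap motivates decomposition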
Size and shape of Brain may be such as to take advantage of two Dimensions of Time
NASA Astrophysics Data System (ADS)
Kriske, Richard
2014-03-01
This author had previously theorized that there are two non-commuting dimensions of time. One is clock time and the other is information time (which we generally refer to as information, like spin up or spin down). When time does not commute with another dimension of time, one takes the clock time at one point in space and the information time is not known; that is different than if one takes the information time at that point and the clock time is not known: this is not explicitly about time but rather space. An example of this non-commutation is that if one knows the spin at one point and the time at one point of space, then simultaneously one knows the spin at another point of space and the time there (it is the same time); it is a restatement of the EPR paradox. As a matter of fact, two dimensions of time would prove the EPR paradox. It is obvious from that argument that if one needed to take advantage of information, then a fairly large space needs to be used, a large amount of energy needs to be generated, and a symmetry needs to be established in space (like the lobes of a brain) in order to detect the fact that Tclock and Tinfo are not commuting. This non-commuting deposits a large amount of information simultaneously in that space, and synchronizes the time there.
Rhizosphere Environment and Labile Phosphorus Release from Organic Waste-Amended Soils.
NASA Astrophysics Data System (ADS)
Dao, Thanh H.
2015-04-01
Crop residues and biofertilizers are primary sources of nutrients for organic crop production. However, soils treated with large amounts of nutrient-enriched manure have elevated phosphorus (P) levels in regions of intensive animal agriculture. Surpluses occurred in these amended soils, resulting in large pools of exchangeable inorganic P (Pi) and enzyme-labile organic P (Po), averaging 30.9 and 68.2 mg kg-1, respectively. Organic acids produced during crop residue decomposition can promote the complexation of counter-ions and decouple and release unbound Pi from metal and alkali metal phosphates. Animal manure and cover crop residues also contain large amounts of soluble organic matter and likely generate similar ligands. However, there is a high degree of heterogeneity in P spatial distribution in such amended fields, arising from variance in substrate physical forms (ranging from slurries to dried solids), composition, and diverse application methods and equipment. Distinct clusters of Pi and Po were observed, where accumulation of the latter forms was associated with high soil microbial biomass C and reduced phosphomonoesterase activity. Accurate estimates of plant requirements and of the lability of soil P pools, together with real-time plant and soil P sensing systems, are critical considerations for optimally managing manure-derived nutrients in crop production systems. An in situ X-ray fluorescence-based approach to sensing canopy and soil XRFS-P was developed to improve the yield-soil P relationship for optimal nutrient recommendations, in addition to allowing in-the-field verification of foliar P status.
Preliminary volcano-hazard assessment for Akutan Volcano, east-central Aleutian Islands, Alaska
Waythomas, Christopher F.; Power, John A.; Richter, Donald H.; McGimsey, Robert G.
1998-01-01
Akutan Volcano is a 1100-meter-high stratovolcano on Akutan Island in the east-central Aleutian Islands of southwestern Alaska. The volcano is located about 1238 kilometers southwest of Anchorage and about 56 kilometers east of Dutch Harbor/Unalaska. Eruptive activity has occurred at least 27 times since historical observations were recorded beginning in the late 1700s. Recent eruptions produced only small amounts of fine volcanic ash that fell primarily on the upper flanks of the volcano. Small amounts of ash fell on the Akutan Harbor area during eruptions in 1911, 1948, 1987, and 1989. Plumes of volcanic ash are the primary hazard associated with eruptions of Akutan Volcano and are a major hazard to all aircraft using the airfield at Dutch Harbor or approaching Akutan Island. Eruptions similar to historical Akutan eruptions should be anticipated in the future. Although unlikely, eruptions larger than those of historical time could generate significant amounts of volcanic ash, fallout, pyroclastic flows, and lahars that would be hazardous to life and property on all sectors of the volcano and other parts of the island, but especially in the major valleys that head on the volcano flanks. During a large eruption an ash cloud could be produced that may be hazardous to aircraft using the airfield at Cold Bay and the airspace downwind from the volcano. In the event of a large eruption, volcanic ash fallout could be relatively thick over parts of Akutan Island and volcanic bombs could strike areas more than 10 kilometers from the volcano.
FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.
Mader, Malte; Simon, Ronald; Kurtz, Stefan
2014-03-31
A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream-processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High-quality image export enables life scientists to easily communicate their results. Comprehensive data administration allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream-processed data support life scientists in generating hypotheses. The export of high-quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.
Anomalous Ion Heating, Intrinsic and Induced Rotation in the Pegasus Toroidal Experiment
NASA Astrophysics Data System (ADS)
Burke, M. G.; Barr, J. L.; Bongard, M. W.; Fonck, R. J.; Hinson, E. T.; Perry, J. M.; Redd, A. J.; Thome, K. E.
2014-10-01
Pegasus plasmas are initiated through either standard, MHD stable, inductive current drive or non-solenoidal local helicity injection (LHI) current drive with strong reconnection activity, providing a rich environment to study ion dynamics. During LHI discharges, a large amount of anomalous impurity ion heating has been observed, with Ti ~ 800 eV but Te < 100 eV. The ion heating is hypothesized to be a result of large-scale magnetic reconnection activity, as the amount of heating scales with increasing fluctuation amplitude of the dominant, edge localized, n = 1 MHD mode. Chordal Ti spatial profiles indicate centrally peaked temperatures, suggesting a region of good confinement near the plasma core surrounded by a stochastic region. LHI plasmas are observed to rotate, perhaps due to an inward radial current generated by the stochastization of the plasma edge by the injected current streams. H-mode plasmas are initiated using a combination of high-field side fueling and Ohmic current drive. This regime shows a significant increase in rotation shear compared to L-mode plasmas. In addition, these plasmas have been observed to rotate in the counter-Ip direction without any external momentum sources. The intrinsic rotation direction is consistent with predictions from the saturated Ohmic confinement regime. Work supported by US DOE Grant DE-FG02-96ER54375.
Taghdisi, Seyed Mohammad; Danesh, Noor Mohammad; Emrani, Ahmad Sarreshtehdar; Ramezani, Mohammad; Abnous, Khalil
2015-11-15
Cocaine is a strong central nervous system stimulant and one of the most commonly abused drugs. In this study, an electrochemical aptasensor was designed for sensitive and selective detection of cocaine, based on single-walled carbon nanotubes (SWNTs), a gold electrode and the complementary strand of the aptamer (CS). This electrochemical aptasensor inherits properties of SWNTs and gold, such as large surface area and high electrochemical conductivity, as well as the high affinity and selectivity of the aptamer toward its target and the stronger interaction of SWNTs with single-stranded DNA (ssDNA) than with double-stranded DNA (dsDNA). In the absence of cocaine, only a small amount of SWNTs binds to the aptamer-CS-modified electrode, so the electrochemical signal is weak. In the presence of cocaine, the aptamer binds to cocaine and leaves the electrode surface, so a large amount of SWNTs binds to the CS-modified electrode, generating a strong electrochemical signal. The designed electrochemical aptasensor showed good selectivity toward cocaine, with a limit of detection (LOD) as low as 105 pM. Moreover, the fabricated electrochemical aptasensor was successfully applied to detect cocaine in serum with a LOD as low as 136 pM. Copyright © 2015 Elsevier B.V. All rights reserved.
Pulsed Electron Beam Water Radiolysis for Sub-Microsecond Hydroxyl Radical Protein Footprinting
Watson, Caroline; Janik, Ireneusz; Zhuang, Tiandi; Charvátová, Olga; Woods, Robert J.; Sharp, Joshua S.
2009-01-01
Hydroxyl radical footprinting is a valuable technique for studying protein structure, but care must be taken to ensure that the protein does not unfold during the labeling process due to oxidative damage. Footprinting methods based on sub-microsecond laser photolysis of peroxide that complete the labeling process faster than the protein can unfold have been recently described; however, the mere presence of large amounts of hydrogen peroxide can also cause uncontrolled oxidation and minor conformational changes. We have developed a novel method for sub-microsecond hydroxyl radical protein footprinting using a pulsed electron beam from a 2 MeV Van de Graaff electron accelerator to generate a high concentration of hydroxyl radicals by radiolysis of water. The amount of oxidation can be controlled by buffer composition, pulsewidth, dose, and dissolved nitrous oxide gas in the sample. Our results with ubiquitin and β-lactoglobulin A demonstrate that one sub-microsecond electron beam pulse produces extensive protein surface modifications. Highly reactive residues that are buried within the protein structure are not oxidized, indicating that the protein retains its folded structure during the labeling process. Time-resolved spectroscopy indicates that the major part of protein oxidation is complete in a timescale shorter than that of large scale protein motions. PMID:19265387
A Geodynamic model for melt generation and evolution of the Mid-Continental Rift
NASA Astrophysics Data System (ADS)
Gunawardana, P. M.; Moucha, R.; Rooney, T. O.; Stein, S.; Stein, C. A.; Hansen, M.
2017-12-01
The Mid-Continent Rift System (MCRS) is a 3000-km-long failed rift system which formed within the Precambrian continent of Laurentia and nearly split North America apart about 1.1 billion years ago. The MCRS can also be classified as a Large Igneous Province (LIP) made up of two distinct magmatic phases (Stein et al., 2015). The first large-scale magmatism is characterized by a large volume of flood basalt that filled a fault-controlled basin. The second, post-rift phase consists of volcanics and sediment that were deposited in a thermally subsiding basin after faulting ended. This flood-basalt-filled rift geometry is a special characteristic of the MCRS which is not observed in other presently active or ancient rifts. Hence the MCRS's unusual nature likely reflects the combined effects of rifting and a mantle plume. We investigate this hypothesis with a geodynamic model by fully exploring the parameter space for a range of mantle potential and plume excess temperatures as well as different extension scenarios and lithospheric/plume structures. We quantify the rate and amount of melt generated and compare these with the inferred volume and history of magmatic activity. Our model suggests that for an initially thin 100 km continental lithosphere and high mantle potential temperatures above 1453 °C, a plume is not required to generate the inferred volume of flood basalt as long as the lithosphere is thinned rapidly (>3 cm/yr). We note that in this scenario, the majority of melt generation continues for a few million years after the rifting ends. However, for present-day mantle potential temperatures (~1343 °C) a mantle plume is required with an excess temperature > 100 °C to generate the required volume of flood basalt. Furthermore, if the initial continental lithosphere thickness is greater than 100 km, the required plume excess temperature to generate enough melt must be > 200 °C for the range of mantle potential temperatures we explored. References Stein, C.A., Kley, J., Stein, S., Hindle, D., and Keller, G.R., 2015, North America's Midcontinent Rift: When rift met LIP: Geosphere, v. 11, no. 5, p. 1607-1616, doi:10.1130/GES01183.1.
Jiang, Jiming
2015-04-01
Sequencing of complete plant genomes has become increasingly more routine since the advent of the next-generation sequencing technology. Identification and annotation of large amounts of noncoding but functional DNA sequences, including cis-regulatory DNA elements (CREs), have become a new frontier in plant genome research. Genomic regions containing active CREs bound to regulatory proteins are hypersensitive to DNase I digestion and are called DNase I hypersensitive sites (DHSs). Several recent DHS studies in plants illustrate that DHS datasets produced by DNase I digestion followed by next-generation sequencing (DNase-seq) are highly valuable for the identification and characterization of CREs associated with plant development and responses to environmental cues. DHS-based genomic profiling has opened a door to identify and annotate the 'dark matter' in sequenced plant genomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Can we always ignore ship-generated food waste?
Polglaze, John
2003-01-01
Considerable quantities of food waste can be generated at a rapid rate in ships, particularly those with large numbers of people onboard. By virtue of the amounts involved and its nature, food waste is potentially the most difficult to manage component of a ship's garbage stream, however, in most sea areas it may be dealt with by the simple expedient of direct discharge to sea. As a consequence, only minimal attention is paid to food waste management by many ship and port operators and advisory bodies, and there is a paucity of information in the available literature. The determination that management of ships' food waste is inconsequential is, however, incorrect in many circumstances. Disposal to sea is not always possible due to restrictions imposed by MARPOL 73/78 and other marine pollution control instruments. Effective management of food waste can be critical for ships that operate in areas where disposal is restricted or totally prohibited.
Organic pollutants removal in wastewater by heterogeneous photocatalytic ozonation.
Xiao, Jiadong; Xie, Yongbing; Cao, Hongbin
2015-02-01
Heterogeneous photocatalysis and ozonation are robust advanced oxidation processes for eliminating organic contaminants in wastewater. The combination of these two methods is carried out in order to enhance the overall mineralization of refractory organics. An apparent synergism between heterogeneous photocatalysis and ozonation has been demonstrated in many studies, giving rise to an improvement in total organic carbon removal. The present overview dissects the heterogeneous catalysts and the influences of different operational parameters, followed by discussion of the kinetics, mechanism, economic feasibility and future trends of this integrated technology. The enhanced oxidation rate mainly results from a large amount of hydroxyl radicals generated from a synergistically induced decomposition of dissolved ozone, besides superoxide ion radicals and photo-induced holes. Six reaction pathways possibly exist for the generation of hydroxyl radicals in the reaction mechanism of heterogeneous photocatalytic ozonation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Norris, Gareth
2015-01-01
The increasing use of multi-media applications, trial presentation software and computer generated exhibits (CGE) has raised questions as to the potential impact of the use of presentation technology on juror decision making. A significant amount of the commentary on the manner in which CGE exerts legal influence is largely anecdotal; empirical examinations too are often devoid of established theoretical rationalisations. This paper will examine a range of established judgement heuristics (for example, the attribution error, representativeness, simulation), in order to establish their appropriate application for comprehending legal decisions. Analysis of both past cases and empirical studies will highlight the potential for heuristics and biases to be restricted or confounded by the use of CGE. The paper will conclude with some wider discussion on admissibility, access to justice, and emerging issues in the use of multi-media in court. Copyright © 2015 Elsevier Ltd. All rights reserved.
The auditory neural network in man
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
The principles of anatomy and physiology necessary for understanding brain wave recordings made from the scalp of normal people are briefly discussed. Brain waves evoked by sounds are described and certain of their features are related to the physical aspects of the stimulus and to the psychological state of the listener. The position is taken that data obtained through scalp probes can reveal a large amount of detail about brain functioning and that analysis of such records enables detection of the response of the nervous system to an acoustic message at the moment of its inception and of the progress of the message through the brain. Brain events responsible for distinguishing between similar signals and making decisions about them appear to generate characteristic and identifiable electrical waves. Some theoretical speculation about these data is introduced with the aim of generating a more heuristic model of the functioning brain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shou-Zhe, E-mail: lisz@dlut.edu.cn; Zhang, Xin; Chen, Chuan-Jie
2014-07-15
An atmospheric-pressure microwave N2-H2 plasma torch is generated and diagnosed by optical emission spectroscopy. It is found that a large amount of N atoms and NH radicals are generated in the plasma torch, and the emission intensity of the N2+ first negative band is the strongest over the spectra. Mixing hydrogen into the nitrogen plasma torch changes the morphology of the discharge: the afterglow shrinks greatly, and the emission intensity of the N2+ first negative band decreases as more hydrogen is mixed into the nitrogen plasma. In the atmospheric-pressure microwave-induced plasma torch, hydrogen imposes a great influence on the characteristics of the nitrogen plasma through its quenching effect on the metastable state of N2.
Castanea sativa by-products: a review on added value and sustainable application.
Braga, Nair; Rodrigues, Francisca; Oliveira, M Beatriz P P
2015-01-01
Castanea sativa Mill. is a species of the family Fagaceae abundant in south Europe and Asia. The fruits (chestnut) are an added value resource in producing countries. Chestnut economic value is increasing not only for nutritional qualities but also for the beneficial health effects related with its consumption. During chestnut processing, a large amount of waste material is generated namely inner shell, outer shell and leaves. Studies on chestnut by-products revealed a good profile of bioactive compounds with antioxidant, anticarcinogenic and cardioprotective properties. These agro-industrial wastes, after valorisation, can be used by other industries, such as pharmaceutical, food or cosmetics, generating more profits, reducing pollution costs and improving social, economic and environmental sustainability. The purpose of this review is to provide knowledge about the type of chestnut by-products produced, the studies concerning its chemical composition and biological activity, and also to discuss other possible applications of these materials.
Jockusch, Steffen; Turro, Nicholas J; Banala, Srinivas; Kräutler, Bernhard
2014-02-01
Fluorescent chlorophyll catabolites (FCCs) are fleeting intermediates of chlorophyll breakdown, which is seen as an enzyme-controlled detoxification process of the chlorophylls in plants. However, some plants accumulate large amounts of persistent FCCs, such as in senescent leaves and in the peels of yellow bananas. The photophysical properties of such a persistent FCC (Me-sFCC) were investigated in detail. FCCs absorb in the near-UV spectral region and show blue fluorescence (maximum at 437 nm). The Me-sFCC fluorescence had a quantum yield of 0.21 (lifetime 1.6 ns). Photoexcited Me-sFCC undergoes intersystem crossing into the triplet state (quantum yield 0.6) and efficiently generates singlet oxygen (quantum yield 0.59). The efficient generation of singlet oxygen makes fluorescent chlorophyll catabolites phototoxic, but might also be useful as a (stress) signal and for defense of the plant tissue against infection by pathogens.
Lee, Kyu-Tae; Jang, Ji-Yun; Park, Sang Jin; Ok, Song Ah; Park, Hui Joon
2017-09-28
See-through perovskite solar cells with high efficiency and iridescent colors are demonstrated by employing a multilayer dielectric mirror. A certain amount of visible light is used for wide color gamut semitransparent color generation, which can be easily tuned by changing an angle of incidence, and a wide range of visible light is efficiently reflected back toward a photoactive layer of the perovskite solar cells by the dielectric mirror for highly efficient light-harvesting performance, thus achieving 10.12% power conversion efficiency. We also rigorously examine how the number of pairs in the multilayer dielectric mirror affects optical properties of the colored semitransparent perovskite solar cells. The described approach can open the door to a large number of applications such as building-integrated photovoltaics, self-powered wearable electronics and power-generating color filters for energy-efficient display systems.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of next-generation sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may go beyond the limits of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
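The sketch below illustrates the principle behind such compressors rather than SeqCompress itself: an order-0 statistical model of nucleotide frequencies bounds the output size an arithmetic coder can approach, often beating plain 2-bit packing on skewed sequences:

    # Not the published SeqCompress algorithm; a sketch of the principle it
    # builds on: a statistical model bounds the size an arithmetic coder can
    # reach (the order-0 entropy), often below plain 2-bit packing.
    import math
    from collections import Counter

    seq = "ACGT" * 100 + "A" * 600                    # toy, A-rich sequence
    counts = Counter(seq)
    n = len(seq)

    entropy_bits = -sum(c / n * math.log2(c / n) for c in counts.values())
    print(f"2-bit packing : {2 * n / 8:.0f} bytes")
    print(f"entropy bound : {entropy_bits * n / 8:.0f} bytes "
          f"({entropy_bits:.2f} bits/base)")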
Filamentary model in resistive switching materials
NASA Astrophysics Data System (ADS)
Jasmin, Alladin C.
2017-12-01
The need for next-generation computing devices is increasing as the demand for efficient data processing increases. The amount of data generated every second also increases, which requires large data storage devices. Oxide-based memory devices are being studied to explore new research frontiers, thanks to modern advances in nanofabrication. Various oxide materials are studied as active layers for non-volatile memory. This technology has potential application in resistive random-access memory (ReRAM) and can be easily integrated in CMOS technologies. The long-term perspective of this research field is to develop devices which mimic how the brain processes information. To realize such applications, a thorough understanding of the charge transport and switching mechanism is important. A new perspective on multistate resistive switching based on current-induced filament dynamics will be discussed. A simple equivalent circuit of the device gives quantitative information about the nature of the conducting filament at different resistance states.
Studying biodiversity: is a new paradigm really needed?
Nichols, James D.; Cooch, Evan G.; Nichols, Jonathan M.; Sauer, John R.
2012-01-01
Authors in this journal have recommended a new approach to the conduct of biodiversity science. This data-driven approach requires the organization of large amounts of ecological data, analysis of these data to discover complex patterns, and subsequent development of hypotheses corresponding to detected patterns. This proposed new approach has been contrasted with more-traditional knowledge-based approaches in which investigators deduce consequences of competing hypotheses to be confronted with actual data, providing a basis for discriminating among the hypotheses. We note that one approach is directed at hypothesis generation, whereas the other is also focused on discriminating among competing hypotheses. Here, we argue for the importance of using existing knowledge to the separate issues of (a) hypothesis selection and generation and (b) hypothesis discrimination and testing. In times of limited conservation funding, the relative efficiency of different approaches to learning should be an important consideration in decisions about how to study biodiversity.
Performance study of thermo-electric generator
NASA Astrophysics Data System (ADS)
Rohit, G.; Manaswini, D.; Kotebavi, Vinod; R, Nagaraja S.
2017-07-01
Devices like automobiles, stoves, ovens, boilers, kilns and heaters dissipate large amounts of waste heat. Since most of this waste heat goes unused, the efficiency of these devices is drastically reduced. A lot of research is being conducted on the recovery of this waste heat, among which thermoelectric generators (TEGs) are one popular method. A TEG is a semiconductor device that produces an electric potential difference when a thermal gradient develops across it. This paper deals with the study of the performance of a TEG module for different hot-surface temperatures. The performance characteristics used here are the voltage, current and power developed by the TEG. One side of the TEG was kept on a hot plate that supplied a uniform heat flux, and the other side was cooled with cold water. The results show that the output power increases significantly with increasing hot-surface temperature.
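The reported trend is what the standard TEG circuit model predicts: open-circuit voltage is proportional to the temperature difference, and power into a matched load grows quadratically with it. A minimal sketch with assumed module parameters:

    # Standard TEG circuit model: open-circuit voltage V = S * dT, and power
    # into a matched load peaks at P_max = (S*dT)**2 / (4*R_int).
    # The module parameters below are assumed, not from the paper.
    S = 0.05       # module Seebeck coefficient, V/K (assumed)
    R_int = 2.0    # module internal resistance, ohms (assumed)

    for dT in (20, 40, 60, 80):                  # hot side minus cold side, K
        V = S * dT                               # open-circuit voltage
        I = V / (2 * R_int)                      # current at matched load
        P = V**2 / (4 * R_int)                   # power into a matched load
        print(f"dT={dT:3d} K  V={V:.2f} V  I={I:.3f} A  P={P:.3f} W")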
Reid, Jeffrey G; Carroll, Andrew; Veeraraghavan, Narayanan; Dahdouli, Mahmoud; Sundquist, Andreas; English, Adam; Bainbridge, Matthew; White, Simon; Salerno, William; Buhay, Christian; Yu, Fuli; Muzny, Donna; Daly, Richard; Duyk, Geoff; Gibbs, Richard A; Boerwinkle, Eric
2014-01-29
Massively parallel DNA sequencing generates staggering amounts of data. Decreasing cost, increasing throughput, and improved annotation have expanded the diversity of genomics applications in research and clinical practice. This expanding scale creates analytical challenges: accommodating peak compute demand, coordinating secure access for multiple analysts, and sharing validated tools and results. To address these challenges, we have developed the Mercury analysis pipeline and deployed it in local hardware and the Amazon Web Services cloud via the DNAnexus platform. Mercury is an automated, flexible, and extensible analysis workflow that provides accurate and reproducible genomic results at scales ranging from individuals to large cohorts. By taking advantage of cloud computing and with Mercury implemented on the DNAnexus platform, we have demonstrated a powerful combination of a robust and fully validated software pipeline and a scalable computational resource that, to date, we have applied to more than 10,000 whole genome and whole exome samples.
New insights into the bioactivity of peptides from probiotics.
Mandal, Santi M; Pati, Bikas R; Chakraborty, Ranadhir; Franco, Octavio L
2016-06-01
Probiotics are unique bacteria that offer several therapeutic benefits to human beings when administered in optimum amounts. Probiotics are able to produce antimicrobial substances, which stimulate the body's immune responses. Here, we review in detail the anti-infective peptides derived from probiotics and their potential immunomodulatory and anti-inflammatory activities, including a major role in cross-talk between probiotics and gut microbiota under adverse conditions. Insights from the engineered cell surface of probiotics may provide novel anti-infective therapy by heterologous expression of receptor peptides of bacterial toxins. It may be possible to use antigenic peptides from viral pathogens as live vaccines. Another possibility is to generate antiviral peptides that bind directly to virus particles, while some peptides exert anti-inflammatory and anticancer effects. Some extracellular polymeric substances might serve as anti-infective peptides. These avenues of treatment have remained largely unexplored to date, despite their potential in generating powerful anti-inflammatory and anti-infective products.
A knowledge-based, concept-oriented view generation system for clinical data.
Zeng, Q; Cimino, J J
2001-04-01
Information overload is a well-known problem for clinicians who must review large amounts of data in patient records. Concept-oriented views, which organize patient data around clinical concepts such as diagnostic strategies and therapeutic goals, may offer a solution to the problem of information overload. However, although concept-oriented views are desirable, they are difficult to create and maintain. We have developed a general-purpose, knowledge-based approach to the generation of concept-oriented views and have developed a system to test our approach. The system creates concept-oriented views through automated identification of relevant patient data. The knowledge in the system is represented by both a semantic network and rules. The key relevant data identification function is accomplished by a rule-based traversal of the semantic network. This paper focuses on the design and implementation of the system; an evaluation of the system is reported separately.
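A minimal sketch of the relevant-data identification step (the concepts, relations and rules below are hypothetical, not the system's actual knowledge base): a rule-constrained traversal of a small semantic network selects which patient data items belong in a concept-oriented view:

    # Minimal sketch of rule-based semantic network traversal (hypothetical
    # knowledge base): starting from a view's root concept, follow only the
    # relations permitted by the rules and collect relevant data concepts.
    semantic_net = {                                   # concept -> (relation, concept)
        "diabetes mellitus": [("assessed-by", "HbA1c"), ("treated-by", "insulin")],
        "HbA1c": [("measured-by", "lab result")],
        "insulin": [("recorded-in", "medication order")],
    }
    allowed = {"assessed-by", "treated-by", "measured-by", "recorded-in"}  # rules

    def relevant_concepts(root):
        seen, stack = set(), [root]
        while stack:
            concept = stack.pop()
            for relation, target in semantic_net.get(concept, []):
                if relation in allowed and target not in seen:
                    seen.add(target)
                    stack.append(target)
        return seen

    patient_data = [("HbA1c", "7.9%"), ("lab result", "glucose 180 mg/dL"),
                    ("chest x-ray", "normal")]
    view = [d for d in patient_data if d[0] in relevant_concepts("diabetes mellitus")]
    print(view)   # only items linked to the diabetes view survive the filter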
Ono, Hiroyuki; Saitsu, Hirotomo; Horikawa, Reiko; Nakashima, Shinichi; Ohkubo, Yumiko; Yanagi, Kumiko; Nakabayashi, Kazuhiko; Fukami, Maki; Fujisawa, Yasuko; Ogata, Tsutomu
2018-02-02
Although partial androgen insensitivity syndrome (PAIS) is caused by attenuated responsiveness to androgens, androgen receptor gene (AR) mutations on the coding regions and their splice sites have been identified only in <25% of patients with a diagnosis of PAIS. We performed extensive molecular studies including whole exome sequencing in a Japanese family with PAIS, identifying a deep intronic variant beyond the branch site at intron 6 of AR (NM_000044.4:c.2450-42 G > A). This variant created the splice acceptor motif that was accompanied by pyrimidine-rich sequence and two candidate branch sites. Consistent with this, reverse transcriptase (RT)-PCR experiments for cycloheximide-treated lymphoblastoid cell lines revealed a relatively large amount of aberrant mRNA produced by the newly created splice acceptor site and a relatively small amount of wildtype mRNA produced by the normal splice acceptor site. Furthermore, most of the aberrant mRNA was shown to undergo nonsense mediated decay (NMD) and, if a small amount of aberrant mRNA may have escaped NMD, such mRNA was predicted to generate a truncated AR protein missing some functional domains. These findings imply that the deep intronic mutation creating an alternative splice acceptor site resulted in the production of a relatively small amount of wildtype AR mRNA, leading to PAIS.
The welfare effects of integrating renewable energy into electricity markets
NASA Astrophysics Data System (ADS)
Lamadrid, Alberto J.
The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services for both individuals and social planners with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters and no consideration of system reliability. The social planner models correctly reflect the effect of system reliability and, in the case of a Stochastic, Security-Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day and compare the effects of collocating ESS at wind sites with the same amount of storage (deferrable demand) located at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of total system costs, which include the capital cost of conventional generating capacity, the cost with deferrable demand is substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes. The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.
Estimated use of water in the United States, 1955
MacKichan, Kenneth Allen
1957-01-01
The estimated withdrawal use of water in the United States during 1955 was about 1,740,000 mgd (million gallons per day). Withdrawal use of water requires that it be removed from the ground or diverted from a stream or lake. In this report it is divided into five types: public supplies, rural, irrigation, self-supplied industrial, and waterpower. Consumptive use of water is the quantity discharged to the atmosphere or incorporated in the products of the process in which it was used. Only a small part of the water withdrawn for industry was consumed, but as much as 60 percent of the water withdrawn for irrigation may have been consumed. Of the water withdrawn in 1955 about 1,500,000 mgd was for generation of waterpower, and all other withdrawal uses amounted to only about 240,000 mgd. Surface-water sources supplied 194,000 mgd and groundwater sources supplied 46,000 mgd. The amount of water withdrawn in each State and in each of 19 geographic regions is given. The quantity of water used without being withdrawn for such purposes as navigation, recreation, and conservation of fish and wildlife was not determined. The water surface area of the reservoirs and lakes used to store water for these purposes is sufficiently large that the evaporation from this source is greater than the quantity of water withdrawn for rural and public supplies. The amount of water used for generation of waterpower has increased 36 percent since 1950. The largest increase, 43 percent, was in self-supplied industrial water. Rural use, excluding irrigation, decreased 31 percent. The upper limit of our water supply is the average annual runoff, nearly 1,200,000 mgd. The supply is depleted by the quantity of water consumed rather than by the quantity withdrawn. In 1955 about one-fourth of the water withdrawn was consumed. The amount thus consumed is about one-twentieth of the supply.
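The closing fractions can be checked directly from the report's own figures, taking "withdrawn" to exclude waterpower (which returns its water to the stream):

    # Quick consistency check of the closing fractions, in mgd:
    withdrawn_excl_power = 240_000         # public, rural, irrigation, industrial
    consumed = withdrawn_excl_power / 4    # "about one-fourth ... was consumed"
    supply = 1_200_000                     # average annual runoff
    print(consumed / supply)               # 0.05, i.e. about one-twentieth of supply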
Occupational health hazards related to informal recycling of E-waste in India: An overview.
Annamalai, Jayapradha
2015-01-01
The innovation in science and technology, coupled with changes in the lifestyle of individuals, has made an incredible change in the electronics industry, showcasing an assorted range of new products every day to the world. India too has been impacted by this digital revolution, where consumption of electronic goods grows at a rapid rate, producing a large amount of waste electrical and electronic equipment. This substantial generation of electronic waste, referred to as e-waste, accompanied by the lack of stringent environmental laws and regulations for handling hazardous e-waste, has resulted in the cropping up of a number of informal sectors. Over 95% of the e-waste is treated and processed in urban slums of the country, where untrained workers carry out dangerous procedures without personal protective equipment, which is detrimental not only to their health but also to the environment. This paper focuses on the occupational health hazards due to the informal recycling of e-waste and then proceeds to show safe disposal methods for handling the large quantities of e-waste generated in this electronic era, thus finding a sustainable solution for the formal processing of e-waste.
Identification of mutated driver pathways in cancer using a multi-objective optimization model.
Zheng, Chun-Hou; Yang, Wu; Chong, Yan-Wen; Xia, Jun-Feng
2016-05-01
New-generation high-throughput technologies, including next-generation sequencing technology, have been extensively applied to solve biological problems. As a result, large cancer genomics projects such as the Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium are producing large amounts of rich and diverse data across multiple cancer types. The identification of mutated driver genes and driver pathways from these data is a significant challenge. Genome aberrations in cancer cells can be divided into two types: random 'passenger mutations' and functional 'driver mutations'. In this paper, we introduce a Multi-objective Optimization model based on a Genetic Algorithm (MOGA) to solve the maximum weight submatrix problem, which can be employed to identify driver genes and driver pathways promoting cancer proliferation. The maximum weight submatrix problem, defined to find mutated driver pathways, is based on two specific properties, i.e., high coverage and high exclusivity. The multi-objective optimization model can adjust the trade-off between high coverage and high exclusivity. We propose an integrative model combining gene expression data and mutation data to improve the performance of the MOGA algorithm in a biological context. Copyright © 2016 Elsevier Ltd. All rights reserved.
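The sketch below illustrates the two competing objectives on a toy binary mutation matrix (patients by genes); it is not the paper's MOGA, only the coverage and exclusivity measures such a model trades off:

    # Sketch of the two objectives behind the maximum weight submatrix problem:
    # a candidate gene set should cover many patients (coverage) while each
    # covered patient is mutated in few of its genes (exclusivity).
    import numpy as np

    M = np.array([[1, 0, 0, 1],     # rows: patients, columns: genes
                  [0, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 0]])

    def objectives(gene_set):
        sub = M[:, list(gene_set)]
        covered = (sub.sum(axis=1) > 0).sum()    # patients hit at least once
        overlap = sub.sum() - covered            # extra hits = exclusivity violations
        return covered, overlap

    for genes in [(0, 1), (0, 2), (1, 2)]:
        cov, ov = objectives(genes)
        print(genes, "coverage:", cov, "overlap:", ov)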
Bai, Jing; Wang, Rui; Li, Yunpo; Tang, Yuanyuan; Zeng, Qingyi; Xia, Ligang; Li, Xuejin; Li, Jinhua; Li, Caolong; Zhou, Baoxue
2016-07-05
In this paper, a novel dual-heterojunction photocatalytic fuel cell (PFC) system based on a BiVO4/TiO2 nanotubes/FTO photoanode and a ZnO/CuO nanowires/FTO photocathode has been designed. Compared with the electrodes in PFCs reported in the earlier literature, the proposed heterojunction not only enhances visible light absorption but also offers a higher photoconversion efficiency. In addition, the nanostructured heterojunction has a large surface area that ensures a large number of active sites for organics degradation. The performance of the PFC based on the dual photoelectrodes was also studied herein. The results indicated that the PFC in this paper exhibits a superior performance, and its JV(max) reached 0.116 mW cm(-2), which is higher than that of most reported PFCs with a Pt-free photocathode. When hazardous organic compounds such as methyl orange, Congo red and methylene blue were decomposed, the degradation rates obtained were 76%, 83%, and 90%, respectively, after 80 min of reaction. The proposed heterojunction photoelectrodes provide great potential for cost-effective and high-efficiency organic pollutant degradation and electricity generation in a PFC system. Copyright © 2016 Elsevier B.V. All rights reserved.
Dams in the Amazon: Belo Monte and Brazil's hydroelectric development of the Xingu River Basin.
Fearnside, Phillip M
2006-07-01
Hydroelectric dams represent major investments and major sources of environmental and social impacts. Powerful forces surround the decision-making process on public investments in the various options for the generation and conservation of electricity. Brazil's proposed Belo Monte Dam (formerly Kararaô) and its upstream counterpart, the Altamira Dam (better known by its former name of Babaquara) are at the center of controversies on the decision-making process for major infrastructure projects in Amazonia. The Belo Monte Dam by itself would have a small reservoir area (440 km2) and large installed capacity (11,181.3 MW), but the Altamira/Babaquara Dam that would regulate the flow of the Xingu River (thereby increasing power generation at Belo Monte) would flood a vast area (6140 km2). The great impact of dams provides a powerful reason for Brazil to reassess its current policies that allocate large amounts of energy in the country's national grid to subsidized aluminum smelting for export. The case of Belo Monte and the five additional dams planned upstream (including the Altamira/Babaquara Dam) indicate the need for Brazil to reform its environmental assessment and licensing system to include the impacts of multiple interdependent projects.
Low Temperature Activation of Supported Metathesis Catalysts by Organosilicon Reducing Agents
2016-01-01
Alkene metathesis is a widely and increasingly used reaction in academia and industry because of its efficiency in terms of atom economy and its wide applicability. This reaction is notably responsible for the production of several million tons of propene annually. Such industrial processes rely on inexpensive silica-supported tungsten oxide catalysts, which operate at high temperatures (>350 °C), in contrast with the mild room temperature conditions typically used with the corresponding molecular alkene metathesis homogeneous catalysts. This large difference in temperature requirements is generally thought to arise from the difficulty of generating active sites (carbenes or metallacyclobutanes) in the classical metal oxide catalysts, a difficulty that prevents broader applicability, notably with functionalized substrates. We report here a low-temperature activation process of well-defined metal oxo surface species using organosilicon reductants, which generates a large amount of active species at only 70 °C (0.6 active sites/W). This high activity at low temperature broadens the scope of these catalysts to functionalized substrates. The activation process can also be applied to classical industrial catalysts. We provide evidence for the formation of a metallacyclopentane intermediate and propose how the active species are formed. PMID:27610418
A Hierarchical Framework for Demand-Side Frequency Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moya, Christian; Zhang, Wei; Lian, Jianming
2014-06-02
With large-scale plans to integrate renewable generation, more resources will be needed to compensate for the uncertainty associated with intermittent generation resources. Under such conditions, performing frequency control using only supply-side resources becomes not only prohibitively expensive but also technically difficult. It is therefore important to explore how a sufficient proportion of the loads could assume a routine role in frequency control to maintain the stability of the system at an acceptable cost. In this paper, a novel hierarchical decentralized framework for frequency-based load control is proposed. The framework involves two decision layers. The top decision layer determines the optimal droop gain required from the aggregated load response on each bus using a robust decentralized control approach. The second layer consists of a large number of devices, which switch probabilistically during contingencies so that the aggregated power change matches the desired droop amount according to the updated gains. The proposed framework is based on the classical nonlinear multi-machine power system model, and can deal with time-varying system operating conditions while respecting the physical constraints of individual devices. Realistic simulation results based on a 68-bus system are provided to demonstrate the effectiveness of the proposed strategy.
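To make the bottom layer concrete: if each device switches off with a probability proportional to the requested shed, the expected aggregate load drop matches the droop response set by the top layer. The Python sketch below illustrates this idea; the function name, parameters, and numbers are illustrative assumptions, not the paper's controller.

    import random

    def probabilistic_shed(device_kw, droop_gain_kw_per_hz, freq_dev_hz):
        """Per-device on/off decisions whose expected total shed matches
        the droop target droop_gain * |frequency deviation|."""
        target_kw = max(0.0, droop_gain_kw_per_hz * -freq_dev_hz)  # shed on under-frequency
        total_kw = sum(device_kw)
        p = min(1.0, target_kw / total_kw) if total_kw else 0.0
        return [random.random() < p for _ in device_kw]

    # 1000 identical 2-kW loads, a 50 kW/Hz aggregate droop gain, 0.2 Hz dip:
    loads = [2.0] * 1000
    off = probabilistic_shed(loads, 50.0, -0.2)
    shed_kw = sum(kw for kw, o in zip(loads, off) if o)
    print(f"shed ~{shed_kw:.0f} kW against a {50.0 * 0.2:.0f} kW target")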
Occupational health hazards related to informal recycling of E-waste in India: An overview
Annamalai, Jayapradha
2015-01-01
The innovation in science and technology, coupled with changes in individual lifestyles, has made an incredible change in the electronics industry, showcasing an assorted range of new products every day to the world. India too has been impacted by this digital revolution, where the consumption of electronic goods grows at a rapid rate, producing a large amount of waste electrical and electronic equipment. This substantial generation of electronic waste, referred to as e-waste, accompanied by the lack of stringent environmental laws and regulations for handling hazardous e-waste, has resulted in the cropping up of a number of informal sectors. Over 95% of the e-waste is treated and processed in the majority of the country's urban slums, where untrained workers carry out dangerous procedures without personal protective equipment, which are detrimental not only to their health but also to the environment. This paper focuses on the occupational health hazards due to the informal recycling of e-waste and then proceeds to show safe disposal methods for handling the large quantities of e-waste generated in this electronic era, thus finding a sustainable solution for the formal processing of e-waste. PMID:26023273
Global environmental effects of impact-generated aerosols: Results from a general circulation model
NASA Technical Reports Server (NTRS)
Covey, Curt; Ghan, Steven J.; Walton, John J.; Weissman, Paul R.
1989-01-01
Interception of sunlight by the high-altitude worldwide dust cloud generated by the impact of a large asteroid or comet would lead to substantial land surface cooling, according to a three-dimensional atmospheric general circulation model (GCM). This result is qualitatively similar to conclusions drawn from an earlier study that employed a one-dimensional atmospheric model, but in the GCM simulation the heat capacity of the oceans, not included in the one-dimensional model, substantially mitigates land surface cooling. On the other hand, the low heat capacity of the GCM's land surface allows temperatures to drop more rapidly in the initial stages of cooling than in the one-dimensional model study. GCM-simulated climatic changes in the asteroid/comet-winter scenario are more severe than in nuclear winter because the assumed aerosol amount is large enough to intercept all sunlight falling on Earth. Impacts of smaller objects could also lead to dramatic, though of course less severe, climatic changes, according to the GCM. An asteroid or comet impact would not lead to anything approaching complete global freezing, but it is quite reasonable to assume that impacts would dramatically alter the climate, at least in a patchy sense.
2011-01-01
Background: The recent development of next-generation sequencing technologies has made it possible to generate very large amounts of sequence data in species with little or no genome information. Combined with the large phenotypic databases available for wild and non-model species, these data will provide an unprecedented opportunity to "genomicise" ecological model organisms and establish the genetic basis of quantitative traits in natural populations. Results: This paper describes the sequencing, de novo assembly and analysis of the transcriptome of eight tissues of ten wild great tits. Approximately 4.6 million sequences and 1.4 billion bases of DNA were generated and assembled into 95,979 contigs, one third of which aligned with known Taeniopygia guttata (zebra finch) and Gallus gallus (chicken) transcripts. The majority (78%) of the remaining contigs aligned within or very close to regions of the zebra finch genome containing known genes, suggesting that they represented precursor mRNA rather than untranscribed genomic DNA. More than 35,000 single nucleotide polymorphisms and 10,000 microsatellite repeats were identified. Eleven percent of contigs were expressed in every tissue, while twenty-one percent were expressed in only one tissue. The function of contigs with strong evidence for tissue-specific expression, and of those expressed in every tissue, was inferred from their associated gene ontology (GO) terms; heart and pancreas had the highest numbers of highly tissue-specific GO terms (21.4% and 28.5%, respectively). Conclusions: In summary, the transcriptomic data generated in this study will contribute towards efforts to assemble and annotate the great tit genome, as well as providing the markers required to perform gene mapping studies in wild populations. PMID:21635727
The effect of compliant prisms on subduction zone earthquakes and tsunamis
NASA Astrophysics Data System (ADS)
Lotto, Gabriel C.; Dunham, Eric M.; Jeppson, Tamara N.; Tobin, Harold J.
2017-01-01
Earthquakes generate tsunamis by coseismically deforming the seafloor, and that deformation is largely controlled by the shallow rupture process. Therefore, in order to better understand how earthquakes generate tsunamis, one must consider the material structure and frictional properties of the shallowest part of the subduction zone, where ruptures often encounter compliant sedimentary prisms. Compliant prisms have been associated with enhanced shallow slip, seafloor deformation, and tsunami heights, particularly in the context of tsunami earthquakes. To rigorously quantify the role compliant prisms play in generating tsunamis, we perform a series of numerical simulations that directly couple dynamic rupture on a dipping thrust fault to the elastodynamic response of the Earth and the acoustic response of the ocean. Gravity is included in our simulations in the context of a linearized Eulerian description of the ocean, which allows us to model tsunami generation and propagation, including dispersion and related nonhydrostatic effects. Our simulations span a three-dimensional parameter space of prism size, prism compliance, and sub-prism friction - specifically, the rate-and-state parameter b - a that determines velocity-weakening or velocity-strengthening behavior. We find that compliant prisms generally slow rupture velocity and, for larger prisms, generate tsunamis more efficiently than subduction zones without prisms. In most but not all cases, larger, more compliant prisms cause greater amounts of shallow slip and larger tsunamis. Furthermore, shallow friction is also quite important in determining overall slip; increasing sub-prism b - a enhances slip everywhere along the fault. Counterintuitively, we find that in simulations with large prisms and velocity-strengthening friction at the base of the prism, increasing prism compliance reduces rather than enhances shallow slip and tsunami wave height.
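For readers unfamiliar with the b - a parameter: in the standard rate-and-state friction framework (stated here as background, not quoted from the paper), the steady-state friction coefficient varies with slip rate V as

    f_{ss}(V) = f_0 + (a - b)\,\ln(V/V_0),

so b - a > 0 means friction decreases with increasing slip rate (velocity-weakening, which promotes rupture), while b - a < 0 means velocity-strengthening, which resists it.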
Discriminative motif optimization based on perceptron training
Patel, Ronak Y.; Stormo, Gary D.
2014-01-01
Motivation: Generating accurate transcription factor (TF) binding site motifs from data generated using next-generation sequencing, especially ChIP-seq, is challenging. The challenge arises because a typical experiment reports a large number of sequences bound by a TF, and the length of each sequence is relatively long. Most traditional motif finders are slow in handling such enormous amounts of data. To overcome this limitation, tools have been developed that trade accuracy for speed by using heuristic discrete search strategies or limited optimization of identified seed motifs. However, such strategies may not fully use the information in the input sequences to generate motifs. Such motifs often form good seeds and can be further improved with appropriate scoring functions and rapid optimization. Results: We report a tool named discriminative motif optimizer (DiMO). DiMO takes a seed motif along with a positive and a negative database and improves the motif based on a discriminative strategy. We use the area under the receiver-operating characteristic curve (AUC) as a measure of the discriminating power of motifs, and a strategy based on perceptron training that maximizes AUC rapidly in a discriminative manner. Using DiMO on a large test set of 87 TFs from human, drosophila and yeast, we show that it is possible to significantly improve motifs identified by nine motif finders. The motifs are generated/optimized using training sets and evaluated on test sets. The AUC is improved for almost 90% of the TFs on test sets, and the magnitude of the increase is up to 39%. Availability and implementation: DiMO is available at http://stormo.wustl.edu/DiMO Contact: rpatel@genetics.wustl.edu, ronakypatel@gmail.com PMID:24369152
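The core idea, perceptron-style updates that push positive sequences to outscore negative ones (and thereby raise AUC, since AUC is the fraction of correctly ranked positive/negative pairs), can be sketched in a few lines of Python. The update rule, learning rate, and data handling below are illustrative assumptions, not DiMO's published implementation.

    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

    def best_site(pwm, seq):
        """Best window score in seq under the PWM, plus its one-hot encoding."""
        w = pwm.shape[1]
        best_score, best_hot = -np.inf, None
        for i in range(len(seq) - w + 1):
            onehot = np.zeros_like(pwm)
            for j, base in enumerate(seq[i:i + w]):
                onehot[BASES[base], j] = 1.0
            score = float((pwm * onehot).sum())
            if score > best_score:
                best_score, best_hot = score, onehot
        return best_score, best_hot

    def train(pwm, positives, negatives, lr=0.05, epochs=50):
        """Perceptron-style updates on misranked (positive, negative) pairs."""
        for _ in range(epochs):
            for pos, neg in zip(positives, negatives):
                sp, hp = best_site(pwm, pos)
                sn, hn = best_site(pwm, neg)
                if sp <= sn:  # negative outranks positive: nudge the PWM
                    pwm += lr * (hp - hn)
        return pwm

    # Toy demo: a 4x3 PWM (rows = A,C,G,T) nudged on two sequences.
    pwm = train(np.zeros((4, 3)), ["ACGTAC"], ["TTTTTT"], lr=0.1, epochs=10)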
Rooftop Solar Photovoltaic Technical Potential in the United States. A Detailed Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagnon, Pieter; Margolis, Robert; Melius, Jennifer
2016-01-01
How much energy could be generated if PV modules were installed on all of the suitable roof area in the nation? To answer this question, we first use GIS methods to process a lidar dataset and determine the amount of roof area that is suitable for PV deployment in 128 cities nationwide, containing 23% of U.S. buildings, and provide PV-generation results for a subset of those cities. We then extend the insights from that analysis to the entire continental United States. We develop two statistical models, one for small buildings and one for medium and large buildings, and populate them with geographic variables that correlate with rooftops' suitability for PV. We simulate the productivity of PV installed on the suitable roof area, and present the technical potential of PV on both small buildings and medium/large buildings for every state in the continental US. Within the 128 cities covered by lidar data, 83% of small buildings have a location suitable for a PV installation, but only 26% of the total rooftop area of small buildings is suitable for development. The sheer number of buildings in this class, however, gives small buildings the greatest technical potential. Small building rooftops could accommodate 731 GW of PV capacity and generate 926 TWh/year of PV energy, approximately 65% of rooftop PV's total technical potential. We conclude by summing the PV-generation results for all building sizes and thereby answering our original question, estimating that the total national technical potential of rooftop PV is 1,118 GW of installed capacity and 1,432 TWh of annual energy generation. This equates to 39% of total national electric-sector sales.
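The headline figures are internally consistent; a quick back-of-envelope check in Python (values from the abstract; the implied capacity factors are our own inference, not stated in the report):

    # GW * 8760 h/year = GWh/year; TWh * 1e3 = GWh
    small_gw, small_twh = 731, 926
    total_gw, total_twh = 1118, 1432

    print(small_twh / total_twh)                 # ~0.647, matching "approximately 65%"
    print(small_twh * 1e3 / (small_gw * 8760))   # implied capacity factor ~0.145
    print(total_twh * 1e3 / (total_gw * 8760))   # ~0.146 for the national total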
Won, Jongsung; Cheng, Jack C P; Lee, Ghang
2016-03-01
Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste, since construction waste is mainly generated by improper design and by unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process, based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied, with 381 and 136 design errors, respectively, detected during BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of the construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.
Flexible reserve markets for wind integration
NASA Astrophysics Data System (ADS)
Fernandez, Alisha R.
The increased interconnection of variable generation has motivated the use of improved forecasting to more accurately predict future production, with the purpose of lowering total system costs for balancing when the expected output exceeds or falls short of the actual output. Forecasts are imperfect, and the forecast errors associated with utility-scale generation from variable generators require new balancing capabilities that cannot be handled by existing ancillary services. Our work focuses on strategies for integrating large amounts of wind generation under a flex reserve market, a market that would be called upon for short-term energy services during an under- or oversupply of wind generation to maintain electric grid reliability. The flex reserve market would be utilized for time intervals that fall in between the current ancillary services markets: longer than the second-to-second energy services used for maintaining system frequency, and shorter than the reserve capacity services that are called upon for several minutes up to an hour during an unexpected contingency on the grid. In our work, the wind operator would access the flex reserve market as an energy service to correct for unanticipated forecast errors, akin to paying the participating generators to increase generation during a shortfall or paying other generators to decrease generation during an excess of wind generation. Such a market does not currently exist in the Mid-Atlantic United States. The Pennsylvania-New Jersey-Maryland Interconnection (PJM) is the Mid-Atlantic electric grid case study used to examine whether a flex reserve market can be utilized for integrating large capacities of wind generation in a low-cost manner for those providing, purchasing and dispatching these short-term balancing services. The following work consists of three studies. The first examines the ability of a hydroelectric facility to provide short-term forecast error balancing services via a flex reserve market, identifying the operational constraints that inhibit a multi-purpose dam facility from meeting the desired flexible energy demand. The second study transitions from the hydroelectric facility as the decision maker providing flex reserve services to the wind plant as the decision maker purchasing these services. In this second study, methods for allocating the costs of flex reserve services under different wind policy scenarios are explored that aggregate farms into different groupings to identify the least-cost strategy for balancing the costs of hourly day-ahead forecast errors. The least-cost strategy may differ for an individual wind plant and for the system operator, noting that it is highly sensitive to cost allocation and aggregation schemes. The latter may also cause cross-subsidies in the cost of balancing wind forecast errors among the different wind farms. The third study builds on the second, with the objective of quantifying the amount of flex reserves needed for balancing future forecast errors using a probabilistic approach (quantile regression) to estimating future forecast errors. The results further examine the usefulness of separate flexible markets PJM could use for balancing oversupply and undersupply events, similar to the regulation up and down markets used in Europe. These three studies provide the following results and insights for large-scale wind integration, using actual PJM wind farm data that describe the markets and generators within PJM.
• Chapter 2 provides an in-depth analysis of the valuable, yet highly constrained, energy services that multi-purpose hydroelectric facilities can provide, though the opportunity cost of providing these services can result in large deviations from the reservoir policies with minimal revenue gain compared to dedicating the whole of the dam's capacity to providing day-ahead, baseload generation.
• Chapter 3 quantifies the system-wide efficiency gains and the distributive effects of PJM's decision to act as a single balancing authority, meaning that it procures ancillary services across its entire footprint simultaneously. This can be contrasted with the Midwest Independent System Operator (MISO), which has several balancing authorities operating under its footprint.
• Chapter 4 uses probabilistic methods to estimate the uncertainty in the forecast errors and the quantity of energy needed to balance these forecast errors at a given percentile (a minimal sketch follows this list). Current practice is to use a point forecast that describes the conditional expectation of the dependent variable at each time step. The approach here uses quantile regression to describe the relationship between the independent variable and the conditional quantiles (equivalently, the percentiles) of the dependent variable. An estimate of the conditional density is performed, which captures how the sign of the forecast errors (negative for too much wind generation, positive for too little) relates to the wind power forecast. This additional knowledge may be incorporated into the decision process to more accurately schedule day-ahead wind generation bids, and provides an example of using separate markets for balancing an oversupply and an undersupply of generation. Such methods are currently used for coordinating large footprints of wind generation in Europe.
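As a minimal sketch of the quantile-regression approach in Chapter 4, the Python snippet below fits conditional quantiles of a synthetic forecast-error series as a function of the forecast itself; the feature set, quantile levels, and data are illustrative assumptions, not PJM's actual series or the dissertation's model.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    forecast = rng.uniform(0, 1, 5000)  # normalized day-ahead wind forecast
    # Synthetic errors whose spread peaks mid-range, a common pattern for wind:
    error = rng.normal(0, 0.05 + 0.15 * forecast * (1 - forecast), 5000)

    X = forecast.reshape(-1, 1)
    quantiles = {}
    for q in (0.05, 0.50, 0.95):
        model = GradientBoostingRegressor(loss="quantile", alpha=q)
        model.fit(X, error)
        quantiles[q] = model.predict(X)

    # Up-reserve covers undersupply (positive errors) to the 95th percentile;
    # down-reserve covers oversupply (negative errors) to the 5th percentile.
    up_reserve_need = quantiles[0.95]
    down_reserve_need = -quantiles[0.05]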
Huang, Hsin-Tao; Tsai, Chuang-Chuang; Huang, Yi-Pai
2010-08-01
The UV-excited flat lighting (UFL) technique differs from conventional fluorescent lamp or LED illumination. It uses a remote phosphor film to convert UV light to visible light, achieving high brightness and planar, uniform illumination. In particular, UFL offers a compact size, low power consumption, and symmetrical dual-sided illumination. Additionally, UFL utilizes a thermal radiation mechanism to release the large amount of heat generated upon illumination without thermal accumulation. These characteristics of the UFL technique can motivate a wide range of lighting applications in thin-film transistor LCD backlighting and general lighting.
Onset of thermally induced gas convection in mine wastes
Lu, N.; Zhang, Y.
1997-01-01
A mine waste dump in which active oxidation of pyritic materials occurs can generate a large amount of heat to form convection cells. We analyze the onset of thermal convection in a two-dimensional, infinite horizontal layer of waste rock filled with moist gas, with the top surface of the waste dump open to the atmosphere and the bedrock beneath the waste dump forming a horizontal and impermeable boundary. Our analysis shows that the thermal regime of a waste rock system depends heavily on the atmospheric temperature, the strength of the heat source and the vapor pressure. © 1997 Elsevier Science Ltd. All rights reserved.
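For context, onset-of-convection analyses of this porous-layer type are governed by a porous-medium Rayleigh number; a standard Horton-Rogers-Lapwood form (given as background, not quoted from the paper) is

    Ra = \frac{\rho_0\, g\, \beta\, \Delta T\, K H}{\mu\, \kappa_e},

where K is the permeability, H the layer thickness, \Delta T the temperature difference across the layer, \beta the thermal expansion coefficient of the gas, \mu its dynamic viscosity, and \kappa_e the effective thermal diffusivity of the medium. Convection sets in when Ra exceeds a critical value that depends on the boundary conditions (4\pi^2 for two impermeable, isothermal boundaries; lower for an open top such as the one considered here).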
Estimates of Internal Tide Energy Fluxes from Topex/Poseidon Altimetry: Central North Pacific
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Cartwright, David E.; Smith, David E. (Technical Monitor)
2000-01-01
Energy fluxes for first-mode M2 internal tides are deduced throughout the central North Pacific Ocean from Topex/Poseidon satellite altimeter data. Temporally coherent internal tide signals in the altimetry, combined with climatological hydrographic data, determine the tidal displacements, pressures, and currents at depth, which yield power transmission rates. For a variety of reasons the deduced rates should be considered lower bounds. Internal tides were found to emanate from several large bathymetric structures, especially the Hawaiian Ridge, where the integrated flux amounts to about six gigawatts. Internal tides are generated at the Aleutian Trench near 172 deg west and propagate southwards nearly 2000 km.
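As background on how such power transmission rates are conventionally defined (this is the standard form, not quoted from the paper), the depth-integrated baroclinic energy flux is

    \mathbf{F} = \int_{-H}^{0} \langle p'\, \mathbf{u}' \rangle \, dz,

where p' is the perturbation pressure, \mathbf{u}' the baroclinic horizontal velocity, and the angle brackets denote an average over a tidal cycle.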
Shumaker, L; Fetterolf, D E; Suhrie, J
1998-01-01
The recent availability of inexpensive document scanners and optical character recognition technology has created the ability to process surveys in large numbers with a minimum of operator time. Programs that allow computer entry of such scanned questionnaire results directly into PC-based relational databases have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys and a variety of other evidence-gathering tools have been deployed.
The impact of image storage organization on the effectiveness of PACS.
Hindel, R
1990-11-01
A picture archiving and communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost-effective but slow, while very fast systems, like frame buffers and parallel transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic into workstations. At the points of contact with medical professionals, responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements, based on a clearly defined PACS concept.
ASCOT: A Collaborative Platform for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Marcos, D.; Connolly, A. J.; Krughoff, K. S.; Smith, I.; Wallace, S. C.
2012-09-01
Digital networks are changing the way knowledge is created, structured, curated, consumed, archived and referenced. Projects like Wikipedia, GitHub or Galaxy Zoo have shown the potential of online communities to develop and communicate ideas. ASCOT is a web-based framework that facilitates collaboration among astronomers, providing a simple way to share, explore, interact with and analyze large amounts of data from the broad range of sources available through the Virtual Observatory (VO). Designed with a strong emphasis on usability, ASCOT takes advantage of the latest generation of web standards and cloud technologies to implement an extendable and customizable stack of web tools and services.
Safeguarding Structural Data Repositories against Bad Apples.
Minor, Wladek; Dauter, Zbigniew; Helliwell, John R; Jaskolski, Mariusz; Wlodawer, Alexander
2016-02-02
Structural biology research generates large amounts of data, some of which are deposited in public databases or repositories, but a substantial remainder never becomes available to the scientific community. In addition, some of the deposited data contain more or less serious errors that may bias the results of data mining. Thorough analysis and discussion of these problems is needed to ameliorate this situation. This perspective is an attempt to propose some solutions and to encourage both further discussion and action on the part of the relevant organizations, in particular the PDB and various bodies of the International Union of Crystallography. Copyright © 2016 Elsevier Ltd. All rights reserved.
Anzai, Kazunori; Ban, Nobuhiko; Ozawa, Toshihiko; Tokonami, Shinji
2012-01-01
On March 11, 2011, an earthquake led to major problems at the Fukushima Daiichi Nuclear Power Plant. A 14-m high tsunami triggered by the earthquake disabled all AC power to Units 1, 2, and 3 of the Power Plant, and carried off fuel tanks for emergency diesel generators. Despite many efforts, cooling systems did not work and hydrogen explosions damaged the facilities, releasing a large amount of radioactive material into the environment. In this review, we describe the environmental impact of the nuclear accident, and the fundamental biological effects, acute and late, of the radiation. Possible medical countermeasures to radiation exposure are also discussed.