Sample records for "require large amounts"

  1. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  2. Low-authority control synthesis for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1982-01-01

    The control of vibrations of large space structures by distributed sensors and actuators is studied. A procedure is developed for calculating the feedback loop gains required to achieve specified amounts of damping. For moderate damping (Low Authority Control) the procedure is purely algebraic, but it can be applied iteratively when larger amounts of damping are required, and it is generalized for arbitrary time-invariant systems.

  3. Use of tropical maize for bioethanol production

    USDA-ARS?s Scientific Manuscript database

    Tropical maize is an alternative energy crop being considered as a feedstock for bioethanol production in the North Central and Midwest United States. Tropical maize is advantageous because it produces large amounts of soluble sugars in its stalks, creates a large amount of biomass, and requires lo...

  4. Energy Storage Requirements for Achieving 50% Penetration of Solar Photovoltaic Energy in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Margolis, Robert

    2016-09-01

    We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.

  5. Energy Storage Requirements for Achieving 50% Solar Photovoltaic Energy Penetration in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Margolis, Robert

    2016-08-01

    We estimate the storage required to enable PV penetration up to 50% in California (with renewable penetration over 66%), and we quantify the complex relationships among storage, PV penetration, grid flexibility, and PV costs due to increased curtailment. We find that the storage needed depends strongly on the amount of other flexibility resources deployed. With very low-cost PV (three cents per kilowatt-hour) and a highly flexible electric power system, about 19 gigawatts of energy storage could enable 50% PV penetration with a marginal net PV levelized cost of energy (LCOE) comparable to the variable costs of future combined-cycle gas generators under carbon constraints. This system requires extensive use of flexible generation, transmission, demand response, and electrifying one quarter of the vehicle fleet in California with largely optimized charging. A less flexible system, or more expensive PV, would require significantly greater amounts of storage. The amount of storage needed to support very large amounts of PV might fit within a least-cost framework driven by declining storage costs and reduced storage-duration needs due to high PV penetration.

  6. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  7. Autonomous Object Characterization with Large Datasets

    DTIC Science & Technology

    2015-10-18

    desk, where a substantial amount of effort is required to transform raw photometry into a data product, minimizing the amount of time the analyst has...were used to explore concepts in satellite characterization and satellite state change. The first algorithm provides real-time stability estimation... Timely and effective space object (SO) characterization is a challenge, and requires advanced data processing techniques. Detection and identification

  8. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  9. Evaluation of Soil Venting Application

    EPA Pesticide Factsheets

    The ability of soil venting to inexpensively remove large amounts of volatile organic compounds (VOCs) from contaminated soils is well established. However, the time required using venting to remediate soils to low contaminant levels often required by..

  10. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
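
    The abstract's key performance idea is overlapping computation with communication. The fragment below is a minimal, hypothetical Python sketch of that general pattern, using a thread pool to prefetch the next block of data while the current block is processed; it only illustrates the overlap idea and is not HPX or the authors' parquet solver (block_count, load_block, and process_block are made-up names).

    ```python
    # Minimal sketch of overlapping communication (data loading) with computation.
    # Hypothetical stand-ins: load_block() fetches remote data, process_block() computes on it.
    from concurrent.futures import ThreadPoolExecutor

    def load_block(i):
        # placeholder for a network/disk transfer of block i
        return [i] * 1000

    def process_block(block):
        # placeholder for the numerical work on one block
        return sum(block)

    def run(block_count=8):
        results = []
        with ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(load_block, 0)              # start first transfer
            for i in range(block_count):
                block = future.result()                      # wait for current block
                if i + 1 < block_count:
                    future = pool.submit(load_block, i + 1)  # prefetch next block...
                results.append(process_block(block))         # ...while computing on this one
        return results

    if __name__ == "__main__":
        print(run())
    ```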

  11. Empirical relationships between tree fall and landscape-level amounts of logging and fire

    PubMed Central

    Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C.

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape. PMID:29474487

  12. Empirical relationships between tree fall and landscape-level amounts of logging and fire.

    PubMed

    Lindenmayer, David B; Blanchard, Wade; Blair, David; McBurney, Lachlan; Stein, John; Banks, Sam C

    2018-01-01

    Large old trees are critically important keystone structures in forest ecosystems globally. Populations of these trees are also in rapid decline in many forest ecosystems, making it important to quantify the factors that influence their dynamics at different spatial scales. Large old trees often occur in forest landscapes also subject to fire and logging. However, the effects on the risk of collapse of large old trees of the amount of logging and fire in the surrounding landscape are not well understood. Using an 18-year study in the Mountain Ash (Eucalyptus regnans) forests of the Central Highlands of Victoria, we quantify relationships between the probability of collapse of large old hollow-bearing trees at a site and the amount of logging and the amount of fire in the surrounding landscape. We found the probability of collapse increased with an increasing amount of logged forest in the surrounding landscape. It also increased with a greater amount of burned area in the surrounding landscape, particularly for trees in highly advanced stages of decay. The most likely explanation for elevated tree fall with an increasing amount of logged or burned areas in the surrounding landscape is change in wind movement patterns associated with cutblocks or burned areas. Previous studies show that large old hollow-bearing trees are already at high risk of collapse in our study area. New analyses presented here indicate that additional logging operations in the surrounding landscape will further elevate that risk. Current logging prescriptions require the protection of large old hollow-bearing trees on cutblocks. We suggest that efforts to reduce the probability of collapse of large old hollow-bearing trees on unlogged sites will demand careful landscape planning to limit the amount of timber harvesting in the surrounding landscape.

  13. A MBD-seq protocol for large-scale methylome-wide studies with (very) low amounts of DNA.

    PubMed

    Aberg, Karolina A; Chan, Robin F; Shabalin, Andrey A; Zhao, Min; Turecki, Gustavo; Staunstrup, Nicklas Heine; Starnawska, Anna; Mors, Ole; Xie, Lin Y; van den Oord, Edwin Jcg

    2017-09-01

    We recently showed that, after optimization, our methyl-CpG binding domain sequencing (MBD-seq) application approximates the methylome-wide coverage obtained with whole-genome bisulfite sequencing (WGB-seq), but at a cost that enables adequately powered large-scale association studies. A prior drawback of MBD-seq is the relatively large amount of genomic DNA (ideally >1 µg) required to obtain high-quality data. Biomaterials are typically expensive to collect, provide a finite amount of DNA, and may simply not yield sufficient starting material. The ability to use low amounts of DNA will increase the breadth and number of studies that can be conducted. Therefore, we further optimized the enrichment step. With this low starting material protocol, MBD-seq performed equally well, or better, than the protocol requiring ample starting material (>1 µg). Using only 15 ng of DNA as input, there is minimal loss in data quality, achieving 93% of the coverage of WGB-seq (with standard amounts of input DNA) at similar false positive rates. Furthermore, across a large number of genomic features, the MBD-seq methylation profiles closely tracked those observed for WGB-seq with even slightly larger effect sizes. This suggests that MBD-seq provides similar information about the methylome and classifies methylation status somewhat more accurately. Performance decreases with <15 ng DNA as starting material but, even with as little as 5 ng, MBD-seq still achieves 90% of the coverage of WGB-seq with comparable genome-wide methylation profiles. Thus, the proposed protocol is an attractive option for adequately powered and cost-effective methylome-wide investigations using (very) low amounts of DNA.

  14. Reliability-based optimization design of geosynthetic reinforced road embankment.

    DOT National Transportation Integrated Search

    2014-07-01

    Road embankments are typically large earth structures, the construction of which requires large amounts of competent fill soil. In order to limit costs, the utilization of geosynthetics in road embankments allows for construction of steep slopes ...

  15. A Cost Benefit Analysis of Emerging LED Water Purification Systems in Expeditionary Environments

    DTIC Science & Technology

    2017-03-23

    the initial contingency response phase, ROWPUs are powered by large generators which require relatively large amounts of fossil fuels. The amount of...they attract and cling together forming a larger particle (Chem Treat, 2016). Flocculation is the addition of a polymer to water that clumps...smaller particles together to form larger particles. The idea for both methods is that larger particles will either settle out of or be removed from the

  16. EVALUATION OF SOIL VENTING APPLICATION

    EPA Science Inventory

    The ability of soil venting to inexpensively remove large amounts of volatile organic compounds (VOCs) from contaminated soils is well established. However, the time required using venting to remediate soils to low contaminant levels often required by state and federal regulators...

  17. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden prevents their widespread adoption. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
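
    The coarse-to-fine idea behind such multiresolution schemes can be pictured with a toy 1D deblurring problem: iterate first on a restricted (half-resolution) version of the system, then prolongate the result and refine at full resolution. The sketch below is a generic numpy illustration under those assumptions (Landweber iterations, a Galerkin-style coarse operator); it is not the paper's tomography algorithm or GPU implementation.

    ```python
    # Toy multiresolution warm start for an iterative linear reconstruction (1D deblurring).
    import numpy as np

    def blur_operator(n, sigma=2.0, width=9):
        """Dense banded Toeplitz matrix for a toy Gaussian blur."""
        t = np.arange(width) - width // 2
        k = np.exp(-0.5 * (t / sigma) ** 2)
        k /= k.sum()
        A = np.zeros((n, n))
        for i in range(n):
            for off, kj in zip(t, k):
                j = i + off
                if 0 <= j < n:
                    A[i, j] = kj
        return A

    def landweber(A, b, x0, iters, step=1.0):
        """Plain Landweber iteration: x <- x + step * A^T (b - A x)."""
        x = x0.copy()
        for _ in range(iters):
            x += step * A.T @ (b - A @ x)
        return x

    n = 256
    rng = np.random.default_rng(0)
    x_true = np.zeros(n)
    x_true[60:90] = 1.0
    x_true[150:160] = 2.0
    A = blur_operator(n)
    b = A @ x_true + 0.01 * rng.standard_normal(n)

    # Coarse level: average pairs of samples (restriction R) and inject back (prolongation P).
    R = np.kron(np.eye(n // 2), np.array([[0.5, 0.5]]))  # shape (n/2, n)
    P = 2.0 * R.T                                        # piecewise-constant prolongation
    A_c = R @ A @ P                                      # Galerkin-style coarse operator

    x_coarse = landweber(A_c, R @ b, np.zeros(n // 2), iters=200)   # cheap coarse iterations
    x_fine = landweber(A, b, P @ x_coarse, iters=50)                # short full-resolution refinement
    print(float(np.linalg.norm(x_fine - x_true) / np.linalg.norm(x_true)))
    ```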

  18. Effect of noble gases on an atmospheric greenhouse /Titan/.

    NASA Technical Reports Server (NTRS)

    Cess, R.; Owen, T.

    1973-01-01

    Several models for the atmosphere of Titan have been investigated, taking into account various combinations of neon and argon. The investigation shows that the addition of large amounts of Ne and/or Ar will substantially reduce the hydrogen abundance required for a given greenhouse effect. The fact that a large amount of neon should be present if the atmosphere is a relic of the solar nebula is an especially attractive feature of the models, because it is hard to justify appropriate abundances of other enhancing agents.

  19. 75 FR 54059 - Extension of Filing Accommodation for Static Pool Information in Filings With Respect to Asset...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... information could include a significant amount of statistical information that would be difficult to file... required static pool information. Given the large amount of statistical information involved, commentators....; and 18 U.S.C. 1350. * * * * * 2. Amend Sec. 232.312 paragraph (a) introductory text by removing...

  20. Sidewall-box airlift pump provides large flows for aeration, CO2 stripping, and water rotation in large dual-drain circular tanks

    USDA-ARS?s Scientific Manuscript database

    Conventional gas transfer technologies for aquaculture systems occupy a large amount of space, require a considerable capital investment, and can contribute to high electricity demand. In addition, diffused aeration in a circular culture tank can interfere with the hydrodynamics of water rotation a...

  1. Expression, purification, and characterization of almond (Prunus dulcis) allergen Pru du 4

    USDA-ARS?s Scientific Manuscript database

    Biochemical characterizations of food allergens are required for understanding the allergenicity of food allergens. Such studies require a relatively large amount of highly purified allergens. Profilins from numerous species are known to be allergens, including food allergens, such as almond (Prunus...

  2. Really big data: Processing and analysis of large datasets

    USDA-ARS?s Scientific Manuscript database

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  3. 75 FR 60333 - Hazardous Material; Miscellaneous Packaging Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... minimum thickness requirements for remanufactured steel and plastic drums; (2) reinstate the previous... communication problem for emergency responders in that it may interfere with them discovering a large amount of... prescribed in Sec. 178.2(c). D. Minimum Thickness Requirement for Remanufactured Steel and Plastic Drums...

  4. Species Profiles: Life Histories and Environmental Requirements of Coastal Fishes and Invertebrates (Mid-Atlantic): Atlantic Menhaden

    DTIC Science & Technology

    1989-08-01

    metabolism, and growth. Low salinities decreased survival at temperatures below 5 °C... Important Atlantic menhaden predators include bluefish (Pomatomus...large amounts of energy and materials. They are also important prey for large game fishes such as bluefish (Pomatomus saltatrix), striped bass

  5. Your Guide to Smart Year-Round Fundraising.

    ERIC Educational Resources Information Center

    Gensheimer, Cynthia Francis

    1994-01-01

    Describes three seasonal fund-raising projects that can be linked with the curriculum and require only a few parent volunteers. The projects are profitable without requiring large amounts of effort. They include selling holiday cards, conducting read-a-thons, and participating in save-the-rainforest group sales. Tips for holding successful fund…

  6. Crew size affects fire fighting efficiency: A progress report on time studies of the fire fighting job.

    Treesearch

    Donald N. Matthews

    1940-01-01

    Fire fighting is still largely a hand-work job in the heavy cover and fuel conditions and rugged topography of the Douglas fir region, in spite of recent advances that have been made in the use of machinery. Controlling a fire in this region requires immense amounts of work per unit of fire perimeter, so that large numbers of men are required to attack all but the...

  7. An Internet of Things platform architecture for supporting ambient assisted living environments.

    PubMed

    Tsirmpas, Charalampos; Kouris, Ioannis; Anastasiou, Athanasios; Giokas, Kostas; Iliopoulou, Dimitra; Koutsouris, Dimitris

    2017-01-01

    Internet of Things (IoT) is the logical further development of today's Internet, enabling a huge number of devices to communicate, compute, sense and act. IoT sensors placed in Ambient Assisted Living (AAL) environments enable context awareness and allow the support of the elderly in their daily routines, ultimately allowing an independent and safe lifestyle. The vast amounts of data that are generated and exchanged between the IoT nodes require innovative context modeling approaches that go beyond currently used models. The current paper presents and evaluates an open interoperable platform architecture in order to utilize the technical characteristics of IoT and handle the large amount of generated data, as a solution to the technical requirements of AAL applications.

  8. Assays for the activities of polyamine biosynthetic enzymes using intact tissues

    Treesearch

    Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha

    1999-01-01

    Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...

  9. Testing Effect and Complex Comprehension in a Large Introductory Undergraduate Biology Course

    ERIC Educational Resources Information Center

    Pagliarulo, Christopher L.

    2011-01-01

    Traditional undergraduate biology courses are content intensive, requiring students to understand and remember large amounts of information in short periods of time. Yet most students maintain little of the material encountered during their education. Poor knowledge retention is a main cause of academic failure and high undergraduate attrition…

  10. Enabling PBPK model development through the application of freely available techniques for the creation of a chemically-annotated collection of literature

    EPA Science Inventory

    The creation of Physiologically Based Pharmacokinetic (PBPK) models for a new chemical requires the selection of an appropriate model structure and the collection of a large amount of data for parameterization. Commonly, a large proportion of the needed information is collected ...

  11. REDUCING THE WASTE STREAM: BRINGING ENVIRONMENTAL, ECONOMICAL, AND EDUCATIONAL COMPOSTING TO A LIBERAL ARTS COLLEGE

    EPA Science Inventory

    The Northfield, Minnesota area contains three institutions that produce a large amount of compostable food waste. St. Olaf College uses a large-scale on-site composting machine that effectively transforms the food waste to compost, but the system requires an immense start-up c...

  12. User Oriented Techniques to Support Interaction and Decision Making with Large Educational Databases

    ERIC Educational Resources Information Center

    Hartley, Roger; Almuhaidib, Saud M. Y.

    2007-01-01

    Information Technology is developing rapidly and providing policy/decision makers with large amounts of information that require processing and analysis. Decision support systems (DSS) aim to provide tools that not only help such analyses, but enable the decision maker to experiment and simulate the effects of different policies and selection…

  13. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  14. Plasma issues associated with the use of electrodynamic tethers

    NASA Technical Reports Server (NTRS)

    Hastings, D. E.

    1986-01-01

    The use of an electrodynamic tether to generate power or thrust on the space station raises important plasma issues associated with the current flow. In addition to the issue of current closure through the space station, high power tethers (equal to or greater than tens of kilowatts) require the use of plasma contactors to enhance the current flow. They will generate large amounts of electrostatic turbulence in the vicinity of the space station. This is because the contactors work best when a large amount of current driven turbulence is excited. Current work is reviewed and future directions suggested.

  15. Models of resource planning during formation of calendar construction plans for erection of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Pocebneva, Irina; Belousov, Vadim; Fateeva, Irina

    2018-03-01

    This article provides a methodical description of resource-time analysis for a wide range of requirements imposed on resource consumption processes in scheduling tasks during the construction of high-rise buildings and facilities. The core of the proposed approach is the resource models being determined. The generalized network models are the elements of those models, the number of which can be too large to carry out the analysis of each element. Therefore, the problem is to approximate the original resource model by simpler time models, when their number is not very large.

  16. Effects of consumption of choline and lecithin on neurological and cardiovascular systems.

    PubMed

    Wood, J L; Allison, R G

    1982-12-01

    This report concerns possible adverse health effects and benefits that might result from consumption of large amounts of choline, lecithin, or phosphatidylcholine. Indications from preliminary investigations that administration of choline or lecithin might alleviate some neurological disturbances, prevent hypercholesteremia and atherosclerosis, and restore memory and cognition have resulted in much research and public interest. Symptoms of tardive dyskinesia and Alzheimer's disease have been ameliorated in some patients and varied responses have been observed in the treatment of Gilles de la Tourette's disease, Friedreich's ataxia, levodopa-induced dyskinesia, mania, Huntington's disease, and myasthenic syndrome. Further clinical trials, especially in conjunction with cholinergic drugs, are considered worthwhile but will require sufficient amounts of pure phosphatidylcholine. The public has access to large amounts of commercial lecithin. Because high intakes of lecithin or choline produce acute gastrointestinal distress, sweating, salivation, and anorexia, it is improbable that individuals will incur lasting health hazards from self-administration of either compound. Development of depression or supersensitivity of dopamine receptors and disturbance of the cholinergic-dopaminergic-serotinergic balance is a concern with prolonged, repeated intakes of large amounts of lecithin.

  17. DEP : a computer program for evaluating lumber drying costs and investments

    Treesearch

    Stewart Holmes; George B. Harpole; Edward Bilek

    1983-01-01

    The DEP computer program is a modified discounted cash flow computer program designed for analysis of problems involving economic analysis of wood drying processes. Wood drying processes are different from other processes because of the large amounts of working capital required to finance inventories, and because of relatively large shares of costs charged to inventory...

  18. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  19. Integration of Digital Technology and Innovative Strategies for Learning and Teaching Large Classes: A Calculus Case Study

    ERIC Educational Resources Information Center

    Vajravelu, Kuppalapalle; Muhs, Tammy

    2016-01-01

    Successful science and engineering programs require proficiency and dynamics in mathematics classes to enhance the learning of complex subject matter with a sufficient amount of practical problem solving. Improving student performance and retention in mathematics classes requires inventive approaches. At the University of Central Florida (UCF) the…

  20. Laboratory Exercise to Evaluate Hay Preservatives.

    ERIC Educational Resources Information Center

    McGraw, R. L.; And Others

    1990-01-01

    Presented is a laboratory exercise designed to demonstrate the effects of moisture on hay preservation products in a manner that does not require large amounts of equipment or instructor time. Materials, procedures, and probable results are discussed. (CW)

  1. Bluetooth-based travel time/speed measuring systems development.

    DOT National Transportation Integrated Search

    2010-06-01

    Agencies in the Houston region have traditionally used toll tag readers to provide travel times on freeways and High Occupancy Vehicle (HOV) lanes, but these systems require large amounts of costly and physically invasive infrastructure. Bluetoot...

  2. Use of cloud computing in biomedicine.

    PubMed

    Sobeslav, Vladimir; Maresova, Petra; Krejcar, Ondrej; Franca, Tanos C C; Kuca, Kamil

    2016-12-01

    Nowadays, biomedicine is characterised by a growing need for processing of large amounts of data in real time. This leads to new requirements for information and communication technologies (ICT). Cloud computing offers a solution to these requirements and provides many advantages, such as cost savings, elasticity and scalability of using ICT. The aim of this paper is to explore the concept of cloud computing and the related use of this concept in the area of biomedicine. Authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into infrastructure, platform and service layer, and a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, the high-end computing paradigm of cloud computing aspects is analysed. Finally, the potential and current use of applications in scientific research of this technology in biomedicine is discussed.

  3. A machine learning model with human cognitive biases capable of learning from small and biased datasets.

    PubMed

    Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro

    2018-05-09

    Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
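
    For context, one of the baseline methods named in this abstract (naïve Bayes for spam classification) can be sketched in a few lines with scikit-learn. This is a generic toy illustration of that baseline on invented messages, not the authors' cognitive-bias model or their data.

    ```python
    # Toy naive Bayes spam classifier (one of the baseline methods mentioned in the abstract).
    # The example messages and labels are invented for illustration only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = [
        "win a free prize now", "cheap loans click here",             # spam
        "meeting moved to 3pm", "please review the attached report",  # ham
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)   # bag-of-words features
    clf = MultinomialNB().fit(X, labels)

    test = vectorizer.transform(["free prize inside", "see the report before the meeting"])
    print(clf.predict(test))                 # expected: [1 0]
    ```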

  4. Cutting Edge: Protection by Antiviral Memory CD8 T Cells Requires Rapidly Produced Antigen in Large Amounts.

    PubMed

    Remakus, Sanda; Ma, Xueying; Tang, Lingjuan; Xu, Ren-Huan; Knudson, Cory; Melo-Silva, Carolina R; Rubio, Daniel; Kuo, Yin-Ming; Andrews, Andrew; Sigal, Luis J

    2018-05-15

    Numerous attempts to produce antiviral vaccines by harnessing memory CD8 T cells have failed. A barrier to progress is that we do not know what makes an Ag a viable target of protective CD8 T cell memory. We found that in mice susceptible to lethal mousepox (the mouse homolog of human smallpox), a dendritic cell vaccine that induced memory CD8 T cells fully protected mice when the infecting virus produced Ag in large quantities and with rapid kinetics. Protection did not occur when the Ag was produced in low amounts, even with rapid kinetics, and protection was only partial when the Ag was produced in large quantities but with slow kinetics. Hence, the amount and timing of Ag expression appear to be key determinants of memory CD8 T cell antiviral protective immunity. These findings may have important implications for vaccine design. Copyright © 2018 by The American Association of Immunologists, Inc.

  5. Energy Contents of Frequently Ordered Restaurant Meals and Comparison with Human Energy Requirements and US Department of Agriculture Database Information: A Multisite Randomized Study

    PubMed Central

    Urban, Lorien E.; Weber, Judith L.; Heyman, Melvin B.; Schichtl, Rachel L.; Verstraete, Sofia; Lowery, Nina S.; Das, Sai Krupa; Schleicher, Molly M.; Rogers, Gail; Economos, Christina; Masters, William A.; Roberts, Susan B.

    2017-01-01

    Background: Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ~50% of US restaurants are individual or small-chain (non–chain) establishments that do not provide nutrition information. Objective: To measure the energy content of frequently ordered meals in non–chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. Design: A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non–chain restaurants, together with equivalent meals from large-chain restaurants. Setting: Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Main outcome measures: Meal energy content determined by bomb calorimetry. Statistical analysis performed: Regional and cuisine differences were assessed using a mixed model with restaurant nested within region×cuisine as the random factor. Paired t tests were used to evaluate differences between non–chain and chain meals, human energy requirements, and food database values. Results: Meals from non–chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non–chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Conclusions: Non–chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. PMID:26803805

  6. Energy Contents of Frequently Ordered Restaurant Meals and Comparison with Human Energy Requirements and U.S. Department of Agriculture Database Information: A Multisite Randomized Study.

    PubMed

    Urban, Lorien E; Weber, Judith L; Heyman, Melvin B; Schichtl, Rachel L; Verstraete, Sofia; Lowery, Nina S; Das, Sai Krupa; Schleicher, Molly M; Rogers, Gail; Economos, Christina; Masters, William A; Roberts, Susan B

    2016-04-01

    Excess energy intake from meals consumed away from home is implicated as a major contributor to obesity, and ∼50% of US restaurants are individual or small-chain (non-chain) establishments that do not provide nutrition information. To measure the energy content of frequently ordered meals in non-chain restaurants in three US locations, and compare with the energy content of meals from large-chain restaurants, energy requirements, and food database information. A multisite random-sampling protocol was used to measure the energy contents of the most frequently ordered meals from the most popular cuisines in non-chain restaurants, together with equivalent meals from large-chain restaurants. Meals were obtained from restaurants in San Francisco, CA; Boston, MA; and Little Rock, AR, between 2011 and 2014. Meal energy content determined by bomb calorimetry. Regional and cuisine differences were assessed using a mixed model with restaurant nested within region×cuisine as the random factor. Paired t tests were used to evaluate differences between non-chain and chain meals, human energy requirements, and food database values. Meals from non-chain restaurants contained 1,205±465 kcal/meal, amounts that were not significantly different from equivalent meals from large-chain restaurants (+5.1%; P=0.41). There was a significant effect of cuisine on non-chain meal energy, and three of the four most popular cuisines (American, Italian, and Chinese) had the highest mean energy (1,495 kcal/meal). Ninety-two percent of meals exceeded typical energy requirements for a single eating occasion. Non-chain restaurants lacking nutrition information serve amounts of energy that are typically far in excess of human energy requirements for single eating occasions, and are equivalent to amounts served by the large-chain restaurants that have previously been criticized for providing excess energy. Restaurants in general, rather than specific categories of restaurant, expose patrons to excessive portions that induce overeating through established biological mechanisms. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  7. NASTRAN application for the prediction of aircraft interior noise

    NASA Technical Reports Server (NTRS)

    Marulo, Francesco; Beyer, Todd B.

    1987-01-01

    The application of a structural-acoustic analogy within the NASTRAN finite element program for the prediction of aircraft interior noise is presented. Some refinements of the method, which reduce the amount of computation required for large, complex structures, are discussed. Also, further improvements are proposed and preliminary comparisons with structural and acoustic modal data obtained for a large, composite cylinder are presented.

  8. Integrated Data Capturing Requirements for 3d Semantic Modelling of Cultural Heritage: the Inception Protocol

    NASA Astrophysics Data System (ADS)

    Di Giulio, R.; Maietti, F.; Piaia, E.; Medici, M.; Ferrari, F.; Turillazzi, B.

    2017-02-01

    The generation of high quality 3D models can be still very time-consuming and expensive, and the outcome of digital reconstructions is frequently provided in formats that are not interoperable, and therefore cannot be easily accessed. This challenge is even more crucial for complex architectures and large heritage sites, which involve a large amount of data to be acquired, managed and enriched by metadata. In this framework, the ongoing EU funded project INCEPTION - Inclusive Cultural Heritage in Europe through 3D semantic modelling proposes a workflow aimed at the achievements of efficient 3D digitization methods, post-processing tools for an enriched semantic modelling, web-based solutions and applications to ensure a wide access to experts and non-experts. In order to face these challenges and to start solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. The purpose is to guide the processes of digitization of cultural heritage, respecting needs, requirements and specificities of cultural assets.

  9. Enhancing Defense Support of Civil Authorities within the National Capital Region

    DTIC Science & Technology

    2011-04-04

    terrorist organization's center of gravity, critical capabilities, critical requirements, and critical vulnerabilities. For example, to execute the...London Underground bombings (critical capability), the group (center of gravity) needed money (critical requirement/critical vulnerability). Usually...the movement of large amounts of money through banks, the internet, or via credit cards creates a critical vulnerability because law enforcement

  10. Isolation of High-Molecular-Weight DNA from Mammalian Tissues Using Proteinase K and Phenol.

    PubMed

    Green, Michael R; Sambrook, Joseph

    2017-03-01

    This procedure is the method of choice for purification of genomic DNA from mammalian tissues when large amounts of DNA are required, for example, for Southern blotting. © 2017 Cold Spring Harbor Laboratory Press.

  11. Computationally efficient simulation of unsteady aerodynamics using POD on the fly

    NASA Astrophysics Data System (ADS)

    Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando

    2016-12-01

    Modern industrial aircraft design requires a large amount of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large amount of simulation is necessary, the CFD cost is just unaffordable in an industrial production environment and must be significantly reduced. Thus, a more inexpensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines 'on the fly' a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited amount of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (comparing with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high-Reynolds numbers, and compressibility. A first step in this direction is presented considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
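
    The core POD step mentioned here (extracting modes from snapshots) is essentially a singular value decomposition of a snapshot matrix. Below is a minimal, generic numpy sketch of only that step on synthetic data; it is not the adaptive 'POD on the Fly' method itself, which also involves Galerkin projection and on-the-fly snapshot updates.

    ```python
    # Minimal POD of a snapshot matrix: columns are solution snapshots at different times.
    # Synthetic data stands in for CFD snapshots; only the SVD-based mode extraction is shown.
    import numpy as np

    rng = np.random.default_rng(1)
    n_points, n_snapshots = 500, 40
    # Build snapshots as (noisy) combinations of two underlying spatial modes.
    x = np.linspace(0.0, 1.0, n_points)
    modes_true = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)], axis=1)
    coeffs = rng.standard_normal((2, n_snapshots))
    snapshots = modes_true @ coeffs + 1e-3 * rng.standard_normal((n_points, n_snapshots))

    # POD: left singular vectors are the spatial modes, singular values rank their energy.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999) + 1)   # number of modes retaining 99.9% of the energy
    pod_basis = U[:, :r]
    print("retained modes:", r)                   # expect ~2 for this synthetic data
    ```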

  12. Requirement analysis to promote small-sized E-waste collection from consumers.

    PubMed

    Mishima, Kuniko; Nishimura, Hidekazu

    2016-02-01

    The collection and recycling of small-sized waste electrical and electronic equipment is an emerging problem, since these products contain certain amounts of critical metals and rare earths. Even if the amount is not large, having a few supply routes for such recycled resources could be a good strategy to be competitive in a world of finite resources. The small-sized e-waste sometimes contains personal information, therefore, consumers are often reluctant to put them into recycling bins. In order to promote the recycling of E-waste, collection of used products from the consumer becomes important. Effective methods involving incentives for consumers might be necessary. Without such methods, it will be difficult to achieve the critical amounts necessary for an efficient recycling system. This article focused on used mobile phones among information appliances as the first case study, since it contains relatively large amounts of valuable metals compared with other small-sized waste electrical and electronic equipment and there are a large number of products existing in the market. The article carried out surveys to determine what kind of recycled material collection services are preferred by consumers. The results clarify that incentive or reward money alone is not a driving force for recycling behaviour. The article discusses the types of effective services required to promote recycling behaviour. The article concludes that securing information, transferring data and providing proper information about resources and environment can be an effective tool to encourage a recycling behaviour strategy to promote recycling, plus the potential discount service on purchasing new products associated with the return of recycled mobile phones. © The Author(s) 2015.

  13. Open-Ended Electric Motor

    ERIC Educational Resources Information Center

    Gould, Mauri

    1975-01-01

    Presents complete instructions for assembling an electric motor which does not require large amounts of power to operate and which is inexpensive as well as reliable. Several open-ended experiments with the motor are included as well as information for obtaining a kit of parts and instructions. (BR)

  14. USE OF ALTERED MICROORGANISMS FOR FIELD BIODEGRADATION OF HAZARDOUS MATERIALS

    EPA Science Inventory

    The large amount of hazardous waste generated and disposed of has given rise to environmental conditions requiring remedial treatment. The use of landfills has traditionally been a cost-effective means to dispose of waste. However, increased costs of transportation and decreasing n...

  15. Techniques for increasing the efficiency of Earth gravity calculations for precision orbit determination

    NASA Technical Reports Server (NTRS)

    Smith, R. L.; Lyubomirsky, A. S.

    1981-01-01

    Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
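
    The first technique described (Chebyshev expansions in three-dimensional cells) amounts to storing per-cell coefficient arrays and evaluating a 3D Chebyshev series at the mapped position. The sketch below shows only that evaluation step with numpy, using invented coefficients and cell bounds; it is a generic illustration, not the gravity model or code from the report.

    ```python
    # Evaluate a per-cell 3D Chebyshev expansion of a scalar field (e.g., one force component).
    # Coefficients and cell bounds are invented; only the lookup-and-evaluate pattern is shown.
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def to_unit(v, lo, hi):
        """Affine map from the cell interval [lo, hi] to the Chebyshev domain [-1, 1]."""
        return 2.0 * (v - lo) / (hi - lo) - 1.0

    # One cell: bounds in (x, y, z) and a (deg+1)^3 coefficient array fitted offline.
    cell_lo = np.array([7000.0, 7000.0, 7000.0])    # km, illustrative only
    cell_hi = np.array([7100.0, 7100.0, 7100.0])
    rng = np.random.default_rng(2)
    coeffs = rng.standard_normal((5, 5, 5)) * 1e-3  # stand-in for fitted coefficients

    def eval_cell(pos):
        u, v, w = (to_unit(pos[i], cell_lo[i], cell_hi[i]) for i in range(3))
        return C.chebval3d(u, v, w, coeffs)

    print(eval_cell(np.array([7050.0, 7025.0, 7080.0])))
    ```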

  16. Selected Papers on Low-Energy Antiprotons and Possible Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Robert

    1998-09-19

    The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.

  17. Atmospheric density models

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.
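
    For illustration only, the kind of simple density model alluded to here can be built from an exponential height profile modulated by low-order periodic terms. The sketch below uses a generic exponential atmosphere with an invented diurnal factor; the scale height, reference density, and modulation amplitude are assumptions for demonstration and do not reproduce the Jacchia model or the model developed in this report.

    ```python
    # Generic exponential atmospheric density with a toy diurnal modulation (illustrative only).
    # rho0, h0, scale height H, and the 10% diurnal amplitude are assumed values.
    import math

    def density(h_km, local_solar_time_hr, rho0=3.6e-10, h0=200.0, H=50.0, diurnal_amp=0.1):
        """Density in kg/m^3 at altitude h_km, crudely modulated over the local solar day."""
        base = rho0 * math.exp(-(h_km - h0) / H)   # variation with height
        diurnal = 1.0 + diurnal_amp * math.cos(2 * math.pi * (local_solar_time_hr - 14.0) / 24.0)
        return base * diurnal                      # peak near 14:00 local solar time

    print(density(300.0, 14.0))   # roughly 5e-11 kg/m^3 with these illustrative parameters
    ```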

  18. A link between eumelanism and calcium physiology in the barn owl

    NASA Astrophysics Data System (ADS)

    Roulin, Alexandre; Dauwe, Tom; Blust, Ronny; Eens, Marcel; Beaud, Michel

    2006-09-01

    In many animals, melanin-based coloration is strongly heritable and is largely insensitive to the environment and body condition. According to the handicap principle, such a trait may not reveal individual quality because the production of different melanin-based colorations often entails similar costs. However, a recent study showed that the production of eumelanin pigments requires relatively large amounts of calcium, potentially implying that melanin-based coloration is associated with physiological processes requiring calcium. If this is the case, eumelanism may be traded-off against other metabolic processes that require the same elements. We used a correlative approach to examine, for the first time, this proposition in the barn owl, a species in which individuals vary in the amount, size, and blackness of eumelanic spots. For this purpose, we measured calcium concentration in the left humerus of 85 dead owls. Results showed that the humeri of heavily spotted individuals had a higher concentration of calcium. This suggests either that plumage spottiness signals the ability to absorb calcium from the diet for both eumelanin production and storage in bones, or that lightly spotted individuals use more calcium for metabolic processes at the expense of calcium storage in bones. Our study supports the idea that eumelanin-based coloration is associated with a number of physiological processes requiring calcium.

  19. Beyond methane: Towards a theory for the Paleocene-Eocene Thermal Maximum

    NASA Astrophysics Data System (ADS)

    Higgins, John A.; Schrag, Daniel P.

    2006-05-01

    Extreme global warmth and an abrupt negative carbon isotope excursion during the Paleocene-Eocene Thermal Maximum (PETM) have been attributed to a massive release of methane hydrate from sediments on the continental slope [1]. However, the magnitude of the warming (5 to 6 °C [2],[3]) and rise in the depth of the CCD (> 2 km; [4]) indicate that the size of the carbon addition was larger than can be accounted for by the methane hydrate hypothesis. Additional carbon sources associated with methane hydrate release (e.g. pore-water venting and turbidite oxidation) are also insufficient. We find that the oxidation of at least 5000 Gt C of organic carbon is the most likely explanation for the observed geochemical and climatic changes during the PETM, for which there are several potential mechanisms. Production of thermogenic CH4 and CO2 during contact metamorphism associated with the intrusion of a large igneous province into organic rich sediments [5] is capable of supplying large amounts of carbon, but is inconsistent with the lack of extensive carbon loss in metamorphosed sediments, as well as the abrupt onset and termination of carbon release during the PETM. A global conflagration of Paleocene peatlands [6] highlights a large terrestrial carbon source, but massive carbon release by fire seems unlikely as it would require that all peatlands burn at once and then for only 10 to 30 ky. In addition, this hypothesis requires an order of magnitude increase in the amount of carbon stored in peat. The isolation of a large epicontinental seaway by tectonic uplift associated with volcanism or continental collision, followed by desiccation and bacterial respiration of the aerated organic matter is another potential mechanism for the rapid release of large amounts of CO2. In addition to the oxidation of the underlying marine sediments, the desiccation of a major epicontinental seaway would remove a large source of moisture for the continental interior, resulting in the desiccation and bacterial oxidation of adjacent terrestrial wetlands.

  20. Phosphorus: a limiting nutrient for humanity?

    PubMed

    Elser, James J

    2012-12-01

    Phosphorus is a chemical element that is essential to life because of its role in numerous key molecules, including DNA and RNA; indeed, organisms require large amounts of P to grow rapidly. However, the supply of P from the environment is often limiting to production, including to crops. Thus, large amounts of P are mined annually to produce fertilizer that is applied in support of the 'Green Revolution.' However, much of this fertilizer eventually ends up in rivers, lakes and oceans where it causes costly eutrophication. Furthermore, given increasing human population, expanding meat consumption, and proliferating bioenergy pressures, concerns have recently been raised about the long-term geological, economic, and geopolitical viability of mined P for fertilizer production. Together, these issues highlight the non-sustainable nature of current human P use. To achieve P sustainability, farms need to become more efficient in how they use P while society as a whole must develop technologies and practices to recycle P from the food chain. Such large-scale changes will probably require a radical restructuring of the entire food system, highlighting the need for prompt but sustained action. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Fourier transform infrared microspectroscopy for the analysis of the biochemical composition of C. elegans worms.

    PubMed

    Sheng, Ming; Gorzsás, András; Tuck, Simon

    2016-01-01

    Changes in intermediary metabolism have profound effects on many aspects of C. elegans biology including growth, development and behavior. However, many traditional biochemical techniques for analyzing chemical composition require relatively large amounts of starting material precluding the analysis of mutants that cannot be grown in large amounts as homozygotes. Here we describe a technique for detecting changes in the chemical compositions of C. elegans worms by Fourier transform infrared microspectroscopy. We demonstrate that the technique can be used to detect changes in the relative levels of carbohydrates, proteins and lipids in one and the same worm. We suggest that Fourier transform infrared microspectroscopy represents a useful addition to the arsenal of techniques for metabolic studies of C. elegans worms.

  2. Data Processing Factory for the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.
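    The wrapping of each pipeline in scripts for consecutive phases (preparation, submission, checking, quality control) and the chaining of pipelines can be sketched in a few lines of Python. The sketch below is a hedged illustration; the function and pipeline names are hypothetical and do not come from the SDSS software.

      # Hedged sketch of a phase-chaining pipeline wrapper; all names are hypothetical.
      from typing import Callable, Dict, List, Tuple

      Phase = Callable[[str], bool]  # a phase takes a run identifier and reports success

      def run_pipeline(name: str, run_id: str, phases: Dict[str, Phase]) -> bool:
          """Run the consecutive phases of one pipeline, stopping at the first failure."""
          for phase_name, phase in phases.items():
              print(f"[{name}] {phase_name} for {run_id}")
              if not phase(run_id):
                  print(f"[{name}] {phase_name} failed; halting this pipeline")
                  return False
          return True

      def chain_pipelines(run_id: str, pipelines: List[Tuple[str, Dict[str, Phase]]]) -> None:
          """Chain pipelines so each starts only if its predecessor passed all phases."""
          for name, phases in pipelines:
              if not run_pipeline(name, run_id, phases):
                  break

      # Example wiring with stub phases; a real factory would submit batch jobs,
      # check their logs, and record results in an operational database.
      ok = lambda run_id: True
      imaging = {"prepare": ok, "submit": ok, "check": ok, "quality_control": ok}
      spectro = {"prepare": ok, "submit": ok, "check": ok, "quality_control": ok}
      chain_pipelines("run-001", [("imaging", imaging), ("spectroscopy", spectro)])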

  3. Antibody recognition of the glycoprotein g of viral haemorrhagic septicemia virus (VHSV) purified in large amounts from insect larvae

    PubMed Central

    2011-01-01

    Background There are currently no purification methods capable of producing the large amounts of fish rhabdoviral glycoprotein G (gpG) required for diagnosis and immunisation purposes or for studying structure and molecular mechanisms of action of this molecule (i.e. pH-dependent membrane fusion). As a result of the unavailability of large amounts of the gpG from viral haemorrhagic septicaemia rhabdovirus (VHSV), one of the most dangerous viruses affecting cultured salmonid species, research interests in this field are severely hampered. Previous purification methods to obtain recombinant gpG from VHSV in E. coli, yeast and baculovirus grown in insect cells have not produced soluble conformations or acceptable yields. The development of large-scale purification methods for gpGs will also further research into other fish rhabdoviruses, such as infectious haematopoietic necrosis virus (IHNV), spring carp viremia virus (SVCV), hirame rhabdovirus (HIRRV) and snakehead rhabdovirus (SHRV). Findings Here we designed a method to produce milligram amounts of soluble VHSV gpG. Only the transmembrane and carboxy terminal-deleted (amino acid 21 to 465) gpG was efficiently expressed in insect larvae. Recognition of G21-465 by β-mercaptoethanol-dependent neutralizing monoclonal antibodies (N-MAbs) and pH-dependent recognition by sera from VHSV-hyperimmunized or VHSV-infected rainbow trout (Oncorhynchus mykiss) was demonstrated. Conclusions Given that the purified G21-465 conserved some of its most important properties, this method might be suitable for the large-scale production of fish rhabdoviral gpGs for use in diagnosis, fusion and antigenicity studies. PMID:21693048

  4. Antibody recognition of the glycoprotein g of viral haemorrhagic septicemia virus (VHSV) purified in large amounts from insect larvae.

    PubMed

    Encinas, Paloma; Gomez-Sebastian, Silvia; Nunez, Maria Carmen; Gomez-Casado, Eduardo; Escribano, Jose M; Estepa, Amparo; Coll, Julio

    2011-06-21

    There are currently no purification methods capable of producing the large amounts of fish rhabdoviral glycoprotein G (gpG) required for diagnosis and immunisation purposes or for studying structure and molecular mechanisms of action of this molecule (i.e. pH-dependent membrane fusion). As a result of the unavailability of large amounts of the gpG from viral haemorrhagic septicaemia rhabdovirus (VHSV), one of the most dangerous viruses affecting cultured salmonid species, research interests in this field are severely hampered. Previous purification methods to obtain recombinant gpG from VHSV in E. coli, yeast and baculovirus grown in insect cells have not produced soluble conformations or acceptable yields. The development of large-scale purification methods for gpGs will also further research into other fish rhabdoviruses, such as infectious haematopoietic necrosis virus (IHNV), spring carp viremia virus (SVCV), hirame rhabdovirus (HIRRV) and snakehead rhabdovirus (SHRV). Here we designed a method to produce milligram amounts of soluble VHSV gpG. Only the transmembrane and carboxy terminal-deleted (amino acid 21 to 465) gpG was efficiently expressed in insect larvae. Recognition of G21-465 by β-mercaptoethanol-dependent neutralizing monoclonal antibodies (N-MAbs) and pH-dependent recognition by sera from VHSV-hyperimmunized or VHSV-infected rainbow trout (Oncorhynchus mykiss) was demonstrated. Given that the purified G21-465 conserved some of its most important properties, this method might be suitable for the large-scale production of fish rhabdoviral gpGs for use in diagnosis, fusion and antigenicity studies.

  5. Large, horizontal-axis wind turbines

    NASA Technical Reports Server (NTRS)

    Linscott, B. S.; Perkins, P.; Dennett, J. T.

    1984-01-01

    Development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generating systems is presented. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. There are several ongoing large wind system development projects and applied research efforts directed toward meeting the technology requirements for utility applications. Detailed information on these projects is provided. The Mod-O research facility and current applied research effort in aerodynamics, structural dynamics and aeroelasticity, composite and hybrid composite materials, and multiple system interaction are described. A chronology of component research and technology development for large, horizontal axis wind turbines is presented. Wind characteristics, wind turbine economics, and the impact of wind turbines on the environment are reported. The need for continued wind turbine research and technology development is explored. Over 40 references are cited and a bibliography is included.

  6. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies very much on its computing power to get work done. And, like many other large companies, finding the IBM PC a reliable tool, Ames uses it for many of the same types of functions as other companies. Presentation and clarification needs demand much of graphics packages. Programming and text editing needs require simpler, more-powerful packages. The storage space needed by NASA's scientists and users for the monumental amounts of data that Ames needs to keep demands the best database packages that are large and easy to use. Access to the Micom Switching Network combines the powers of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help to continue and support the vast amounts of work done by the NASA employees.

  7. When is an INP not an INP?

    NASA Astrophysics Data System (ADS)

    Simpson, Emma; Connolly, Paul; McFiggans, Gordon

    2016-04-01

    Processes such as precipitation and radiation depend on the concentration and size of different hydrometeors within clouds; therefore, it is important to accurately predict them in weather and climate models. A large fraction of the clouds present in our atmosphere are mixed phase; that is, they contain both liquid and ice particles. The number of drops and ice crystals present in mixed phase clouds strongly depends on the size distribution of aerosols. Cloud condensation nuclei (CCN), a subset of atmospheric aerosol particles, are required for liquid drops to form in the atmosphere. These particles are ubiquitous in the atmosphere. To nucleate ice particles in mixed phase clouds, ice nucleating particles (INP) are required. These particles are rarer than CCN. Here we investigate the case where CCN and INPs are in direct competition with each other for water vapour within a cloud. Focusing on the immersion and condensation modes of freezing (where an INP must be immersed within a liquid drop before it can freeze), we show that the presence of CCN can suppress the formation of ice. CCN are more hydrophilic than INPs and as such are better able to compete for water vapour than the typically insoluble INPs. Therefore water is more likely to condense onto a CCN than an INP, leaving the INP without enough condensed water on it to be able to freeze in the immersion or condensation mode. The magnitude of this suppression effect strongly depends on a currently unconstrained quantity. Here we refer to this quantity as the critical mass of condensed water required for freezing, Mwc. Mwc is the threshold amount of water that must be condensed onto an INP before it can freeze in the immersion or condensation mode. Using the detailed cloud parcel model, the Aerosol-Cloud-Precipitation-Interaction Model (ACPIM), developed at the University of Manchester, we show that if only a small amount of water is required for freezing there is little suppression effect, and if a large amount of water is required there is a large suppression effect. In this poster possible ways to constrain Mwc are discussed, as well as conditions where the suppression effect is likely to be greatest. Key Words: Clouds, aerosol, CCN, IN, modelling

  8. Atmospheric considerations regarding the impact of heat dissipation from a nuclear energy center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rotty, R.M.; Bauman, H.; Bennett, L.L.

    1976-05-01

    Potential changes in climate resulting from a large nuclear energy center are discussed. On a global scale, no noticeable changes are likely, but on both a regional and a local scale, changes can be expected. Depending on the cooling system employed, the amount of fog may increase, the amount and distribution of precipitation will change, and the frequency or location of severe storms may change. Very large heat releases over small surface areas can result in greater atmospheric instability; a large number of closely spaced natural-draft cooling towers have this disadvantage. On the other hand, employment of natural-draft towers makes an increase in the occurrence of ground fog unlikely. The analysis suggests that the cooling towers for a large nuclear energy center should be located in clusters of four with at least 2.5-mile spacing between the clusters. This is equivalent to the requirement of one acre of land surface per each two megawatts of heat being rejected.

  9. Large Animal Models of an In Vivo Bioreactor for Engineering Vascularized Bone.

    PubMed

    Akar, Banu; Tatara, Alexander M; Sutradhar, Alok; Hsiao, Hui-Yi; Miller, Michael; Cheng, Ming-Huei; Mikos, Antonios G; Brey, Eric M

    2018-04-12

    Reconstruction of large skeletal defects is challenging due to the requirement for large volumes of donor tissue and the often complex surgical procedures. Tissue engineering has the potential to serve as a new source of tissue for bone reconstruction, but current techniques are often limited with regard to the size and complexity of tissue that can be formed. Building tissue using an in vivo bioreactor approach may enable the production of appropriate amounts of specialized tissue, while reducing issues of donor site morbidity and infection. Large animals are required to screen and optimize new strategies for growing clinically appropriate volumes of tissues in vivo. In this article, we review both ovine and porcine models that serve as models of the technique proposed for clinical engineering of bone tissue in vivo. Recent findings with these systems are discussed, as well as a description of the next steps required for using these models to develop clinically applicable tissue engineering applications.

  10. A prospective survey of nutritional support practices in intensive care unit patients: what is prescribed? What is delivered?

    PubMed

    De Jonghe, B; Appere-De-Vechi, C; Fournier, M; Tran, B; Merrer, J; Melchior, J C; Outin, H

    2001-01-01

    To assess the amount of nutrients delivered, prescribed, and required for critically ill patients and to identify the reasons for discrepancies between prescriptions and requirements and between prescriptions and actual delivery of nutrition. Prospective cohort study. Twelve-bed medical intensive care unit in a university-affiliated general hospital. Fifty-one consecutive patients, receiving nutritional support either enterally or intravenously for ≥ 2 days. We followed patients for the first 14 days of nutritional delivery. The amount of calories prescribed and the amount actually delivered were recorded daily and compared with the theoretical energy requirements. A combined regimen of enteral and parenteral nutrition was administered on 58% of the 484 nutrition days analyzed, and 63.5% of total caloric intake was delivered enterally. Seventy-eight percent of the mean caloric amount required was prescribed, and 71% was effectively delivered. The amount of calories actually delivered compared with the amount prescribed was significantly lower in enteral than in parenteral administration (86.8% vs. 112.4%, p < .001). Discrepancies between prescription and delivery of enterally administered nutrients were attributable to interruptions caused by digestive intolerance (27.7%, mean daily wasted volume 641 mL), airway management (30.8%, wasted volume 745 mL), and diagnostic procedures (26.6%, wasted volume 567 mL). Factors significantly associated with a low prescription rate of nutritional support were the administration of vasoactive drugs, central venous catheterization, and the need for extrarenal replacement. An inadequate delivery of enteral nutrition and a low rate of nutrition prescription resulted in low caloric intake in our intensive care unit patients. A large volume of enterally administered nutrients was wasted because of inadequate timing in stopping and restarting enteral feeding. The inverse correlation between the prescription rate of nutrition and the intensity of care required suggests that physicians need to pay more attention to providing appropriate nutritional support for the most severely ill patients.
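    The reported percentages are simple ratios of caloric totals over the study days: prescribed versus required, and delivered versus prescribed. A minimal sketch of that arithmetic follows; the field names and daily values are hypothetical, not data from the study.

      # Hedged sketch of the prescription/delivery ratios; values are hypothetical.
      days = [
          {"required_kcal": 1800, "prescribed_kcal": 1400, "delivered_kcal": 1200},
          {"required_kcal": 2000, "prescribed_kcal": 1600, "delivered_kcal": 1500},
      ]

      required = sum(d["required_kcal"] for d in days)
      prescribed = sum(d["prescribed_kcal"] for d in days)
      delivered = sum(d["delivered_kcal"] for d in days)

      print(f"prescribed / required : {100 * prescribed / required:.1f}%")
      print(f"delivered / prescribed: {100 * delivered / prescribed:.1f}%")
      print(f"delivered / required  : {100 * delivered / required:.1f}%")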

  11. Multi-Media in USAF Pilot Training.

    ERIC Educational Resources Information Center

    Wood, Milton E.

    The flight-line portion of flying training has traditionally required large amounts of airborne practice under an apprenticeship form of instruction. New developments in educational technology, from both a philosophical and device point of view, provide new opportunities to train airborne skills in a ground environment. Through the use of…

  12. Processes to improve energy efficiency during pumping and aeration of recirculating water in circular tank systems

    USDA-ARS?s Scientific Manuscript database

    Conventional gas transfer technologies for aquaculture systems occupy a large amount of space, require considerable capital investment, and can contribute to high electricity demand. In addition, diffused aeration in a circular tank can interfere with the hydrodynamics of water rotation and the spee...

  13. Achieving Continuous Improvement: Theories that Support a System Change.

    ERIC Educational Resources Information Center

    Armel, Donald

    Focusing on improvement is different than focusing on quality, quantity, customer satisfaction, and productivity. This paper discusses Open System Theory, and suggests ways to change large systems. Changing a system (meaning the way all the parts are connected) requires a considerable amount of data gathering and analysis. Choosing the proper…

  14. The Hard(ware) Choice

    ERIC Educational Resources Information Center

    Demski, Jennifer

    2012-01-01

    When it comes to implementing a large-scale 1-to-1 computing initiative, deciding which device students will use every day to support their learning requires a significant amount of thought and research. Laptop, netbook, Chromebook, tablet--each device has enough similarities to make the decision seem easy, but enough differences to make a big…

  15. Earbuds: A Method for Analyzing Nasality in the Field

    ERIC Educational Resources Information Center

    Stewart, Jesse; Kohlberger, Martin

    2017-01-01

    Existing methods for collecting and analyzing nasality data are problematic for linguistic fieldworkers: aerodynamic equipment can be expensive and difficult to transport, and acoustic analyses require large amounts of optimally-recorded data. In this paper, a highly mobile and low-cost method is proposed. By connecting low impedance earbuds into…

  16. Los Alamos high-power proton linac designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, G.P.

    1995-10-01

    Medium-energy high-power proton linear accelerators have been studied at Los Alamos as drivers for spallation neutron applications requiring large amounts of beam power. Reference designs for such accelerators are presented, important design factors are reviewed, and issues and concerns specific to this unprecedented power regime are discussed.

  17. Knockout of an outer membrane protein operon of anaplasma marginale by transposon mutagenesis

    USDA-ARS?s Scientific Manuscript database

    Large amounts of data generated by genomics, transcriptomics and proteomics technologies have increased our understanding of the biology of Anaplasma marginale. However, these data have also led to new assumptions that require testing, ideally through classic genetic mutation. One example is the def...

  18. General Recommendations on Fatigue Risk Management for the Canadian Forces

    DTIC Science & Technology

    2010-04-01

    missions performed in aviation require an individual(s) to process large amounts of information in a short period of time and to do this on a continuous...information processing required during sustained operations can deteriorate an individual’s ability to perform a task. Given the high operational tempo...memory, which, in turn, is utilized to perform human thought processes (Baddeley, 2003). While various versions of this theory exist, they all share

  19. Minutes of the Explosives Safety Seminar (24th) Held in St. Louis, Missouri on 28-30 August 1990. Volume 1

    DTIC Science & Technology

    1990-08-30

    concrete-soil-concrete and other soil-filled elements as well as earth embankments of different shapes. The design of the shielding external walls...to vent entirely through the doors. This was required because the large amount of earth fill on the roofs, required for radiation shielding, precluded...Safety Window Shield to Protect Against External Explosions ... R. L. Shope, W. A. Keenan Strengthening

  20. The Requirements Generation System: A tool for managing mission requirements

    NASA Technical Reports Server (NTRS)

    Sheppard, Sylvia B.

    1994-01-01

    Historically, NASA's cost for developing mission requirements has been a significant part of a mission's budget. Large amounts of time have been allocated in mission schedules for the development and review of requirements by the many groups who are associated with a mission. Additionally, tracing requirements from a current document to a parent document has been time-consuming and costly. The Requirements Generation System (RGS) is a computer-supported cooperative-work tool that assists mission developers in the online creation, review, editing, tracing, and approval of mission requirements as well as in the production of requirements documents. This paper describes the RGS and discusses some lessons learned during its development.

  1. The Development and Microstructure Analysis of High Strength Steel Plate NVE36 for Large Heat Input Welding

    NASA Astrophysics Data System (ADS)

    Peng, Zhang; Liangfa, Xie; Ming, Wei; Jianli, Li

    In the shipbuilding industry, the welding efficiency of the ship plate not only has a great effect on the construction cost of the ship, but also affects the construction speed and determines the delivery cycle. Steel plates for large heat input welding have therefore been developed extensively. In this paper, the composition of the steel was designed along a micro-alloyed route, with small amounts of Nb and Ti and a large amount of Mn. The C content and the carbon equivalent were also kept at a low level. The technology of oxide metallurgy was used during the smelting process of the steel. TMCP rolling was carried out at a low rolling temperature and ultra-fast cooling was applied in order to control the transformation of the microstructure. The microstructure of the steel plate was controlled to be a mixed microstructure of low-carbon bainite and ferrite. A large amount of oxide particles was dispersed in the microstructure of the steel, which had a positive effect on the mechanical properties and welding performance of the steel. The mechanical properties of the steel plate were excellent, and the longitudinal Akv value at -60 °C was more than 200 J. The toughness of the WM and HAZ was excellent after the steel plate was welded with a large heat input of 100-250 kJ/cm. The steel plate processed as described above can meet the requirements of large heat input welding.

  2. Determining national greenhouse gas emissions from waste-to-energy using the Balance Method.

    PubMed

    Schwarzböck, Therese; Rechberger, Helmut; Cencic, Oliver; Fellner, Johann

    2016-03-01

    Different directives of the European Union require operators of waste-to-energy (WTE) plants to report the amount of electricity that is produced from biomass in the waste feed, as well as the amount of fossil CO2 emissions generated by the combustion of fossil waste materials. This paper describes the application of the Balance Method for determining the overall amount of fossil and thus climate-relevant CO2 emissions from waste incineration in Austria. The results of 10 Austrian WTE plants (annual waste throughput of around 2,300 kt) demonstrate large seasonal variations in the specific fossil CO2 emissions of the plants as well as large differences between the facilities (annual means range from 32±2 to 51±3 kg CO2,foss/GJ heating value). An overall amount of around 924 kt/yr of fossil CO2 for all 10 WTE plants is determined. In comparison, biogenic (climate-neutral) CO2 emissions amount to 1,187 kt/yr, which corresponds to 56% of the total CO2 emissions from waste incineration. The total energy input via waste feed to the 10 facilities is about 22,500 TJ/yr, of which around 48% can be assigned to biogenic and thus renewable sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
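    The biogenic share and the average specific fossil emission follow directly from the aggregate figures quoted above. The sketch below reproduces only that arithmetic; it is not an implementation of the Balance Method itself, which infers the fossil and biogenic fractions by solving a set of mass and energy balance equations on routine operating data.

      # Reproduces the aggregate arithmetic quoted in the abstract; this is NOT the
      # Balance Method itself, which solves mass/energy balance equations.
      fossil_co2_kt = 924.0
      biogenic_co2_kt = 1187.0
      energy_input_tj = 22500.0

      total_co2_kt = fossil_co2_kt + biogenic_co2_kt
      biogenic_share = biogenic_co2_kt / total_co2_kt          # about 0.56 of total CO2

      # Average fossil emission per unit of energy in the waste feed
      # (1 kt = 1e6 kg, 1 TJ = 1e3 GJ)
      fossil_kg_per_gj = fossil_co2_kt * 1e6 / (energy_input_tj * 1e3)

      print(f"biogenic share of CO2: {100 * biogenic_share:.0f}%")
      print(f"average fossil CO2:    {fossil_kg_per_gj:.0f} kg CO2,foss per GJ")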

  3. Millimeter radiometer system technology

    NASA Technical Reports Server (NTRS)

    Wilson, W. J.; Swanson, P. N.

    1989-01-01

    JPL has had a large amount of experience with spaceborne microwave/millimeter wave radiometers for remote sensing. All of the instruments use filled aperture antenna systems, ranging from 5 cm diameter for the Microwave Sounder Units (MSU) and 16 m for the microwave limb sounder (MLS) to 20 m for the large deployable reflector (LDR). The advantages of filled aperture antenna systems are presented. The requirements of the 10 m Geoplat antenna system, the 10 m multifeed antenna, and the MLS are briefly discussed.

  4. Millimeter radiometer system technology

    NASA Astrophysics Data System (ADS)

    Wilson, W. J.; Swanson, P. N.

    1989-07-01

    JPL has had a large amount of experience with spaceborne microwave/millimeter wave radiometers for remote sensing. All of the instruments use filled aperture antenna systems, ranging from 5 cm diameter for the Microwave Sounder Units (MSU) and 16 m for the microwave limb sounder (MLS) to 20 m for the large deployable reflector (LDR). The advantages of filled aperture antenna systems are presented. The requirements of the 10 m Geoplat antenna system, the 10 m multifeed antenna, and the MLS are briefly discussed.

  5. CD-ROM technology at the EROS data center

    USGS Publications Warehouse

    Madigan, Michael E.; Weinheimer, Mary C.

    1993-01-01

    The vast amount of digital spatial data often required by a single user has created a demand for media alternatives to 1/2" magnetic tape. One such medium that has been recently adopted at the U.S. Geological Survey's EROS Data Center is the compact disc (CD). CD's are a versatile, dynamic, and low-cost method for providing a variety of data on a single media device and are compatible with various computer platforms. CD drives are available for personal computers, UNIX workstations, and mainframe systems, either directly connected, or through a network. This medium furnishes a quick method of reproducing and distributing large amounts of data on a single CD. Several data sets are already available on CD's, including collections of historical Landsat multispectral scanner data and biweekly composites of Advanced Very High Resolution Radiometer data for the conterminous United States. The EROS Data Center intends to provide even more data sets on CD's. Plans include specific data sets on a customized disc to fulfill individual requests, and mass production of unique data sets for large-scale distribution. Requests for a single compact disc-read only memory (CD-ROM) containing a large volume of data either for archiving or for one-time distribution can be addressed with a CD-write once (CD-WO) unit. Mass production and large-scale distribution will require CD-ROM replication and mastering.

  6. Mass spectroscopic measurements in the plasma edge of the W7-AS stellarator and their statistical analysis

    NASA Astrophysics Data System (ADS)

    Zebisch, P.; Grigull, P.; Dose, V.; Taglauer, E.; W7-AS Team

    1997-02-01

    During the W7-AS operation period in autumn 1995 sniffer probe measurements were made for more than 800 discharges. The H/D ratio during deuterium discharges was determined, showing HD and H2 desorption from the walls even after fresh boronization. For these discharges the loading of the walls with deuterium could be observed. In the higher mass range the development of large amounts of hydrocarbons was observed at the beginning of the discharges with neutral beam injection. To evaluate the large amount of data recorded here (of the order of 10000 mass spectra), appropriate mathematical methods are required. It is shown that group analysis can be applied to distinguish certain sets of discharges and to derive useful mean values.
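    As an illustration of what a group analysis of many spectra can look like, the sketch below clusters normalized spectra with k-means; the actual grouping method and preprocessing used for the W7-AS data may differ, and the data here are synthetic.

      # Illustrative only: clustering mass spectra to distinguish sets of discharges.
      # The paper's "group analysis" and preprocessing may differ from this sketch.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      spectra = rng.random((1000, 64))          # stand-in for ~10000 recorded spectra

      # Normalize each spectrum to unit total intensity so clusters reflect shape,
      # not absolute signal level.
      spectra /= spectra.sum(axis=1, keepdims=True)

      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(spectra)
      for k in range(4):
          members = spectra[labels == k]
          print(f"group {k}: {len(members)} spectra, peak channel {members.mean(axis=0).argmax()}")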

  7. Design and experimental realization of an optimal scheme for teleportation of an n-qubit quantum state

    NASA Astrophysics Data System (ADS)

    Sisodia, Mitali; Shukla, Abhishek; Thapliyal, Kishore; Pathak, Anirban

    2017-12-01

    An explicit scheme (quantum circuit) is designed for the teleportation of an n-qubit quantum state. It is established that the proposed scheme requires an optimal amount of quantum resources, whereas larger amounts of quantum resources have been used in a large number of recently reported teleportation schemes for quantum states which can be viewed as special cases of the general n-qubit state considered here. A trade-off between our knowledge about the quantum state to be teleported and the amount of quantum resources required for the same is observed. A proof-of-principle experimental realization of the proposed scheme (for a 2-qubit state) is also performed using the 5-qubit superconductivity-based IBM quantum computer. The experimental results show that the state has been teleported with high fidelity. The relevance of the proposed teleportation scheme is also discussed in the context of controlled, bidirectional, and bidirectional controlled state teleportation.

  8. Adsorption behavior of natural anthocyanin dye on mesoporous silica

    NASA Astrophysics Data System (ADS)

    Kohno, Yoshiumi; Haga, Eriko; Yoda, Keiko; Shibata, Masashi; Fukuhara, Choji; Tomita, Yasumasa; Maeda, Yasuhisa; Kobayashi, Kenkichiro

    2014-01-01

    Because of its non-toxicity, naturally occurring anthocyanin is potentially suitable as a colorant for foods and cosmetics. For wider use of the anthocyanin, immobilization on an inorganic host for easy handling, as well as improvement of its stability, is required. This study is focused on the adsorption of a significant amount of the natural anthocyanin dye onto mesoporous silica, and on the stability enhancement of the anthocyanin by the complexation. The anthocyanin has successfully been adsorbed on the HMS-type mesoporous silica containing a small amount of aluminum. The amount of the adsorbed anthocyanin has been increased by modifying the pore wall with an n-propyl group to make the silica surface hydrophobic. The light fastness of the adsorbed anthocyanin has been improved by making the composite with the HMS samples containing aluminum, although the degree of the improvement is not so large. It has been proposed that incorporation of the anthocyanin molecule deep inside the mesopore is required for further enhancement of the stability.

  9. Solar power satellite life-cycle energy recovery consideration

    NASA Astrophysics Data System (ADS)

    Weingartner, S.; Blumenberg, J.

    The construction, in-orbit installation and maintenance of a solar power satellite (SPS) will demand large amounts of energy. As a minimum requirement for an energy-effective power satellite it is asked that this amount of energy be recovered. The energy effectiveness in this sense, resulting in a positive net energy balance, is a prerequisite for a cost-effective power satellite. This paper concentrates on life-cycle energy recovery instead of monetary aspects. The trade-offs between various power generation systems (different types of solar cells, solar dynamic), various construction and installation strategies (using terrestrial or extra-terrestrial resources) and the expected/required lifetime of the SPS are reviewed. The presented work is based on a 2-year study performed at the Technical University of Munich. The study showed that the main energy needed to make a solar power satellite a reality is required for the production of the solar power components (up to 65%), especially for the solar cell production. Transport into orbit accounts for about 20%, and the receiving station on earth (rectenna) requires about 15% of the total energy investment. The energetic amortization time, i.e. the time the SPS has to be operational to give back the amount of energy which was needed for its production, installation and operation, is about two years.
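    The energetic amortization time is simply the total energy invested divided by the annual net energy delivered by the satellite. The sketch below shows that arithmetic with illustrative numbers; apart from the quoted percentage split, none of the values are taken from the study.

      # Illustrative energy-payback arithmetic; numbers are assumptions except the
      # quoted split (~65% components, ~20% transport, ~15% rectenna).
      embodied_energy_twh = 88.0   # total energy invested over the life cycle (assumed)
      split = {"power components": 0.65, "transport to orbit": 0.20, "rectenna": 0.15}

      power_delivered_gw = 5.0     # net electrical power delivered to the grid (assumed)
      hours_per_year = 8766.0
      annual_output_twh = power_delivered_gw * hours_per_year / 1000.0

      for part, fraction in split.items():
          print(f"{part}: {fraction * embodied_energy_twh:.1f} TWh invested")
      print(f"annual output: {annual_output_twh:.1f} TWh")
      print(f"energetic amortization time: {embodied_energy_twh / annual_output_twh:.1f} years")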

  10. Medical Logistics Functional Integration Management To-Be Modeling Workshop: Improving Today For a Better Tomorrow

    DTIC Science & Technology

    1993-06-18

    A unique identifying number assigned by the contracting officer that is a binding agreement between the Government and a Vendor. quantity-of-beds The...repair it; maintenance contracts may be costly. Barriers to Implementation • Requires the large amount of funding to link a significant number of ...and follow-on requirements for maintenance, training, and installation. 22. Cross Sharing of Standard Contract Shells A3 2.88 Al112 Local activities

  11. An Analysis of Informal Reasoning Fallacy and Critical Thinking Dispositions among Malaysian Undergraduates

    ERIC Educational Resources Information Center

    Ramasamy, Shamala

    2011-01-01

    In this information age, the amount of complex information available due to technological advancement would require undergraduates to be extremely competent in processing information systematically. Critical thinking ability of undergraduates has been the focal point among educators, employers and the public at large. One of the dimensions of…

  12. Designing an Integrated System of Databases: A Workstation for Information Seekers.

    ERIC Educational Resources Information Center

    Micco, Mary; Smith, Irma

    1987-01-01

    Proposes a framework for the design of a full function workstation for information retrieval based on study of information seeking behavior. A large amount of local storage of the CD-ROM jukebox variety and full networking capability to both local and external databases are identified as requirements of the prototype. (MES)

  13. The Convergence of Information Technology, Data, and Management in a Library Imaging Program

    ERIC Educational Resources Information Center

    France, Fenella G.; Emery, Doug; Toth, Michael B.

    2010-01-01

    Integrating advanced imaging and processing capabilities in libraries, archives, and museums requires effective systems and information management to ensure that the large amounts of digital data about cultural artifacts can be readily acquired, stored, archived, accessed, processed, and linked to other data. The Library of Congress is developing…

  14. Students' Explanations in Complex Learning of Disciplinary Programming

    ERIC Educational Resources Information Center

    Vieira, Camilo

    2016-01-01

    Computational Science and Engineering (CSE) has been denominated as the third pillar of science and as a set of important skills to solve the problems of a global society. Along with the theoretical and the experimental approaches, computation offers a third alternative to solve complex problems that require processing large amounts of data, or…

  15. Team-Based Learning Exercise Efficiently Teaches Brief Intervention Skills to Medicine Residents

    ERIC Educational Resources Information Center

    Wamsley, Maria A.; Julian, Katherine A.; O'Sullivan, Patricia; McCance-Katz, Elinore F.; Batki, Steven L.; Satre, Derek D.; Satterfield, Jason

    2013-01-01

    Background: Evaluations of substance use screening and brief intervention (SBI) curricula typically focus on learner attitudes and knowledge, although effects on clinical skills are of greater interest and utility. Moreover, these curricula often require large amounts of training time and teaching resources. This study examined whether a 3-hour…

  16. Parallel mass spectrometry (APCI-MS and ESI-MS) for lipid analysis

    USDA-ARS?s Scientific Manuscript database

    Coupling the condensed phase of HPLC with the high vacuum necessary for ion analysis in a mass spectrometer requires quickly evaporating large amounts of liquid mobile phase to release analyte molecules into the gas phase, along with ionization of those molecules, so they can be detected by the mass...

  17. Bringing Text Display Digital Radio to Consumers with Hearing Loss

    ERIC Educational Resources Information Center

    Sheffield, Ellyn G.; Starling, Michael; Schwab, Daniel

    2011-01-01

    Radio is migrating to digital transmission, expanding its offerings to include captioning for individuals with hearing loss. Text display radio requires a large amount of word throughput with minimal screen display area, making good user interface design crucial to its success. In two experiments, we presented hearing, hard-of-hearing, and deaf…

  18. For Mole Problems, Call Avogadro: 602-1023.

    ERIC Educational Resources Information Center

    Uthe, R. E.

    2002-01-01

    Describes techniques to help introductory students become familiar with Avogadro's number and mole calculations. Techniques involve estimating numbers of common objects then calculating the length of time needed to count large numbers of them. For example, the immense amount of time required to count a mole of sand grains at one grain per second…

  19. Discourse Analysis and Language Learning [Summary of a Symposium].

    ERIC Educational Resources Information Center

    Hatch, Evelyn

    1981-01-01

    A symposium on discourse analysis and language learning is summarized. Discourse analysis can be divided into six fields of research: syntax, the amount of syntactic organization required for different types of discourse, large speech events, intra-sentential cohesion in text, speech acts, and unequal power discourse. Research on speech events and…

  20. Program Analyzes Radar Altimeter Data

    NASA Technical Reports Server (NTRS)

    Vandemark, Doug; Hancock, David; Tran, Ngan

    2004-01-01

    A computer program has been written to perform several analyses of radar altimeter data. The program was designed to improve on previous methods of analysis of altimeter engineering data by (1) facilitating and accelerating the analysis of large amounts of data in a more direct manner and (2) improving the ability to estimate performance of radar-altimeter instrumentation and provide data corrections. The data in question are openly available to the international scientific community and can be downloaded from anonymous file-transfer-protocol (FTP) locations that are accessible via links from altimetry Web sites. The software estimates noise in range measurements, estimates corrections for electromagnetic bias, and performs statistical analyses on various parameters for comparison of different altimeters. Whereas prior techniques used to perform similar analyses of altimeter range noise require comparison of data from repetitions of satellite ground tracks, the present software uses a high-pass filtering technique to obtain similar results from single satellite passes. Elimination of the requirement for repeat-track analysis facilitates the analysis of large amounts of satellite data to assess subtle variations in range noise.
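    The core of the single-pass noise estimate is a high-pass filter that removes the long-wavelength geophysical signal so that the standard deviation of the residual approximates instrument range noise. A minimal sketch follows; the running-mean window and the synthetic data are illustrative choices, not the program's actual parameters.

      # Minimal sketch of single-pass range-noise estimation by high-pass filtering.
      # The window length and data below are illustrative, not the program's values.
      import numpy as np

      def range_noise(range_m: np.ndarray, window: int = 21) -> float:
          """Std. dev. of the high-passed range series (running-mean detrending)."""
          kernel = np.ones(window) / window
          low_pass = np.convolve(range_m, kernel, mode="same")   # long-wavelength part
          high_pass = range_m - low_pass                         # residual noise
          return float(np.std(high_pass[window:-window]))        # trim edge effects

      # Synthetic example: smooth sea-surface signal plus 2 cm white noise.
      x = np.linspace(0.0, 100.0, 2000)
      signal = 0.5 * np.sin(2 * np.pi * x / 50.0)
      noisy = signal + np.random.normal(0.0, 0.02, x.size)
      print(f"estimated range noise: {100 * range_noise(noisy):.1f} cm")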

  1. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems

    PubMed Central

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.

    2016-01-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean values and standard deviations of the areas under the ROC/EROC curves were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
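    A channelized Hotelling observer reduces each image to a short vector of channel outputs and applies a prewhitened matched-filter (Hotelling) template to those vectors. The sketch below shows that computation on synthetic channel outputs; channel design, the data-reduction schemes, and the shuffle-based uncertainty estimation discussed above are omitted.

      # Minimal CHO sketch on synthetic channel outputs; channel design and the
      # shuffle-based uncertainty estimation from the paper are omitted.
      import numpy as np

      rng = np.random.default_rng(1)
      n_channels, n_images = 10, 200
      signal_absent = rng.normal(0.0, 1.0, (n_images, n_channels))
      signal_present = rng.normal(0.3, 1.0, (n_images, n_channels))   # small mean shift

      # Hotelling template: w = S^-1 (mean_present - mean_absent)
      d_mean = signal_present.mean(0) - signal_absent.mean(0)
      S = 0.5 * (np.cov(signal_present.T) + np.cov(signal_absent.T))
      w = np.linalg.solve(S, d_mean)

      t_present = signal_present @ w
      t_absent = signal_absent @ w

      # Detectability index and a simple empirical AUC from the decision variables.
      d_prime = (t_present.mean() - t_absent.mean()) / np.sqrt(
          0.5 * (t_present.var(ddof=1) + t_absent.var(ddof=1)))
      auc = np.mean(t_present[:, None] > t_absent[None, :])
      print(f"d' = {d_prime:.2f}, empirical AUC = {auc:.3f}")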

  2. Solar-energy mobile water aerators are efficient for restoring eutrophic water

    NASA Astrophysics Data System (ADS)

    Wang, Y. Y.; Xu, Z. X.

    2017-01-01

    Surface water eutrophication has become a worldwide social issue. Conventional eutrophication restoration measures require large amounts of secondary energy and high capital investment, and they can cause considerable disturbance to the ecosystem. A mobile solar-energy water aerator, in contrast, offers a better oxygen transfer rate and better hydrodynamic conditions, and its ability to cruise allows it to be used in large waterbodies. In addition, the device is low-carbon and sustainable because it is powered by a solar photovoltaic system, so it can be widely used in eutrophication restoration.

  3. Technology Requirements for Information Management

    NASA Technical Reports Server (NTRS)

    Graves, Sara; Knoblock, Craig A.; Lannom, Larry

    2002-01-01

    This report provides the results of a panel study conducted into the technology requirements for information management in support of application domains of particular government interest, including digital libraries, mission operations, and scientific research. The panel concluded that it was desirable to have a coordinated program of R&D that pursues a science of information management focused on an environment typified by applications of government interest - highly distributed with very large amounts of data and a high degree of heterogeneity of sources, data, and users.

  4. ms_lims, a simple yet powerful open source laboratory information management system for MS-driven proteomics.

    PubMed

    Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart

    2010-03-01

    MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.

  5. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  6. Optical smart packaging to reduce transmitted information.

    PubMed

    Cabezas, Luisa; Tebaldi, Myrian; Barrera, John Fredy; Bolognini, Néstor; Torroba, Roberto

    2012-01-02

    We demonstrate a smart image-packaging optical technique that uses what we believe is a new concept to save byte space when transmitting data. The technique supports a large set of images mapped into modulated speckle patterns. Then, they are multiplexed into a single package. This operation results in a substantial decrease in the final number of bytes of the package with respect to the amount resulting from adding the images without using the method. Besides, there are no requirements on the type of images to be processed. We present results that prove the potential of the technique.

  7. The Digital Slide Archive: A Software Platform for Management, Integration, and Analysis of Histology for Cancer Research.

    PubMed

    Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D

    2017-11-01

    Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 AACR.

  8. Low-cost floating emergence net and bottle trap: Comparison of two designs

    USGS Publications Warehouse

    Cadmus, Pete; Pomeranz, Justin; Kraus, Johanna M.

    2016-01-01

    Sampling emergent aquatic insects is of interest to many freshwater ecologists. Many quantitative emergence traps require the use of aspiration for collection. However, aspiration is infeasible in studies with the large amount of replication that is often required in large biomonitoring projects. We designed an economic, collapsible pyramid-shaped floating emergence trap with an external collection bottle that avoids the need for aspiration. This design was compared experimentally to a design of similar dimensions that relied on aspiration to ensure comparable results. The pyramid-shaped design captured twice as many total emerging insects. When a preservative was used in bottle collectors, >95% of the emergent abundance was collected in the bottle. When no preservative was used, >81% of the total insects were collected from the bottle. In addition to capturing fewer emergent insects, the traps that required aspiration took significantly longer to sample. Large studies and studies sampling remote locations could benefit from the economical construction, speed of sampling, and capture efficiency.

  9. Construction and Reconstruction Efforts in Nation Building: Planning for Everything in Afghanistan Except the Afghans

    DTIC Science & Technology

    2015-04-01

    or earth bricks, rammed earth, and sometimes a cement binder. Adobe type construction has been around for thousands of years. It has many benefits...they were steel, they heated up like an oven and required large amounts of foam insulation which turned out to be highly flammable.15 The end result

  10. Using Popular Music to Teach the Geography of the United States and Canada

    ERIC Educational Resources Information Center

    Smiley, Sarah L.; Post, Chris W.

    2014-01-01

    The introductory level course Geography of the U.S. and Canada requires students to grasp large amounts of complex material, oftentimes using a lecture-based pedagogical approach. This article outlines two ways that popular music can be successfully used in the geography classroom. First, songs are used to review key concepts and characteristics…

  11. Autoclave Meltout of Cast Explosives

    DTIC Science & Technology

    1996-08-22

    various tanks, kettles, and pelletizing equipment a usable product was recovered. This process creates large amounts of pink water requiring...vacuum treatment melt kettles, flaker belts, and improved material handling equipment in an integrated system. During the 1976/1977 period, AED...McAlester Army Ammo Plant, Oklahoma, to discuss proposed workload and inspect available facilities and equipment. Pilot model production and testing

  12. The use of PacBio and Hi-C data in denovo assembly of the goat genome

    USDA-ARS?s Scientific Manuscript database

    Generating de novo reference genome assemblies for non-model organisms is a laborious task that often requires a large amount of data from several sequencing platforms and cytogenetic surveys. By using PacBio sequence data and new library creation techniques, we present a de novo, high quality refer...

  13. 44 CFR 10.8 - Determination of requirement for environmental review.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... apply: (i) If an action will result in an extensive change in land use or the commitment of a large amount of land; (ii) If an action will result in a land use change which is incompatible with the... ecosystems, including endangered species; (vi) If an action will result in a major adverse impact upon air or...

  14. 44 CFR 10.8 - Determination of requirement for environmental review.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... apply: (i) If an action will result in an extensive change in land use or the commitment of a large amount of land; (ii) If an action will result in a land use change which is incompatible with the... ecosystems, including endangered species; (vi) If an action will result in a major adverse impact upon air or...

  15. 44 CFR 10.8 - Determination of requirement for environmental review.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... apply: (i) If an action will result in an extensive change in land use or the commitment of a large amount of land; (ii) If an action will result in a land use change which is incompatible with the... ecosystems, including endangered species; (vi) If an action will result in a major adverse impact upon air or...

  16. 44 CFR 10.8 - Determination of requirement for environmental review.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... apply: (i) If an action will result in an extensive change in land use or the commitment of a large amount of land; (ii) If an action will result in a land use change which is incompatible with the... ecosystems, including endangered species; (vi) If an action will result in a major adverse impact upon air or...

  17. 44 CFR 10.8 - Determination of requirement for environmental review.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... apply: (i) If an action will result in an extensive change in land use or the commitment of a large amount of land; (ii) If an action will result in a land use change which is incompatible with the... ecosystems, including endangered species; (vi) If an action will result in a major adverse impact upon air or...

  18. Development of a station based climate database for SWAT and APEX assessments in the U.S.

    USDA-ARS?s Scientific Manuscript database

    Water quality simulation models such as the Soil and Water Assessment Tool (SWAT) and Agricultural Policy EXtender (APEX) are widely used in the U.S. These models require large amounts of spatial and tabular data to simulate the natural world. Accurate and seamless daily climatic data are critical...

  19. Inventory: 26 Reasons for Doing One

    ERIC Educational Resources Information Center

    Braxton, Barbara

    2005-01-01

    A stocktake is a legal requirement that ensures that teacher-librarians are accountable for the money they have spent throughout the year. Including staff salaries, the library absorbs a large amount of the annual school budget, so it is essential that funds are spent wisely. Inventory is done at the end of each academic year as part of the…

  20. A model for prioritizing landfills for remediation and closure: A case study in Serbia.

    PubMed

    Ubavin, Dejan; Agarski, Boris; Maodus, Nikola; Stanisavljevic, Nemanja; Budak, Igor

    2018-01-01

    The existence of large numbers of landfills that do not fulfill sanitary prerequisites presents a serious hazard for the environment in lower income countries. One of the main hazards is landfill leachate that contains various pollutants and presents a threat to groundwater. Groundwater pollution from landfills depends on various mutually interconnected factors such as the waste type and amount, the amount of precipitation, the landfill location characteristics, and operational measures, among others. Considering these factors, lower income countries face a selection problem where landfills urgently requiring remediation and closure must be identified from among a large number of sites. The present paper proposes a model for prioritizing landfills for closure and remediation based on multicriteria decision making, in which the hazards of landfill groundwater pollution are evaluated. The parameters for the prioritization of landfills are the amount of waste disposed, the amount of precipitation, the vulnerability index, and the rate of increase of the amount of waste in the landfill. Verification was performed using a case study in Serbia where all municipal landfills were included and 128 landfills were selected for prioritization. The results of the evaluation of Serbian landfills, prioritizing sites for closure and remediation, are presented for the first time. Critical landfills are identified, and prioritization ranks for the selected landfills are provided. Integr Environ Assess Manag 2018;14:105-119. © 2017 SETAC.
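    One simple way to combine the four parameters into a priority rank is a normalized weighted sum. The sketch below is only an illustration of that idea; the weights, normalization, and site values are assumptions and do not reproduce the multicriteria model used in the study.

      # Illustrative weighted-sum prioritization; weights, normalization, and site
      # values are assumptions, not the multicriteria model from the study.
      import numpy as np

      # Columns: waste amount (t), precipitation (mm/yr), vulnerability index, waste growth rate.
      landfills = {
          "site A": [120000.0, 650.0, 0.7, 0.04],
          "site B": [40000.0, 900.0, 0.9, 0.08],
          "site C": [300000.0, 500.0, 0.4, 0.02],
      }
      weights = np.array([0.35, 0.20, 0.30, 0.15])   # assumed importance weights

      data = np.array(list(landfills.values()))
      normalized = data / data.max(axis=0)           # larger value = higher hazard
      scores = normalized @ weights

      for (name, _), score in sorted(zip(landfills.items(), scores),
                                     key=lambda pair: -pair[1]):
          print(f"{name}: priority score {score:.2f}")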

  1. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    PubMed Central

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading the third generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focus on establishing high performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the needed biomass quantity for industrial-scale microalgal biofuels production. The extremely dilute nature of large-volume microalgal suspension and the small size of microalgae cells in suspension create a significant processing cost during dewatering and this has raised major concerns about the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis to establish an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075
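    The per-kilogram figures for a two-stage train are the stage-wise sums of energy, emissions, and cost divided by the biomass recovered. The sketch below shows that bookkeeping; the split between the two stages is assumed, and only the totals correspond to the figures quoted above.

      # Hedged sketch: per-kg metrics for a two-stage dewatering train. The stage-level
      # split is assumed; only the totals correspond to the quoted figures.
      stages = {
          "bioflocculation":            {"kwh": 0.005, "kg_co2": 0.010, "usd": 0.0010},
          "tangential flow filtration": {"kwh": 0.036, "kg_co2": 0.040, "usd": 0.0033},
      }
      biomass_kg = 1.0

      totals = {k: sum(stage[k] for stage in stages.values()) for k in ("kwh", "kg_co2", "usd")}
      print(f"energy: {totals['kwh'] / biomass_kg:.3f} kWh/kg, "
            f"CO2: {totals['kg_co2'] / biomass_kg:.3f} kg/kg, "
            f"cost: ${totals['usd'] / biomass_kg:.4f}/kg")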

  2. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
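    The essential communication pattern described above, in which each processor computes a local gradient contribution and a single global summation combines them, can be emulated with array chunks. The sketch below illustrates only that pattern; it is not the CM-2 compiler or its mapping, and all sizes are arbitrary.

      # NumPy sketch of the data-parallel idea: each "processor" holds a slice of the
      # batch, computes a local gradient, and one global summation combines them.
      # This only illustrates the communication pattern, not the CM-2 implementation.
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_out, batch, n_proc = 8, 1, 64, 4
      W = rng.normal(0.0, 0.1, (n_in, n_out))
      X = rng.normal(size=(batch, n_in))
      y = rng.normal(size=(batch, n_out))

      local_grads = []
      for X_p, y_p in zip(np.array_split(X, n_proc), np.array_split(y, n_proc)):
          pred = np.tanh(X_p @ W)                    # local feed-forward pass
          err = (pred - y_p) * (1.0 - pred ** 2)     # local backprop through tanh
          local_grads.append(X_p.T @ err)            # local gradient contribution

      grad = np.sum(local_grads, axis=0)             # the only global communication
      W -= 0.01 * grad / batch                       # synchronous weight update
      print("gradient norm:", float(np.linalg.norm(grad)))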

  3. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser.

    PubMed

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-03-17

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.

  4. Implementing Solar PV Projects on Historic Buildings and in Historic Districts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandt, A.; Hotchkiss, E.; Walker, A.

    Many municipalities, particularly in older communities of the United States, have a large number of historic buildings and districts. In addition to preserving these historic assets, many municipalities have goals or legislative requirements to procure a certain amount of energy from renewable sources and to become more efficient in their energy use; often, these requirements do not exempt historic buildings. This paper details findings from a workshop held in Denver, Colorado, in June 2010 that brought together stakeholders from both the solar and historic preservation industries. Based on these findings, this paper identifies challenges and recommends solutions for developing solar photovoltaic (PV) projects on historic buildings and in historic districts in such a way as to not affect the characteristics that make a building eligible for historic status.

  5. Solar power satellite—Life-cycle energy recovery considerations

    NASA Astrophysics Data System (ADS)

    Weingartner, S.; Blumenberg, J.

    1995-05-01

    The construction, in-orbit installation and maintenance of a solar power satellite (SPS) will demand large amounts of energy. As a minimum requirement for an energy-effective power satellite, it is asked that this amount of energy be recovered. Energy effectiveness in this sense, resulting in a positive net energy balance, is a prerequisite for a cost-effective power satellite. This paper concentrates on life-cycle energy recovery instead of monetary aspects. The trade-offs between various power generation systems (different types of solar cells, solar dynamic), various construction and installation strategies (using terrestrial or extra-terrestrial resources) and the expected/required lifetime of the SPS are reviewed. The presented work is based on a 2-year study performed at the Technical University of Munich. The study showed that most of the energy needed to make a solar power satellite a reality is required for the production of the solar power plant components (up to 65%), especially for solar cell production, whereas transport into orbit accounts for on the order of 20% and the receiving station on Earth (rectenna) requires on the order of 15% of the total energy investment. The energetic amortization time, i.e. the time the SPS has to be operational to give back the amount of energy which was needed for its production, installation and operation, is on the order of two years.
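
    The energetic amortization time mentioned above follows from a simple ratio of energy invested to annual net energy delivered. The sketch below uses hypothetical magnitudes; only the 65/20/15 percentage split is taken from the abstract.

    ```python
    # Back-of-the-envelope energetic amortization sketch (hypothetical numbers;
    # only the 65/20/15 split of the energy investment comes from the abstract).
    E_invest_total = 3.0e17          # J, assumed total energy investment
    breakdown = {"components": 0.65, "transport_to_orbit": 0.20, "rectenna": 0.15}

    P_delivered = 5.0e9              # W, assumed net power delivered to the grid
    E_per_year = P_delivered * 3600 * 24 * 365   # J delivered per year

    amortization_years = E_invest_total / E_per_year
    print({k: v * E_invest_total for k, v in breakdown.items()})
    print(f"energetic amortization time = {amortization_years:.1f} years")
    ```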

  6. Preparation of PEMFC Electrodes from Milligram-Amounts of Catalyst Powder

    DOE PAGES

    Yarlagadda, Venkata; McKinney, Samuel E.; Keary, Cristin L.; ...

    2017-06-03

    Development of electrocatalysts with higher activity and stability is one of the highest priorities in enabling cost-competitive hydrogen-air fuel cells. Although the rotating disk electrode (RDE) technique is widely used to study new catalyst materials, it has often been shown to be an unreliable predictor of catalyst performance in actual fuel cell operation. Fabrication of membrane electrode assemblies (MEAs) for evaluation, which are more representative of actual fuel cells, generally requires relatively large amounts (>1 g) of catalyst material that are often not readily available in early stages of development. In this study, we present two MEA preparation techniques using as little as 30 mg of catalyst material, providing methods to conduct more meaningful MEA-based tests using research-level catalyst amounts.

  7. Effect of oil on an electrowetting lenticular lens and related optical characteristics.

    PubMed

    Shin, Dooseub; Kim, Junoh; Kim, Cheoljoong; Koo, Gyo Hyun; Sim, Jee Hoon; Lee, Junsik; Won, Yong Hyub

    2017-03-01

    While there are many ways to realize autostereoscopic 2D/3D switchable displays, the electrowetting lenticular lens is superior due to its high optical efficiency and short response time. In this paper, we propose a more stable electrowetting lenticular lens obtained by controlling the quantity of oil. With a large amount of oil, the oil layer broke and the lenticular lens was damaged at relatively low voltage. Therefore, controlling the amount of oil is crucial to obtain the required dioptric power with stability. We propose a new structure to evenly adjust the volume of oil, and the dioptric power was measured while varying the volume of oil. Furthermore, the optical characteristics were analyzed in an electrowetting lenticular lens array with a proper amount of oil.

  8. The optimisation of low-acceleration interstellar relativistic rocket trajectories using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Fung, Kenneth K. H.; Lewis, Geraint F.; Wu, Xiaofeng

    2017-04-01

    A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that, to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2c. Such transit times are clearly impractical, and thus any kind of colonisation using low-acceleration rockets would be difficult. High accelerations, on the order of 1 g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology required for a journey that uses realistic amounts of fuel remains to be determined.
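
    For context, the exponential fuel penalty the authors point to follows from the standard relativistic rocket relation (a textbook result, not taken from the paper's genetic-algorithm setup): with an effective exhaust velocity near c, as for antimatter propulsion, the mass ratio for a 0.2c cruise stays modest, whereas chemical or ion exhaust velocities make it astronomically large.

    ```latex
    % Relativistic rocket mass ratio for a single burn to cruise velocity v,
    % with effective exhaust velocity v_ex (textbook relation, stated here as
    % context for the abstract; not the paper's optimisation model).
    \[
      \frac{m_0}{m_1}
      = \exp\!\left(\frac{c}{v_{\mathrm{ex}}}\,\operatorname{artanh}\frac{v}{c}\right)
      = \left(\frac{1 + v/c}{1 - v/c}\right)^{\frac{c}{2 v_{\mathrm{ex}}}},
      \qquad
      v = 0.2c,\; v_{\mathrm{ex}} = c \;\Rightarrow\; \frac{m_0}{m_1} \approx 1.22 .
    \]
    ```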

  9. Large-scale production and isolation of Candida biofilm extracellular matrix.

    PubMed

    Zarnowski, Robert; Sanchez, Hiram; Andes, David R

    2016-12-01

    The extracellular matrix of biofilm is unique to the biofilm lifestyle, and it has key roles in community survival. A complete understanding of the biochemical nature of the matrix is integral to the understanding of the roles of matrix components. This knowledge is a first step toward the development of novel therapeutics and diagnostics to address persistent biofilm infections. Many of the assay methods needed for refined matrix composition analysis require milligram amounts of material that is separated from the cellular components of these complex communities. The protocol described here explains the large-scale production and isolation of the Candida biofilm extracellular matrix. To our knowledge, the proposed procedure is the only currently available approach in the field that yields milligram amounts of biofilm matrix. This procedure first requires biofilms to be seeded in large-surface-area roller bottles, followed by cell adhesion and biofilm maturation during continuous movement of the medium across the surface of the rotating bottle. The formed matrix is then separated from the entire biomass using sonication, which efficiently removes the matrix without perturbing the fungal cell wall. Subsequent filtration, dialysis and lyophilization steps result in a purified matrix product sufficient for biochemical, structural and functional assays. The overall protocol takes ∼11 d to complete. This protocol has been used for Candida species, but, using the troubleshooting guide provided, it could be adapted for other fungi or bacteria.

  10. 76 FR 29718 - Western Pacific Pelagic Fisheries; American Samoa Longline Gear Modifications To Reduce Turtle...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-23

    ...,400 albacore (most destined for the Pago Pago cannery), and smaller amounts of skipjack, yellowfin and... depth of 100 meters or deeper, away from the primary turtle habitat. This action would require fishermen on the large vessels (Classes B, C, and D) to use float lines that are at least 30 meters long, and...

  11. Evaluation of alternative approaches for landscape-scale biomass estimation in a mixed-species northern forest

    Treesearch

    Coeli M. Hoover; Mark J. Ducey; R. Andy Colter; Mariko Yamasaki

    2018-01-01

    There is growing interest in estimating and mapping biomass and carbon content of forests across large landscapes. LiDAR-based inventory methods are increasingly common and have been successfully implemented in multiple forest types. Asner et al. (2011) developed a simple universal forest carbon estimation method for tropical forests that reduces the amount of required...

  12. Dynamic Database. Efficiently Convert Massive Quantities of Sensor Data into Actionable Information for Tactical Commanders

    DTIC Science & Technology

    2000-06-01

    As the number of sensors, platforms, exploitation sites, and command and control nodes continues to grow in response to Joint Vision 2010 information ... dominance requirements, Commanders and analysts will have an ever increasing need to collect and process vast amounts of data over wide areas using a large number of disparate sensors and information gathering sources.

  13. Rotor Smoothing and Vibration Monitoring Results for the US Army VMEP

    DTIC Science & Technology

    2009-06-01

    individual component CI detection thresholds, and development of models for diagnostics, prognostics, and anomaly detection. Figure 16 VMEP Server...and prognostics are of current interest. Development of those systems requires large amounts of data (collection, monitoring, manipulation) to capture...development of automated systems and for continuous updating of algorithms to improve detection, classification, and prognostic performance. A test

  14. Effect of broiler litter ash and flue gas desulfurization gypsum on yield, calcium and phosphorus uptake by peanut

    USDA-ARS?s Scientific Manuscript database

    Peanut (Arachis hyogaea) is an important oil seed crop that is grown as a principle source of edible oil and vegetable protein. Over 1.6 million acres of peanuts were planted in the United States during 2012. Peanuts require large amounts of Calcium (Ca) and Phosphorus (P). In 2010, over 10 milli...

  15. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov Websites

    requiring high CPU utilization or large amounts of memory should be run on the worker nodes. WinHPC02 is not associated data are removed when NREL worker status is discontinued. Users should make arrangements to save other users. Licenses are returned to the license pool when other users close the application or after

  16. The DTIC Review: Volume 2, Number 4, Surviving Chemical and Biological Warfare

    DTIC Science & Technology

    1996-12-01

    CHROMATOGRAPHIC ANALYSIS, NUCLEAR MAGNETIC RESONANCE, INFRARED SPECTROSCOPY, ARMY RESEARCH, DEGRADATION, VERIFICATION, MASS SPECTROSCOPY, LIQUID... mycotoxins. Such materials are not attractive as weapons of mass destruction, however, as large amounts are required to produce lethal effects. In...VERIFICATION, ATOMIC ABSORPTION SPECTROSCOPY, ATOMIC ABSORPTION. AL The DTIC Review Defense Technical Information Center AD-A285 242 AD-A283 754 EDGEWOOD

  17. The secret life of marbled murrelets: monitoring populations and habitats.

    Treesearch

    Jonathan Thompson

    2007-01-01

    The marbled murrelet is a small diving seabird that occupies coastal waters from Alaska to central California. Murrelets have a unique nesting strategy that requires them to commute tens of miles inland, where they use large mossy branches on older conifers as platforms to balance their solitary egg. Populations have been declining for decades as the amount of nesting...

  18. Solar array flight experiment

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Emerging satellite designs require increasing amounts of electrical power to operate spacecraft instruments and to provide environments suitable for human habitation. In the past, electrical power was generated by covering rigid honeycomb panels with solar cells. This technology results in unacceptable weight and volume penalties when large amounts of power are required. To fill the need for large-area, lightweight solar arrays, a fabrication technique in which solar cells are attached to a copper printed circuit laminated to a plastic sheet was developed. The result is a flexible solar array with one-tenth the stowed volume and one-third the weight of comparably sized rigid arrays. An automated welding process developed to attach the cells to the printed circuit guarantees repeatable welds that are more tolerant of severe environments than conventional soldered connections. To demonstrate the flight readiness of this technology, the Solar Array Flight Experiment (SAFE) was developed and flown on the space shuttle Discovery in September 1984. The tests showed the modes and frequencies of the array to be very close to preflight predictions. Structural damping, however, was higher than anticipated. Electrical performance of the active solar panel was also tested. The flight performance and postflight data evaluation are described.

  19. Economic analysis of municipal wastewater utilization for thermoelectric power production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safari, I.; Walker, M.; Abbasian, J.

    2011-01-01

    The thermoelectric power industry in the U.S. uses a large amount of freshwater. The large water demand is increasingly a problem, especially for new power plant development, as availability of freshwater for new uses diminishes in the United States. Reusing non-traditional water sources, such as treated municipal wastewater, provides one option to mitigate freshwater usage in the thermoelectric power industry. The amount of freshwater withdrawal that can be displaced with non-traditional water sources at a particular location requires evaluation of the water management and treatment requirements, considering the quality and abundance of the non-traditional water sources. This paper presents the development of an integrated costing model to assess the impact of degraded water treatment, as well as the implications of increased tube scaling in the main condenser. The model developed herein is used to perform case studies of various treatment, condenser cleaning and condenser configurations to provide insight into the ramifications of degraded water use in the cooling loops of thermoelectric power plants. Further, this paper lays the groundwork for the integration of relationships between degraded water quality, scaling characteristics and volatile emission within a recirculating cooling loop model.

  20. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database, and they can be used as the basis to identify different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entire database to be loaded into memory, thus restricting the amount of data that they can process and making them unable to handle databases with large amounts of data. Also, those algorithms use sequential models and have slower discovery speeds, meaning that their efficiency can be improved. In this research, we introduce the use of a divide-and-conquer strategy in signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to solve the problem posed to the existing algorithms, where they are unable to process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases such as the human whole-genome EST database, which previously could not be processed by the existing algorithms. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
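
    A minimal illustration of the divide-and-conquer idea is sketched below. It is not the authors' DDCSD implementation: it simplifies a signature to a k-mer that occurs exactly once in the whole database, partitions the database into chunks, counts k-mers per chunk in parallel, and merges the partial counts.

    ```python
    # Hedged divide-and-conquer sketch of signature discovery on toy data.
    from collections import Counter
    from multiprocessing import Pool

    K = 8  # assumed signature length

    def count_kmers(sequences):
        c = Counter()
        for seq in sequences:
            for i in range(len(seq) - K + 1):
                c[seq[i:i + K]] += 1
        return c

    def candidate_signatures(database, n_chunks=4):
        chunks = [database[i::n_chunks] for i in range(n_chunks)]
        with Pool(n_chunks) as pool:
            partial = pool.map(count_kmers, chunks)
        total = Counter()
        for c in partial:
            total.update(c)
        # keep k-mers seen exactly once across the whole database
        return [kmer for kmer, n in total.items() if n == 1]

    if __name__ == "__main__":
        db = ["ACGTACGTGGCT", "TTGACCGTAAGC", "GGCTTAACCGGA"]  # toy database
        print(candidate_signatures(db))
    ```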

  1. Enhanced membrane filtration of wood hydrolysates for hemicelluloses recovery by pretreatment with polymeric adsorbents.

    PubMed

    Koivula, Elsi; Kallioinen, Mari; Sainio, Tuomo; Antón, Enrique; Luque, Susana; Mänttäri, Mika

    2013-09-01

    In this study, the adsorption of foulants from birch and pine/eucalyptus wood hydrolysates on two polymeric adsorbents was studied with the aim of reducing membrane fouling. The effect of the pretreatment of hydrolysate on polyethersulphone membrane performance was studied in dead-end filtration experiments. Adsorption pretreatment significantly improved filtration capacity and decreased membrane fouling. In particular, high-molecular-weight lignin was efficiently removed. A multistep adsorption pretreatment was found to reduce the amount of adsorbent required. While a large adsorbent amount was shown to increase flux in filtration, it was also found to cause significant hemicellulose losses. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. The Design and Development of a Management Information System for the Monterey Navy Flying Club.

    DTIC Science & Technology

    1986-03-27

    Management Information System for the Monterey Navy Flying Club. It supplies the tools necessary to enable the club manager to maintain all club records and generate required administrative and financial reports. The Monterey Navy Flying Club has one of the largest memberships of the Navy-sponsored flying clubs. As a result of this large membership and the amount of manual paperwork required to properly maintain club records, the Manager's ability to provide necessary services and reports is severely hampered. The implementation of an efficient

  3. ECUT: Energy Conversion and Utilization Technologies program biocatalysis research activity. Potential membrane applications to biocatalyzed processes: Assessment of concentration polarization and membrane fouling

    NASA Technical Reports Server (NTRS)

    Ingham, J. D.

    1983-01-01

    Separation and purification of the products of biocatalyzed fermentation processes, such as ethanol or butanol, consumes most of the process energy required. Since membrane systems require substantially less energy for separation than most alternatives (e.g., distillation), they have been suggested for separation or concentration of fermentation products. This report is a review of the effects of concentration polarization and membrane fouling for the principal membrane processes: microfiltration, ultrafiltration, reverse osmosis, and electrodialysis, including a discussion of potential problems relevant to separation of fermentation products. It was concluded that advanced membrane systems may result in significantly decreased energy consumption. However, because of the need to separate large amounts of water from much smaller amounts of product that may be more volatile than water, it is not clear that membrane separations will necessarily be more efficient than alternative processes.

  4. A scalable diffraction-based scanning 3D colour video display as demonstrated by using tiled gratings and a vertical diffuser

    PubMed Central

    Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping

    2017-01-01

    A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371

  5. Low Data Drug Discovery with One-Shot Learning.

    PubMed

    Altae-Tran, Han; Ramsundar, Bharath; Pappu, Aneesh S; Pande, Vijay

    2017-04-26

    Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. 2015, 55, 263-274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small-molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep-learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016).
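
    The low-data setting can be illustrated independently of the paper's architecture: classify a query compound by comparing its learned embedding with the embeddings of a one-example-per-class support set. The sketch below is a generic nearest-prototype metric classifier with random stand-in embeddings; it is not the iterative refinement LSTM or the DeepChem API.

    ```python
    # Hedged one-shot classification sketch: nearest support embedding by
    # cosine similarity; embeddings and labels here are stand-ins.
    import numpy as np

    def one_shot_predict(query, support_embeddings, support_labels):
        """support_embeddings: (n, d) array, one labelled example per class."""
        sims = support_embeddings @ query / (
            np.linalg.norm(support_embeddings, axis=1) * np.linalg.norm(query) + 1e-12)
        return support_labels[int(np.argmax(sims))]

    support = np.random.rand(2, 16)            # hypothetical learned embeddings
    labels = np.array(["active", "inactive"])  # one example per class
    print(one_shot_predict(np.random.rand(16), support, labels))
    ```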

  6. An evaluation of coding methodologies for potential use in the Alabama Resource Information System (ARIS)-transportation study for the state of Alabama

    NASA Technical Reports Server (NTRS)

    Montgomery, O. L.

    1977-01-01

    Procedures developed for digitizing the transportation arteries, airports, and dock facilities of Alabama and placing them in a computerized format compatible with the Alabama Resource Information System are described. The time required to digitize by the following methods was evaluated: (a) manual, (b) the Telereadex 29 film reading and digitizing system, and (c) digitizing tablets. A method for digitizing and storing information from the U.T.M. grid cell base which was compatible with the system was developed and tested. The highways, navigable waterways, railroads, airports, and docks in the study area were digitized and the data stored. The manual method of digitizing was shown to be best for small amounts of data, while graphic input from the digitizing tablets would be the best approach for entering the large amounts of data required for an entire state.

  7. The Surface Composition of Large Kuiper Belt Object 2007 OR10

    NASA Astrophysics Data System (ADS)

    Brown, M. E.; Burgasser, A. J.; Fraser, W. C.

    2011-09-01

    We present photometry and spectra of the large Kuiper belt object 2007 OR10. The data show significant near-infrared absorption features due to water ice. While most objects in the Kuiper belt with water ice absorption this prominent have the optically neutral colors of water ice, 2007 OR10 is among the reddest Kuiper belt objects known. One other large Kuiper belt object—Quaoar—has similar red coloring and water ice absorption, and it is hypothesized that the red coloration of this object is due to irradiation of the small amounts of methane able to be retained on Quaoar. 2007 OR10, though warmer than Quaoar, is in a similar volatile retention regime because it is sufficiently larger that its stronger gravity can still retain methane. We propose, therefore, that the red coloration on 2007 OR10 is also caused by the retention of small amounts of methane. Positive detection of methane on 2007 OR10 will require spectra with higher signal to noise. Models for volatile retention on Kuiper belt objects appear to continue to do an excellent job reproducing all of the available observations.

  8. Identifying and quantifying urban recharge: a review

    NASA Astrophysics Data System (ADS)

    Lerner, David N.

    2002-02-01

    The sources of and pathways for groundwater recharge in urban areas are more numerous and complex than in rural environments. Buildings, roads, and other surface infrastructure combine with man-made drainage networks to change the pathways for precipitation. Some direct recharge is lost, but additional recharge can occur from storm drainage systems. Large amounts of water are imported into most cities for supply, distributed through underground pipes, and collected again in sewers or septic tanks. The leaks from these pipe networks often provide substantial recharge. Sources of recharge in urban areas are identified through piezometry, chemical signatures, and water balances. All three approaches have problems. Recharge is quantified either by individual components (direct recharge, water-mains leakage, septic tanks, etc.) or holistically. Working with individual components requires large amounts of data, much of which is uncertain and is likely to lead to large uncertainties in the final result. Recommended holistic approaches include the use of groundwater modelling and solute balances, where various types of data are integrated. Urban recharge remains an under-researched topic, with few high-quality case studies reported in the literature.
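
    A component-wise recharge estimate of the kind criticized above can be written down in a few lines; the point is how quickly the individual uncertainties accumulate. All values below are hypothetical.

    ```python
    # Illustrative component-wise urban recharge balance (hypothetical values);
    # assuming independent errors, uncertainties combine in quadrature and can
    # dominate the total, which is the drawback noted in the abstract.
    import math

    components = {  # mm/year: (estimate, absolute uncertainty)
        "direct_recharge":           (50, 20),
        "water_mains_leakage":       (80, 40),
        "sewer_and_septic_leakage":  (30, 25),
        "storm_drainage":            (20, 15),
    }

    total = sum(value for value, _ in components.values())
    uncertainty = math.sqrt(sum(err ** 2 for _, err in components.values()))
    print(f"total recharge = {total} +/- {uncertainty:.0f} mm/year")
    ```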

  9. Elastin in large artery stiffness and hypertension

    PubMed Central

    Wagenseil, Jessica E.; Mecham, Robert P.

    2012-01-01

    Large artery stiffness, as measured by pulse wave velocity (PWV), is correlated with high blood pressure and may be a causative factor in essential hypertension. The extracellular matrix components, specifically the mix of elastin and collagen in the vessel wall, determine the passive mechanical properties of the large arteries. Elastin is organized into elastic fibers in the wall during arterial development in a complex process that requires spatial and temporal coordination of numerous proteins. The elastic fibers last the lifetime of the organism, but are subject to proteolytic degradation and chemical alterations that change their mechanical properties. This review discusses how alterations in the amount, assembly, organization or chemical properties of the elastic fibers affect arterial stiffness and blood pressure. Strategies for encouraging or reversing alterations to the elastic fibers are addressed. Methods for determining the efficacy of these strategies, by measuring elastin amounts and arterial stiffness, are summarized. Therapies that have a direct effect on arterial stiffness through alterations to the elastic fibers in the wall may be an effective treatment for essential hypertension. PMID:22290157

  10. Hexamethyldisilazane Removal with Mesoporous Materials Prepared from Calcium Fluoride Sludge.

    PubMed

    Kao, Ching-Yang; Lin, Min-Fa; Nguyen, Nhat-Thien; Tsai, Hsiao-Hsin; Chang, Luh-Maan; Chen, Po-Han; Chang, Chang-Tang

    2018-05-01

    A large amount of calcium fluoride sludge is generated by the semiconductor industry every year, and treating VOCs with rotor concentrators and thermal oxidizers requires a large amount of fuel. A mesoporous adsorbent prepared from calcium fluoride sludge was therefore used for VOC treatment. The semiconductor industry employs HMDS to promote the adhesion of photo-resistant material to oxide(s), which leads to the formation of silicon dioxide that blocks porous adsorbents. The adsorption of HMDS (hexamethyldisiloxane) was tested with mesoporous silica materials synthesized from calcium fluoride sludge (CF-MCM). The resulting samples were characterized by XRD, XRF, FTIR, and N2 adsorption-desorption techniques. The prepared samples possessed high specific surface area, large pore volume and large pore diameter. The crystal patterns of CF-MCM were similar to those of Mobil composite matter (MCM-41) from TEM images. The adsorption capacity of HMDS on CF-MCM was 40 and 80 mg g-1 under 100 and 500 ppm HMDS, respectively. The effects of operation parameters, such as contact time and mixture concentration, on the performance of CF-MCM are also discussed in this study.

  11. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  12. Report of the Plasma Physics and Environmental Perturbation Laboratory (PPEPL) working groups. Volume 1: Plasma probes, wakes, and sheaths working group

    NASA Technical Reports Server (NTRS)

    1974-01-01

    It is shown in this report that comprehensive in-situ study of all aspects of the entire zone of disturbance caused by a body in a flowing plasma resulted in a large number of requirements on the shuttle-PPEPL facility. A large amount of the necessary in-situ observation can be obtained by adopting appropriate modes of performing the experiments. Requirements are indicated for worthwhile studies of some aspects of the problems, which can be carried out effectively while imposing relatively few constraints on the early missions. Considerations for the desired growth and improvement of the PPEPL to facilitate more complete studies in later missions are also discussed. For Vol. 2, see N74-28170; for Vol. 3, see N74-28171.

  13. Beowulf Distributed Processing and the United States Geological Survey

    USGS Publications Warehouse

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing technology. It will describe the benefits of the technology. Real data about a distributed application will be presented as an example of the benefits that this technology can bring to USGS scientific programs. Finally, some of the issues with distributed processing that relate to USGS work will be discussed.

  14. Are Digital Natives a World-Wide Phenomenon? An Investigation into South African First Year Students' Use and Experience with Technology

    ERIC Educational Resources Information Center

    Thinyane, Hannah

    2010-01-01

    In 2001 Marc Prensky coined the phrase "digital natives" to refer to the new generation of students who have grown up surrounded by technology. His companion papers spurred large amounts of research, debating changes that are required to curricula and pedagogical models to cater for the changes in the student population. This article…

  15. Application of symbolic/numeric matrix solution techniques to the NASTRAN program

    NASA Technical Reports Server (NTRS)

    Buturla, E. M.; Burroughs, S. H.

    1977-01-01

    The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
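
    Modern sparse solvers capture the same saving. SciPy does not expose the symbolic phase separately, but factorizing the matrix once and reusing the factorization for repeated numeric solves (as in a nonlinear iteration) is a reasonable present-day sketch of the idea; the matrix and loop below are illustrative assumptions, not the NASTRAN implementation.

    ```python
    # Hedged sketch: factor a sparse "stiffness-like" matrix once, then reuse
    # the factorization for many cheap numeric solves.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    # hypothetical tridiagonal stand-in for a finite element stiffness matrix
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

    lu = spla.splu(A)                 # ordering + factorization done once
    for step in range(5):             # e.g. load steps of a nonlinear problem
        b = np.random.rand(n)
        x = lu.solve(b)               # repeated numeric solves are cheap
        print(step, np.linalg.norm(A @ x - b))
    ```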

  16. Yield comparisons from floating blade and fixed arbor gang ripsaws when processing boards before and after crook removal

    Treesearch

    Charles J. Gatchell; Charles J. Gatchell

    1991-01-01

    Gang-ripping technology that uses a movable (floating) outer blade to eliminate unusable edgings is described, including new terminology for identifying preferred and minimally acceptable strip widths. Because of the large amount of salvage required to achieve total yields, floating blade gang ripping is not recommended for boards with crook. With crook removed by...

  17. GPU applications for data processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch; Aleksandrov, Andrey; INFN sezione di Napoli, I-80125 Napoli

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. The new approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  18. East Europe Report, Economic and Industrial Affairs

    DTIC Science & Technology

    1984-07-03

    Because it is a proven fact that a certain amount of protein is a biological necessity for broiler chickens. And if the ground soya, meat- or fish flour...of conventional energy while largely meeting the requirement for digestible protein, which is still deficient today, especially in livestock feed...adding various ingredients that stimulate the digestive processes. Differentiated and much more effective measures are also needed to exploit by

  19. Effect of broiler litter ash and flue gas desulfurization gypsum on yield, calcium, phosphorus, copper, iron, manganese and zinc uptake by peanut

    USDA-ARS?s Scientific Manuscript database

    Peanut (Arachis hyogaea) is an important oil seed crop that is grown as a principle source of edible oil and vegetable protein. Over 1.6 million acres of peanuts were planted in the United States during 2012. Peanuts require large amounts of calcium (Ca) and phosphorus (P). In 2010, over 10 milli...

  20. Evaluating Maintenance Performance: A Video Approach to Symbolic Testing of Electronics Maintenance Tasks. Final Report.

    ERIC Educational Resources Information Center

    Shriver, Edgar L.; And Others

    This volume reports an effort to use the video media as an approach for the preparation of a battery of symbolic tests that would be empirically valid substitutes for criterion referenced Job Task Performance Tests. The graphic symbolic tests require the storage of a large amount of pictorial information which must be searched rapidly for display.…

  1. A Specific Role for Hippocampal Mossy Fiber's Zinc in Rapid Storage of Emotional Memories

    ERIC Educational Resources Information Center

    Ceccom, Johnatan; Halley, Hélène; Daumas, Stéphanie; Lassalle, Jean Michel

    2014-01-01

    We investigated the specific role of zinc present in large amounts in the synaptic vesicles of mossy fibers and coreleased with glutamate in the CA3 region. In previous studies, we have shown that blockade of zinc after release has no effect on the consolidation of spatial learning, while zinc is required for the consolidation of contextual fear…

  2. Job Requirements for Marketing Graduates: Are There Differences in the Knowledge, Skills, and Personal Attributes Needed for Different Salary Levels?

    ERIC Educational Resources Information Center

    Schlee, Regina Pefanis; Karns, Gary L.

    2017-01-01

    Several studies in the business press and in the marketing literature point to a "transformation" of marketing caused by the availability of large amounts of data for marketing analysis and planning. However, the effects of the integration of technology on entry-level jobs for marketing graduates have not been fully explored. This study…

  3. Relationship between food waste, diet quality, and environmental sustainability

    PubMed Central

    Niles, Meredith T.; Neher, Deborah A.; Roy, Eric D.; Tichenor, Nicole E.; Jahns, Lisa

    2018-01-01

    Improving diet quality while simultaneously reducing environmental impact is a critical focus globally. Metrics linking diet quality and sustainability have typically focused on a limited suite of indicators, and have not included food waste. To address this important research gap, we examine the relationship between food waste, diet quality, nutrient waste, and multiple measures of sustainability: use of cropland, irrigation water, pesticides, and fertilizers. Data on food intake, food waste, and application rates of agricultural amendments were collected from diverse US government sources. Diet quality was assessed using the Healthy Eating Index-2015. A biophysical simulation model was used to estimate the amount of cropland associated with wasted food. This analysis finds that US consumers wasted 422 g of food per person daily, with 30 million acres of cropland used to produce this food every year. This accounts for 30% of daily calories available for consumption, one-quarter of daily food (by weight) available for consumption, and 7% of annual cropland acreage. Higher quality diets were associated with greater amounts of food waste and greater amounts of wasted irrigation water and pesticides, but less cropland waste. This is largely due to fruits and vegetables, which are health-promoting and require small amounts of cropland, but require substantial amounts of agricultural inputs. These results suggest that simultaneous efforts to improve diet quality and reduce food waste are necessary. Increasing consumers' knowledge about how to prepare and store fruits and vegetables will be one of the practical solutions to reducing food waste. PMID:29668732

  4. Implications of the Large O VI Columns around Low-redshift L ∗ Galaxies

    NASA Astrophysics Data System (ADS)

    McQuinn, Matthew; Werk, Jessica K.

    2018-01-01

    Observations reveal massive amounts of O VI around star-forming L* galaxies, with covering fractions near unity extending to the host halo's virial radius. This O VI absorption is typically kinematically centered upon photoionized gas, with line widths that are suprathermal and kinematically offset from the galaxy. We discuss various scenarios and whether they could result in the observed phenomenology (cooling gas flows, boundary layers, shocks, virialized gas). If collisionally ionized, as we argue is most probable, the O VI observations require that the circumgalactic medium (CGM) of L* galaxies holds nearly all of the associated baryons within a virial radius (∼10^11 M_⊙) and hosts massive flows of cooling gas with ≈30 [nT/30 cm^−3 K] M_⊙ yr^−1, which must be largely prevented from accreting onto the host galaxy. Cooling and feedback energetics considerations require 10 < nT < 100 cm^−3 K for the warm and hot halo gases. We argue that virialized gas, boundary layers, hot winds, and shocks are unlikely to directly account for the bulk of the O VI. Furthermore, we show that there is a robust constraint on the number density of many of the photoionized ∼10^4 K absorption systems that yields upper bounds in the range n < (0.1-3) × 10^−3 (Z/0.3) cm^−3, suggesting that the dominant pressure in some photoionized clouds is nonthermal. This constraint is in accordance with the low densities inferred from more complex photoionization modeling. The large amount of cooling gas that is inferred could re-form these clouds in a fraction of the halo dynamical time, and it requires much of the feedback energy available from supernovae to be dissipated in the CGM.

  5. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents

    PubMed Central

    Conesa-Muñoz, Jesús; Valente, João; del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela

    2016-01-01

    Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability to reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, nowadays they do not have the capability to act. On the other hand, ground vehicles rely on enough power to perform the intervention but exhibit more mobility constraints. This paper proposes a multi-robot sense-act system, composed of aerial and ground vehicles. This combination allows performing autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system as well as proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved to be suitable to reduce the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law due to safety reasons. PMID:27517934

  6. The economics of clinical genetics services. III. Cognitive genetics services are not self-supporting.

    PubMed Central

    Bernhardt, B A; Pyeritz, R E

    1989-01-01

    We investigated the amount of time required to provide, and the charges and reimbursement for, cognitive genetics services in four clinical settings. In a prenatal diagnostic center, a mean of 3 h/couple was required to provide counseling and follow-up services with a mean charge of $30/h and collection of $27/h. Only 49% of personnel costs were covered by income from patient charges. In a genetics clinic in a private specialty hospital, 5.5 and 2.75 h were required to provide cognitive services to each new and follow-up family, respectively. The mean charge for each new family was $25/h and for follow-up families $13/h. The amount collected was less than 25% of that charged. In a pediatric genetics clinic in a large teaching hospital, new families required a mean of 4 h and were charged $28/h; follow-up families also required a mean of 4 h, and were charged $15/h. Only 55% of the amounts charged were collected. Income from patient charges covered only 69% of personnel costs. In a genetics outreach setting, 5 and 4.5 h were required to serve new and follow-up families, respectively. Charges were $25/h and $12/h, and no monies were collected. In all clinic settings, less than one-half of the total service time was that of a physician, and more than one-half of the service time occurred before and after the clinic visit. In no clinic setting were cognitive genetics services self-supporting. Means to improve the financial base of cognitive genetics services include improving collections, increasing charges, developing fee schedules, providing services more efficiently, and seeking state, federal, and foundation support for services. PMID:2912071

  7. Effect of the amount of Na2SO4 on the high temperature corrosion of Udimet-700

    NASA Technical Reports Server (NTRS)

    Misra, A. K.; Kohl, F. J.

    1983-01-01

    The corrosion of Udimet-700, coated with different doses of Na2SO4, was studied in an isothermal thermogravimetric test in the temperature range 900 to 950 C. The weight gain curve is characterized by five distinct stages: an initial period of linear corrosion; an induction period; a period of accelerated corrosion; a period of decelerating corrosion; and a period of parabolic oxidation. The time required for the failure of the alloy increases with an increase in the amount of Na2SO4, reaches a peak and then decreases with further increase in the amount of Na2SO4. For low and intermediate doses (0.3 to 2.0 mg/sq cm), the catastrophic failure of the material occurs by the formation of Na2MoO4 and interaction of the liquid Na2MoO4 with the alloy. For heavy doses, the degradation of the material is due to the formation of large amounts of sulfides.

  8. Optimization of cDNA-AFLP experiments using genomic sequence data.

    PubMed

    Kivioja, Teemu; Arvas, Mikko; Saloheimo, Markku; Penttilä, Merja; Ukkonen, Esko

    2005-06-01

    cDNA amplified fragment length polymorphism (cDNA-AFLP) is one of the few genome-wide level expression profiling methods capable of finding genes that have not yet been cloned or even predicted from sequence but have interesting expression patterns under the studied conditions. In cDNA-AFLP, a complex cDNA mixture is divided into small subsets using restriction enzymes and selective PCR. A large cDNA-AFLP experiment can require a substantial amount of resources, such as hundreds of PCR amplifications and gel electrophoresis runs, followed by manual cutting of a large number of bands from the gels. Our aim was to test whether this workload can be reduced by rational design of the experiment. We used the available genomic sequence information to optimize cDNA-AFLP experiments beforehand so that as many transcripts as possible could be profiled with a given amount of resources. Optimization of the selection of both restriction enzymes and selective primers for cDNA-AFLP experiments has not been performed previously. The in silico tests performed suggest that substantial amounts of resources can be saved by the optimization of cDNA-AFLP experiments.
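
    At its core, the design problem above is a budgeted coverage problem: choose the enzyme/primer combinations whose predicted fragment sets together profile as many transcripts as possible within a fixed number of PCR and gel runs. A minimal greedy sketch, with hypothetical combinations and transcript sets rather than the authors' tool, is shown below.

    ```python
    # Hedged greedy coverage sketch for enzyme/primer selection.
    def greedy_selection(options, budget):
        """options: dict mapping a primer/enzyme combination to the set of
        transcripts it would profile; budget: number of affordable runs."""
        covered, chosen = set(), []
        for _ in range(budget):
            best = max(options, key=lambda o: len(options[o] - covered))
            if not options[best] - covered:
                break                      # nothing new can be covered
            chosen.append(best)
            covered |= options[best]
        return chosen, covered

    options = {  # hypothetical predicted coverage per combination
        "EcoRI+AC/MseI+CG": {"t1", "t2", "t3"},
        "EcoRI+AG/MseI+CT": {"t3", "t4"},
        "PstI+AA/MseI+GC":  {"t4", "t5", "t6"},
    }
    print(greedy_selection(options, budget=2))
    ```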

  9. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic transit-time values of one suitable shale formation are calibrated against maximum-depth-of-burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.
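
    The calibration step lends itself to a simple regression. The sketch below uses hypothetical well data: maximum burial depths obtained from sonic-log extrapolation in a few calibration wells are regressed against the shale's sonic value, and the fit is then applied to other wells, with erosion taken as maximum burial depth minus present depth.

    ```python
    # Hedged calibration sketch (all numbers hypothetical, not the paper's data).
    import numpy as np

    sonic = np.array([95.0, 105.0, 115.0, 125.0])        # us/ft in the key shale
    max_burial = np.array([3200., 2800., 2400., 2000.])  # m, from log extrapolation
    slope, intercept = np.polyfit(sonic, max_burial, 1)  # linear calibration

    def eroded_thickness(sonic_value, present_depth):
        """Estimated erosion = regressed maximum burial depth - present depth."""
        return slope * sonic_value + intercept - present_depth

    print(eroded_thickness(110.0, present_depth=1500.0))  # metres eroded
    ```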

  10. Essential amino acids: master regulators of nutrition and environmental footprint?

    PubMed Central

    Tessari, Paolo; Lante, Anna; Mosca, Giuliano

    2016-01-01

    The environmental footprint of animal food production is considered several-fold greater than that of crop cultivation. Therefore, the choice between animal and vegetarian diets may have a relevant environmental impact. In such comparisons, however, an often neglected issue is the nutritional value of foods. Previous estimates of nutrients' environmental footprint had predominantly been based on either food raw weight or caloric content, not on human requirements. Essential amino acids (EAAs) are key parameters in food quality assessment. We re-evaluated here the environmental footprint (expressed both as land use for production and as greenhouse gas emission (GHGE)) of some animal and vegetal foods, titrated to provide EAA amounts according to human requirements. Production of high-quality animal proteins, in amounts sufficient to match the Recommended Daily Allowances of all the EAAs, would require a land use and a GHGE approximately equal to, greater or smaller than (by only ±1-fold), that necessary to produce vegetal proteins, except for soybeans, which exhibited the smallest footprint. This new analysis downsizes the common concept of a large advantage, in respect to environmental footprint, of crop vs. animal food production when human requirements of EAAs are used for reference. PMID:27221394

  11. Combination of nutrients in a mammalian cell culture medium kills cryptococci.

    PubMed

    Granger, Donald L; Call, Donna M

    2018-06-06

    We found that a large inoculum of Cryptococcus gattii cells, when plated on Dulbecco's modified Eagle's medium (DMEM) incorporated into agar, died within a few hours provided that the DMEM agar plates had been stored in darkness for approximately 3 days after preparation. Standard conditions were developed for quantification of killing. The medium lost its fungicidal activity when exposed to visible light of wavelength ∼400 nm. The amount of energy required was estimated at 5.8 × 10^4 joules @ 550 nm. Liquid DMEM conditioned by incubation over DMEM agar plates stored in darkness was fungicidal. We found that fungicidal activity was heat-stable (100°C). Dialysis tubing with MWCO < 100 Daltons retained fungicidal activity. Neutral pH was required. Strains of Cryptococcus were uniformly sensitive, but some Candida species were resistant. Components of DMEM required for killing were pyridoxal and cystine. Micromolar amounts of iron shortened the time required for DMEM agar plates to become fungicidal when stored in the dark. Organic and inorganic compounds bearing reduced sulfur atoms at millimolar concentrations inhibited fungicidal activity. Our results point to a light-sensitive antifungal compound formed by reaction of pyridoxal with cystine, possibly by Schiff base formation.

  12. Essential amino acids: master regulators of nutrition and environmental footprint?

    PubMed

    Tessari, Paolo; Lante, Anna; Mosca, Giuliano

    2016-05-25

    The environmental footprint of animal food production is considered several-fold greater than that of crop cultivation. Therefore, the choice between animal and vegetarian diets may have a relevant environmental impact. In such comparisons, however, an often neglected issue is the nutritional value of foods. Previous estimates of nutrients' environmental footprint had predominantly been based on either food raw weight or caloric content, not on human requirements. Essential amino acids (EAAs) are key parameters in food quality assessment. We re-evaluated here the environmental footprint (expressed both as land use for production and as greenhouse gas emission (GHGE)) of some animal and vegetal foods, titrated to provide EAA amounts according to human requirements. Production of high-quality animal proteins, in amounts sufficient to match the Recommended Daily Allowances of all the EAAs, would require a land use and a GHGE approximately equal to, greater or smaller than (by only ±1-fold), that necessary to produce vegetal proteins, except for soybeans, which exhibited the smallest footprint. This new analysis downsizes the common concept of a large advantage, in respect to environmental footprint, of crop vs. animal food production when human requirements of EAAs are used for reference.

  13. Augmenting Conceptual Design Trajectory Tradespace Exploration with Graph Theory

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen

    2016-01-01

    Within conceptual design, changes occur rapidly due to a combination of uncertainty and shifting requirements. To stay relevant in this fluid time, trade studies must also be performed rapidly. In order to drive down analysis time while improving the information gained by these studies, surrogate models can be created to represent the complex output of a tool or tools within a specified tradespace. In order to create such a model, however, a large amount of data must be collected in a short amount of time. Under these constraints, the historical approach of relying on subject matter experts to generate the required data is schedule-infeasible, but by implementing automation and distributed analysis the required data can be generated in a fraction of the time. Previous work focused on setting up a tool called multiPOST capable of orchestrating many simultaneous runs of an analysis tool and assessing these automated analyses using heuristics gleaned from the best practices of current subject matter experts. In this update to the previous work, elements of graph theory are included to further drive down analysis time by leveraging data previously gathered. The approach is shown to outperform the previous method in both the time required and the quantity and quality of data produced.

  14. Forming artificial soils from waste materials for mine site rehabilitation

    NASA Astrophysics Data System (ADS)

    Yellishetty, Mohan; Wong, Vanessa; Taylor, Michael; Li, Johnson

    2014-05-01

    Surface mining activities often produce large volumes of solid waste and invariably require the removal of significant quantities of waste rock (overburden). As mines expand, larger volumes of waste rock need to be moved, which also requires extensive areas for safe disposal and containment. The erosion of these dumps may result in landform instability, which in turn may result in exposure of contaminants such as trace metals, elevated sediment delivery to adjacent waterways, and the subsequent degradation of downstream water quality. The management of solid waste materials from industrial operations is also a key component of a sustainable economy. For example, in addition to overburden, coal mines produce large amounts of waste in the form of fly ash, while sewage treatment plants require disposal of large amounts of compost. Similarly, paper mills produce large volumes of alkaline rejected wood-chip waste, which is usually disposed of in landfill. These materials therefore present a challenge in their use and re-use in the rehabilitation of mine sites, and they provide a number of opportunities for innovative waste disposal. The combination of solid wastes sourced from mines, which are frequently nutrient-poor and acidic, with nutrient-rich composted material produced from sewage treatment and alkaline wood-chip waste has the potential to yield a soil suitable for mine rehabilitation and successful seed germination and plant growth. This paper presents findings from two pilot projects which investigated the potential of artificial soils to support plant growth for mine site rehabilitation. We found that pH increased in all the artificial soil mixtures and that they were able to support plant establishment. Plant growth was greatest in those soils with the greatest proportion of compost due to the higher nutrient content. These pot trials suggest that the use of different waste streams to form an artificial soil can potentially be applied in mine site rehabilitation where there is a nutrient-rich source of waste.

  15. Enabling Graph Appliance for Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun

    2015-01-01

    In recent years, there has been a huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be huge, ranging from hundreds to billions, each read varying in size (number of nucleotides). Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real-world Ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.
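
    The iterative SPARQL idea can be sketched in a few lines; this is an assumption-laden toy (rdflib on a tiny read set standing in for Urika-GD and real genome data), not the authors' implementation: each (k-1)-mer overlap becomes an RDF triple, and a path is extended by re-querying the graph while tracking used edges.

      # Toy sketch: de Bruijn overlaps as RDF triples, path found by repeated SPARQL queries.
      from rdflib import Graph, Namespace

      EX = Namespace("http://example.org/debruijn/")
      g = Graph()
      reads, k = ["ATGGC", "TGGCA", "GGCAT"], 4
      for read in reads:
          for i in range(len(read) - k + 1):
              kmer = read[i:i + k]
              g.add((EX[kmer[:-1]], EX.overlaps, EX[kmer[1:]]))   # (k-1)-mer overlap edge

      q = "SELECT ?nxt WHERE { ?cur <http://example.org/debruijn/overlaps> ?nxt }"
      cur, used, path = EX["ATG"], set(), ["ATG"]
      while True:
          step = None
          for row in g.query(q, initBindings={"cur": cur}):       # successors of the current node
              if (cur, row.nxt) not in used:
                  step = row.nxt
                  break
          if step is None:
              break
          used.add((cur, step))
          path.append(str(step).rsplit("/", 1)[-1])
          cur = step
      print(path)   # ['ATG', 'TGG', 'GGC', 'GCA', 'CAT'] for these toy reads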

  16. Finite Element Analysis and Optimization of Flexure Bearing for Linear Motor Compressor

    NASA Astrophysics Data System (ADS)

    Khot, Maruti; Gawali, Bajirao

    Nowadays linear motor compressors are commonly used in miniature cryocoolers instead of rotary compressors, because rotary compressors apply large radial forces to the piston, which provide no useful work, cause a large amount of wear and usually require lubrication. Recent trends favour flexure-supported configurations for long life. The present work aims at designing and geometrically optimizing flexure bearings using finite element analysis and at developing design charts for selection purposes. The work also covers the manufacturing of flexures using different materials and the experimental validation of the finite element analysis results.

  17. Microwave evidence for large-scale changes associated with a filament eruption

    NASA Technical Reports Server (NTRS)

    Kundu, M. R.; Schmahl, E. J.; Fu, Q.-J.

    1989-01-01

    VLA observations at 6 and 20 cm wavelengths taken on August 3, 1985 are presented, showing an eruptive filament event in which microwave emission originated in two widely separated regions during the disintegration of the filament. The amount of heat required for the enhancement is estimated. Near-simultaneous changes in intensity and polarization were observed in the western components of the northern and southern regions. It is suggested that large-scale magnetic interconnections permitted the two regions to respond similarly to an external energy or mass source involved in the disruption of the filament.

  18. Production of Bacteriophages by Listeria Cells Entrapped in Organic Polymers.

    PubMed

    Roy, Brigitte; Philippe, Cécile; Loessner, Martin J; Goulet, Jacques; Moineau, Sylvain

    2018-06-13

    Applications for bacteriophages as antimicrobial agents are increasing. The industrial use of these bacterial viruses requires the production of large amounts of suitable strictly lytic phages, particularly for food and agricultural applications. This work describes a new approach for phage production. Phages H387 (Siphoviridae) and A511 (Myoviridae) were propagated separately using Listeria ivanovii host cells immobilised in alginate beads. The same batch of alginate beads could be used for four successive and efficient phage productions. This technique enables the production of large volumes of high-titer phage lysates in continuous or semi-continuous (fed-batch) cultures.

  19. Parallelization and visual analysis of multidimensional fields: Application to ozone production, destruction, and transport in three dimensions

    NASA Technical Reports Server (NTRS)

    Schwan, Karsten

    1994-01-01

    Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.

  20. Water and Land Limitations to Future Agricultural Production in the Middle East

    NASA Astrophysics Data System (ADS)

    Koch, J. A. M.; Wimmer, F.; Schaldach, R.

    2015-12-01

    Countries in the Middle East use a large fraction of their scarce water resources to produce cash crops, such as fruit and vegetables, for international markets. At the same time, these countries import large amounts of staple crops, such as cereals, required to meet the nutritional demand of their populations. This makes food security in the Middle East heavily dependent on world market prices for staple crops. Under these preconditions, increasing food demand due to population growth, urban expansion on fertile farmlands, and detrimental effects of a changing climate on the production of agricultural commodities present major challenges to countries in the Middle East that try to improve food security by increasing their self-sufficiency rate of staple crops. We applied the spatio-temporal land-use change model LandSHIFT.JR to simulate how an expansion of urban areas may affect the production of agricultural commodities in Jordan. We furthermore evaluated how climate change and changes in socio-economic conditions may influence crop production. The focus of our analysis was on potential future irrigated and rainfed production (crop yield and area demand) of fruit, vegetables, and cereals. Our simulation results show that the expansion of urban areas and the resulting displacement of agricultural areas result in only a slight decrease in crop yields. This leads to almost no additional irrigation water requirements due to the relocation of agricultural areas, i.e. there is the same amount of "crop per drop". However, taking into account projected changes in socio-economic and climate conditions, a large volume of water would be required for cereal production in order to safeguard current self-sufficiency rates for staple crops. Irrigation water requirements are expected to double by 2025 and to triple by 2050. Irrigated crop yields are projected to decrease by about 25%, whereas no decrease in rainfed crop yields is to be expected.

  1. Large temporal scale and capacity subsurface bulk energy storage with CO2

    NASA Astrophysics Data System (ADS)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency and deployment options, as well as its advantages and disadvantages compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storage of excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  2. Optimization of a Monte Carlo Model of the Transient Reactor Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kristin; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    The ultimate goal of modeling and simulation is to obtain reasonable answers to problems that don't have representations which can be easily evaluated, while minimizing the amount of computational resources. With the advances during the last twenty years of large scale computing centers, researchers have had the ability to create a multitude of tools to minimize the number of approximations necessary when modeling a system. The tremendous power of these centers requires the user to possess an immense amount of knowledge to optimize the models for accuracy and efficiency. This paper seeks to evaluate the KENO model of TREAT to optimize calculational efforts.

  3. Development of life cycle water footprints for the production of fuels and chemicals from algae biomass.

    PubMed

    Nogueira Junior, Edson; Kumar, Mayank; Pankratz, Stan; Oyedun, Adetoyese Olajire; Kumar, Amit

    2018-09-01

    This study develops life cycle water footprints for the production of fuels and chemicals via thermochemical conversion of algae biomass. This study is based on two methods of feedstock production - ponds and photobioreactors (PBRs) - and four conversion pathways - fast pyrolysis, hydrothermal liquefaction (HTL), conventional gasification, and hydrothermal gasification (HTG). The results show the high fresh water requirement for algae production and the necessity to recycle harvested water or use alternative water sources. To produce 1 kg of algae through ponds, 1564 L of water are required. When PBRs are used, only 372 L water are required; however, the energy requirements for PBRs are about 30 times higher than for ponds. From a final product perspective, the pathway based on the gasification of algae biomass was the thermochemical conversion method that required the highest amount of water per MJ produced (mainly due to its low hydrogen yield), followed by fast pyrolysis and HTL. On the other hand, HTG has the lowest water footprint, mainly because the large amount of electricity generated as part of the process compensates for the electricity used by the system. Performance in all pathways can be improved through recycling channels. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Hot working behavior of selective laser melted and laser metal deposited Inconel 718

    NASA Astrophysics Data System (ADS)

    Bambach, Markus; Sizova, Irina

    2018-05-01

    The production of nickel-based high-temperature components is of great importance for the transport and energy sectors. Forging of high-temperature alloys often requires expensive dies and multiple forming steps, and it leads to forged parts with tolerances that require machining to create the final shape, producing a large amount of scrap. Additive manufacturing offers the possibility to print the desired shapes directly as net-shape components, requiring only little additional effort in machining. Especially for high-temperature alloys carrying a large amount of energy per unit mass, additive manufacturing could be more energy-efficient than forging if the energy contained in the machining scrap exceeds the energy needed for powder production and laser processing. However, the microstructure and performance of 3D-printed parts will not reach the level of forged material unless further expensive processes such as hot isostatic pressing are used. Using the design freedom and the possibility to locally engineer material, additive manufacturing could be combined with forging operations into novel process chains, offering the possibility to reduce the number of forging steps and to create near-net-shape forgings with desired local properties. Some innovative process chains combining additive manufacturing and forging have been patented recently, but almost no scientific knowledge on the workability of 3D-printed preforms exists. The present study investigates the flow stress and microstructure evolution during hot working of preforms produced by laser powder deposition and selective laser melting (Figure 1) and puts forward a model for the flow stress.

  5. ACToR - Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    There are too many uncharacterized environmental chemicals to test with current in vivo protocols. The aim is to develop predictive in vitro screening assays that can be used to prioritize chemicals for detailed testing. The ToxCast program requires large amounts of data: in vitro assays (mainly generated by the ToxCast program) and in vivo data to develop and validate predictive signatures. ACToR is compiling both sets of data for use in predictive algorithms.

  6. Transport Traffic Analysis for Abusive Infrastructure Characterization

    DTIC Science & Technology

    2012-12-14

    Introduction Abusive traffic abounds on the Internet, in the form of email, malware, vulnerability scanners, worms, denial-of-service, drive-by-downloads, scam ...insight is two-fold. First, attackers have a basic requirement to source large amounts of data, be it denial-of-service, scam-hosting, spam, or other...the network core. This paper explores the power of transport-layer traffic analysis to detect and characterize scam hosting infrastructure, including

  7. Man-Machine Interaction: Operator.

    DTIC Science & Technology

    1984-06-01

    Few people, if any, remember everything they see or hear, but an amazingly large amount of material can be recalled years after it has been acquired...and skill, learning takes time. The time required for the learning process will generally vary with the complexity of the material or task he is

  8. Artificial intelligence applications concepts for the remote sensing and earth science community

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Roelofs, L. H.

    1984-01-01

    The following potential applications of AI to the study of earth science are described: (1) intelligent data management systems; (2) intelligent processing and understanding of spatial data; and (3) automated systems which perform tasks that currently require large amounts of scientists' and engineers' time to complete. An example is provided of how an intelligent information system might operate to support an earth science project.

  9. Isolation of High-Molecular-Weight DNA from Monolayer Cultures of Mammalian Cells Using Proteinase K and Phenol.

    PubMed

    Green, Michael R; Sambrook, Joseph

    2017-07-05

    This procedure is the method of choice for purification of mammalian genomic DNA from monolayer cultures when large amounts of DNA are required, for example, for Southern blotting. Approximately 200 µg of mammalian DNA, 100-150 kb in length, is obtained from 5 × 10⁷ cultured aneuploid cells (e.g., HeLa cells). © 2017 Cold Spring Harbor Laboratory Press.

  10. KSC-388c2096-08

    NASA Image and Video Library

    2000-05-02

    Original photo and caption dated June 22, 1988: "A dwarf wheat variety known as Yecoro Rojo flourishes in KSC's Biomass Production Chamber. Researchers are gathering information on the crop's ability to produce food, water and oxygen, and then remove carbon dioxide. The confined quarters associated with space travel require researchers to focus on smaller plants that yield proportionately large amounts of biomass. This wheat crop takes about 85 days to grow before harvest."

  11. Tools and Data Services from the NASA Earth Satellite Observations for Remote Sensing Commercial Applications

    NASA Technical Reports Server (NTRS)

    Vicente, Gilberto

    2005-01-01

    Several commercial applications of remote sensing data, such as water resources management, environmental monitoring, climate prediction, agriculture, forestry, and preparation for and mitigation of extreme weather events, require access to vast amounts of archived high-quality data, software tools and services for data manipulation and information extraction. These in turn require gaining a detailed understanding of the data's internal structure and physical implementation for data reduction, combination and data product production. This time-consuming task must be undertaken before the core investigation can begin and is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets of different formats, structures, and resolutions.

  12. A Roadmap for HEP Software and Computing R&D for the 2020s

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, Antonio Augusto, Jr; et al.

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

  13. Extra-metabolic energy use and the rise in human hyper-density

    NASA Astrophysics Data System (ADS)

    Burger, Joseph R.; Weinberger, Vanessa P.; Marquet, Pablo A.

    2017-03-01

    Humans, like all organisms, are subject to fundamental biophysical laws. Van Valen predicted that, because of zero-sum dynamics, all populations of all species in a given environment flux the same amount of energy on average. Damuth's 'energetic equivalence rule' supported Van Valen's conjecture by showing a tradeoff between few big animals per area with high individual metabolic rates compared to abundant small species with low energy requirements. We use metabolic scaling theory to compare variation in densities and individual energy use in human societies to other land mammals. We show that hunter-gatherers occurred at densities lower than the average for a mammal of our size. Most modern humans, in contrast, concentrate in large cities at densities up to four orders of magnitude greater than hunter-gatherers, yet consume up to two orders of magnitude more energy per capita. Today, cities across the globe flux greater energy than net primary productivity on a per area basis. This is possible by importing enormous amounts of energy and materials required to sustain hyper-dense, modern humans. The metabolic rift with nature created by modern cities fueled largely by fossil energy poses formidable challenges for establishing a sustainable relationship on a rapidly urbanizing, yet finite planet.

  14. Alternatives for the intermediate recovery of plasmid DNA: performance, economic viability and environmental impact.

    PubMed

    Freitas, Sindelia; Canário, Sónia; Santos, José A L; Prazeres, Duarte M F

    2009-02-01

    Robust cGMP manufacturing is required to produce high-quality plasmid DNA (pDNA). Three established techniques, isopropanol and ammonium sulfate (AS) precipitation (PP), tangential flow filtration (TFF) and aqueous two-phase systems (ATPS) with PEG600/AS, were tested as alternatives to recover pDNA from alkaline lysates. Yield and purity data were used to evaluate the economic and environmental impact of each option. Although pDNA yields ≥90% were always obtained, ATPS delivered the highest HPLC purity (59%), followed by PP (48%) and TFF (18%). However, the ability of ATPS to concentrate pDNA was very poor when compared with PP or TFF. Processes were also implemented by coupling TFF with ATPS or AS-PP. Process simulations indicate that all options require large amounts of water (100-200 tons/kg pDNA) and that the ATPS process uses large amounts of mass separating agents (65 tons/kg pDNA). Estimates indicate that operating costs of the ATPS process are 2.5-fold larger when compared with the PP and TFF processes. The most significant contributions to the costs in the PP, TFF and ATPS processes came from operators (59%), consumables (75%) and raw materials (84%), respectively. The ATPS process presented the highest environmental impact, whereas the impact of the TFF process was negligible.

  15. Extra-metabolic energy use and the rise in human hyper-density.

    PubMed

    Burger, Joseph R; Weinberger, Vanessa P; Marquet, Pablo A

    2017-03-02

    Humans, like all organisms, are subject to fundamental biophysical laws. Van Valen predicted that, because of zero-sum dynamics, all populations of all species in a given environment flux the same amount of energy on average. Damuth's 'energetic equivalence rule' supported Van Valen's conjecture by showing a tradeoff between few big animals per area with high individual metabolic rates compared to abundant small species with low energy requirements. We use metabolic scaling theory to compare variation in densities and individual energy use in human societies to other land mammals. We show that hunter-gatherers occurred at densities lower than the average for a mammal of our size. Most modern humans, in contrast, concentrate in large cities at densities up to four orders of magnitude greater than hunter-gatherers, yet consume up to two orders of magnitude more energy per capita. Today, cities across the globe flux greater energy than net primary productivity on a per area basis. This is possible by importing enormous amounts of energy and materials required to sustain hyper-dense, modern humans. The metabolic rift with nature created by modern cities fueled largely by fossil energy poses formidable challenges for establishing a sustainable relationship on a rapidly urbanizing, yet finite planet.

  16. External Tank Program Legacy of Success

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; Pilet, Jeff

    2010-01-01

    I. Goal: a) Extensive TPS damage caused by extreme hail storm. b) Repair plan required to restore TPS and minimize program manifest impacts. II. Challenges: a) Skeptical technical community - concerned about interactions of damage with known/unknown failure modes. b) Schedule pressure to accommodate ISS program - next tank still at MAF. c) Limited ET resources. III. How Did We Do It?: a) Developed unique engineering requirements and tooling to minimize repairs. b) Performed a large amount of performance testing to demonstrate understanding of repairs and residual conditions. c) Effectively communicated results to technical community and management to instill confidence in expected performance.

  17. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.

  18. Conically scanned lidar telescope using holographic optical elements

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary K.; Wilkerson, Thomas D.

    1992-01-01

    Holographic optical elements (HOE) using volume phase holograms make possible a new class of lightweight scanning telescopes having advantages for lidar remote sensing instruments. So far, the only application of HOE's to lidar has been a non-scanning receiver for a laser range finder. We introduce a large aperture, narrow field of view (FOV) telescope used in a conical scanning configuration, having a much smaller rotating mass than in conventional designs. Typically, lidars employ a large aperture collector and require a narrow FOV to limit the amount of skylight background. Focal plane techniques are not good approaches to scanning because they require a large FOV within which to scan a smaller FOV mirror or detector array. Thus, scanning lidar systems have either used a large flat scanning mirror at which the receiver telescope is pointed, or the entire telescope is steered. We present a concept for a conically scanned lidar telescope in which the only moving part is the HOE which serves as the primary collecting optic. We also describe methods by which a multiplexed HOE can be used simultaneously as a dichroic beamsplitter.

  19. System Learning via Exploratory Data Analysis: Seeing Both the Forest and the Trees

    NASA Astrophysics Data System (ADS)

    Habash Krause, L.

    2014-12-01

    As the amount of observational Earth and Space Science data grows, so does the need for learning and employing data analysis techniques that can extract meaningful information from those data. Space-based and ground-based data sources from all over the world are used to inform Earth and Space environment models. However, with such a large amount of data comes a need to organize those data in a way such that trends within the data are easily discernible. This can be tricky due to the interaction between physical processes that lead to partial correlation of variables or multiple interacting sources of causality. With the suite of Exploratory Data Analysis (EDA) data mining codes available at MSFC, we have the capability to analyze large, complex data sets and quantitatively identify fundamentally independent effects from consequential or derived effects. We have used these techniques to examine the accuracy of ionospheric climate models with respect to trends in ionospheric parameters and space weather effects. In particular, these codes have been used to 1) Provide summary "at-a-glance" surveys of large data sets through categorization and/or evolution over time to identify trends, distribution shapes, and outliers, 2) Discern the underlying "latent" variables which share common sources of causality, and 3) Establish a new set of basis vectors by computing Empirical Orthogonal Functions (EOFs) which represent the maximum amount of variance for each principal component. Some of these techniques are easily implemented in the classroom using standard MATLAB functions, some of the more advanced applications require the statistical toolbox, and applications to unique situations require more sophisticated levels of programming. This paper will present an overview of the range of tools available and how they might be used for a variety of time series Earth and Space Science data sets. Examples of feature recognition from both 1D and 2D (e.g. imagery) time series data sets will be presented.
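
    The EOF computation mentioned in item 3 reduces to a singular value decomposition of the mean-removed data matrix; the sketch below uses numpy on synthetic data, a stand-in for the ionospheric observations and for the MATLAB toolbox routines referenced in the abstract.

      # EOF sketch: rows are time samples, columns are spatial grid points (synthetic data).
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 10.0, 500)
      mode_a = np.sin(2 * np.pi * 0.5 * t)[:, None] * rng.standard_normal((1, 40))
      mode_b = np.cos(2 * np.pi * 0.1 * t)[:, None] * rng.standard_normal((1, 40))
      data = mode_a + 0.5 * mode_b + 0.1 * rng.standard_normal((500, 40))

      anomaly = data - data.mean(axis=0)             # remove the time mean at each grid point
      U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

      eofs = Vt                                      # spatial patterns, one per row
      pcs = U * s                                    # principal-component time series
      explained = s**2 / np.sum(s**2)                # fraction of variance captured by each EOF
      print(explained[:3])                           # the two constructed modes dominate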

  20. Exploration Planetary Surface Structural Systems: Design Requirements and Compliance

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.

    2011-01-01

    The Lunar Surface Systems Project developed system concepts that would be necessary to establish and maintain a permanent human presence on the Lunar surface. A variety of specific system implementations were generated as a part of the scenarios, some level of system definition was completed, and masses estimated for each system. Because the architecture studies generally spawned a large number of system concepts and the studies were executed in a short amount of time, the resulting system definitions had very low design fidelity. This paper describes the development sequence required to field a particular structural system: 1) Define Requirements, 2) Develop the Design and 3) Demonstrate Compliance of the Design to all Requirements. This paper also outlines and describes in detail the information and data that are required to establish structural design requirements and outlines the information that would comprise a planetary surface system Structures Requirements document.

  1. Facilitating access to information in large documents with an intelligent hypertext system

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1993-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.

  2. A Primer on Infectious Disease Bacterial Genomics

    PubMed Central

    Petkau, Aaron; Knox, Natalie; Graham, Morag; Van Domselaar, Gary

    2016-01-01

    SUMMARY The number of large-scale genomics projects is increasing due to the availability of affordable high-throughput sequencing (HTS) technologies. The use of HTS for bacterial infectious disease research is attractive because one whole-genome sequencing (WGS) run can replace multiple assays for bacterial typing, molecular epidemiology investigations, and more in-depth pathogenomic studies. The computational resources and bioinformatics expertise required to accommodate and analyze the large amounts of data pose new challenges for researchers embarking on genomics projects for the first time. Here, we present a comprehensive overview of a bacterial genomics project from beginning to end, with a particular focus on the planning and computational requirements for HTS data, and provide a general understanding of the analytical concepts needed to develop a workflow that will meet the objectives and goals of HTS projects. PMID:28590251

  3. Vapor Compression and Thermoelectric Heat Pumps for a Cascade Distillation Subsystem: Design and Experiment

    NASA Technical Reports Server (NTRS)

    Erickson, Lisa R.; Ungar, Eugene K.

    2012-01-01

    Humans on a spacecraft require significant amounts of water for drinking, food, hydration, and hygiene. Maximizing the reuse of wastewater while minimizing the use of consumables is critical for long duration space exploration. One of the more promising consumable-free methods of reclaiming wastewater is the distillation/condensation process used in the Cascade Distillation Subsystem (CDS). The CDS heats wastewater to the point of vaporization then condenses and cools the resulting water vapor. The CDS wastewater flow requires heating for evaporation and the product water flow requires cooling for condensation. Performing the heating and cooling processes separately would require two separate units, each of which would demand large amounts of electrical power. Mass, volume, and power efficiencies can be obtained by heating the wastewater and cooling the condensate in a single heat pump unit. The present work describes and compares two competing heat pump methodologies that meet the needs of the CDS: 1) a series of mini compressor vapor compression cycles and 2) a thermoelectric heat exchanger. In the paper, the CDS system level requirements are outlined, the designs of the two heat pumps are described in detail, and the results of heat pump analysis and performance tests are provided. The mass, volume, and power requirement for each heat pump option is compared and the advantages and disadvantages of each system are listed.

  4. A case for automated tape in clinical imaging.

    PubMed

    Bookman, G; Baune, D

    1998-08-01

    Electronic archiving of radiology images over many years will require many terabytes of storage with a need for rapid retrieval of these images. As more large PACS installations are deployed and implemented, a data crisis occurs. Storing this large amount of data using the traditional method of optical jukeboxes or online disk alone becomes an unworkable solution. The amount of floor space, the number of optical jukeboxes, and the off-line shelf storage required to store the images become unmanageable. With the recent advances in tape and tape drives, the use of tape for long-term storage of PACS data has become the preferred alternative. A PACS system consisting of a centrally managed system of RAID disk, software and, at the heart of the system, tape presents a solution that for the first time solves the problems of multi-modality high-end PACS, non-DICOM image, electronic medical record and ADT data storage. This paper will examine the installation of the University of Utah Department of Radiology PACS system and the integration of an automated tape archive. The tape archive is also capable of storing data other than traditional PACS data. The implementation of an automated data archive to serve the many other needs of a large hospital will also be discussed. This will include the integration of a filmless cardiology department and the backup/archival needs of a traditional MIS department. The need for high bandwidth to tape with a large RAID cache will be examined, as well as how, with an interface to a RIS pre-fetch engine, tape can be a superior solution to optical platters or other archival solutions. The data management software will be discussed in detail. The performance and cost of RAID disk cache and automated tape compared to a solution that includes optical will be examined.

  5. Composing Data Parallel Code for a SPARQL Graph Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
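
    The partition-scan-and-join strategy that such generated code relies on can be illustrated conceptually; the sketch below uses Python multiprocessing on an in-memory triple list purely as a stand-in for the OpenMP code the tool emits, and the data and query pattern are invented.

      # Conceptual sketch: match a two-triple SPARQL-style pattern (?x knows ?y . ?y livesIn ?c)
      # by scanning graph partitions in parallel, then joining on the shared variable.
      from multiprocessing import Pool

      triples = [
          ("alice", "knows", "bob"), ("bob", "knows", "carol"),
          ("carol", "livesIn", "rome"), ("bob", "livesIn", "paris"),
      ]

      def match_knows(chunk):
          return [(s, o) for s, p, o in chunk if p == "knows"]

      def match_lives(chunk):
          return [(s, o) for s, p, o in chunk if p == "livesIn"]

      if __name__ == "__main__":
          chunks = [triples[:2], triples[2:]]
          with Pool(2) as pool:
              knows = sum(pool.map(match_knows, chunks), [])
              lives = sum(pool.map(match_lives, chunks), [])
          results = [(x, y, c) for x, y in knows for y2, c in lives if y == y2]
          print(results)   # [('alice', 'bob', 'paris'), ('bob', 'carol', 'rome')]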

  6. A biomedical information system for retrieval and manipulation of NHANES data.

    PubMed

    Mukherjee, Sukrit; Martins, David; Norris, Keith C; Jenders, Robert A

    2013-01-01

    The retrieval and manipulation of data from large public databases like the U.S. National Health and Nutrition Examination Survey (NHANES) may require sophisticated statistical software and significant expertise that may be unavailable in the university setting. In response, we have developed the Data Retrieval And Manipulation System (DReAMS), an automated information system to handle all processes of data extraction and cleaning and then join different subsets to produce analysis-ready output. The system is a browser-based data warehouse application in which the input data from flat files or operational systems are aggregated in a structured way so that the desired data can be read, recoded, queried and extracted efficiently. The current pilot implementation of the system provides access to a limited portion of the NHANES database. We plan to increase the amount of data available through the system in the near future and to extend the techniques to other large databases from the CDU archive, with a current holding of about 53 databases.
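
    The extract-recode-join step that DReAMS automates can be approximated by hand with pandas; the file names, variables, and recoding below are illustrative assumptions (NHANES distributes its files in SAS transport format), not the DReAMS implementation.

      # Sketch: read two NHANES transport files, keep a few variables, join on SEQN, recode.
      import pandas as pd

      demo = pd.read_sas("DEMO_J.XPT", format="xport")    # demographics file (example name)
      bp = pd.read_sas("BPX_J.XPT", format="xport")       # blood-pressure exam file (example name)

      merged = demo[["SEQN", "RIAGENDR", "RIDAGEYR"]].merge(
          bp[["SEQN", "BPXSY1", "BPXDI1"]], on="SEQN", how="inner"
      )
      merged["GENDER"] = merged["RIAGENDR"].map({1: "male", 2: "female"})
      adults = merged[merged["RIDAGEYR"] >= 20]           # analysis-ready subset
      print(adults.describe())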

  7. An Efficient Method for the Isolation of Highly Purified RNA from Seeds for Use in Quantitative Transcriptome Analysis.

    PubMed

    Kanai, Masatake; Mano, Shoji; Nishimura, Mikio

    2017-01-11

    Plant seeds accumulate large amounts of storage reserves comprising biodegradable organic matter. Humans rely on seed storage reserves for food and as industrial materials. Gene expression profiles are powerful tools for investigating metabolic regulation in plant cells. Therefore, detailed, accurate gene expression profiles during seed development are required for crop breeding. Acquiring highly purified RNA is essential for producing these profiles. Efficient methods are needed to isolate highly purified RNA from seeds. Here, we describe a method for isolating RNA from seeds containing large amounts of oils, proteins, and polyphenols, which have inhibitory effects on high-purity RNA isolation. Our method enables highly purified RNA to be obtained from seeds without the use of phenol, chloroform, or additional processes for RNA purification. This method is applicable to Arabidopsis, rapeseed, and soybean seeds. Our method will be useful for monitoring the expression patterns of low level transcripts in developing and mature seeds.

  8. Principles of gene microarray data analysis.

    PubMed

    Mocellin, Simone; Rossi, Carlo Riccardo

    2007-01-01

    The development of several gene expression profiling methods, such as comparative genomic hybridization (CGH), differential display, serial analysis of gene expression (SAGE), and gene microarray, together with the sequencing of the human genome, has provided an opportunity to monitor and investigate the complex cascade of molecular events leading to tumor development and progression. The availability of such large amounts of information has shifted the attention of scientists towards a nonreductionist approach to biological phenomena. High throughput technologies can be used to follow changing patterns of gene expression over time. Among them, gene microarray has become prominent because it is easier to use, does not require large-scale DNA sequencing, and allows for the parallel quantification of thousands of genes from multiple samples. Gene microarray technology is rapidly spreading worldwide and has the potential to drastically change the therapeutic approach to patients affected with tumor. Therefore, it is of paramount importance for both researchers and clinicians to know the principles underlying the analysis of the huge amount of data generated with microarray technology.

  9. Faster sequence homology searches by clustering subsequences.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2015-04-15

    Sequence homology searches are used in various fields. New sequencing technologies produce huge amounts of sequence data, which continuously increase the size of sequence databases. As a result, homology searches require large amounts of computational time, especially for metagenomic analysis. We developed a fast homology search method based on database subsequence clustering, and implemented it as GHOSTZ. This method clusters similar subsequences from a database to perform an efficient seed search and ungapped extension by reducing alignment candidates based on triangle inequality. The database subsequence clustering technique achieved an ∼2-fold increase in speed without a large decrease in search sensitivity. When measured with metagenomic data, GHOSTZ is ∼2.2-2.8 times faster than RAPSearch and ∼185-261 times faster than BLASTX. The source code is freely available for download at http://www.bi.cs.titech.ac.jp/ghostz/. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
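
    The clustering-plus-triangle-inequality idea can be shown at toy scale; this sketch is not the GHOSTZ code (which works on protein seeds and alignment scores) but illustrates how a cached member-to-representative distance lets many candidates be skipped without ever computing their distance to the query.

      # Toy sketch: prune candidates using d(q, m) >= |d(q, r) - d(m, r)| (triangle inequality).
      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      subseqs = ["ACGTAC", "ACGTAA", "TTGTAC", "TTGCAC", "GGGCCC"]
      reps = ["ACGTAC", "TTGTAC", "GGGCCC"]                 # toy cluster representatives

      clusters = {r: [] for r in reps}
      for s in subseqs:                                     # assign to nearest representative,
          r = min(reps, key=lambda rep: hamming(s, rep))    # caching the distance to it
          clusters[r].append((s, hamming(s, r)))

      def search(query, threshold=2):
          hits = []
          for r, members in clusters.items():
              d_qr = hamming(query, r)
              for m, d_mr in members:
                  if abs(d_qr - d_mr) > threshold:          # lower bound already too large:
                      continue                              # skip without computing d(q, m)
                  if hamming(query, m) <= threshold:
                      hits.append(m)
          return hits

      print(search("ACGTAT"))   # ['ACGTAC', 'ACGTAA']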

  10. Variable Stars in the Field of V729 Aql

    NASA Astrophysics Data System (ADS)

    Cagaš, P.

    2017-04-01

    Wide-field instruments can be used to acquire light curves of tens or even hundreds of variable stars per night, which increases the probability of new discoveries of interesting variable stars and generally increases the efficiency of observations. At the same time, wide-field instruments produce a large amount of data, which must be processed using advanced software. The traditional approach, typically used by amateur astronomers, requires an unacceptable amount of time to process each data set. New functionality built into the SIPS software package can shorten the time needed to obtain light curves by several orders of magnitude. Also, the newly introduced SILICUPS software is intended for post-processing of stored light curves. It can be used to visualize observations from many nights, to find variable star periods, to evaluate types of variability, etc. This work provides an overview of the tools used to process data from the large field of view around the variable star V729 Aql and demonstrates the results.

  11. Mining algorithm for association rules in big data based on Hadoop

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Wang, Xiaojing; Zhang, Lijun; Qiao, Liying

    2018-04-01

    In order to solve the problem that traditional association rule mining algorithms can no longer meet the mining needs of large amounts of data in terms of efficiency and scalability, we take FP-Growth as an example and parallelize the algorithm based on the Hadoop framework and the MapReduce model. On this basis, the algorithm is improved using a transaction-reduction method to further enhance its mining efficiency. Experiments on a Hadoop cluster cover verification of the parallel mining results, efficiency comparisons between the serial and parallel versions, and the relationships between mining time and node count and between mining time and data volume. The experiments show that the parallelized FP-Growth algorithm is able to accurately mine frequent item sets, with better performance and scalability. It can better meet the requirements of big data mining and efficiently mine frequent item sets and association rules from large datasets.
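
    The two ideas in the abstract, MapReduce-style parallel counting and transaction reduction, can be sketched in plain Python standing in for the Hadoop jobs; the transactions and support threshold here are invented.

      # Sketch: parallel-style support counting, then transaction reduction before FP-Growth.
      from collections import Counter
      from functools import reduce

      transactions = [["a", "b", "c"], ["a", "c"], ["b", "d"], ["a", "c", "d"], ["e"]]
      min_support = 2

      def map_counts(chunk):                     # "mapper": count items in one data split
          return Counter(item for txn in chunk for item in txn)

      def reduce_counts(c1, c2):                 # "reducer": merge partial counts
          return c1 + c2

      chunks = [transactions[:3], transactions[3:]]
      support = reduce(reduce_counts, map(map_counts, chunks))
      frequent = {item for item, n in support.items() if n >= min_support}

      reduced = [[i for i in txn if i in frequent] for txn in transactions]
      reduced = [txn for txn in reduced if txn]  # drop transactions emptied by the reduction
      print(frequent, reduced)                   # FP-trees are then built from `reduced` only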

  12. Stratospheric Aerosols for Solar Radiation Management

    NASA Astrophysics Data System (ADS)

    Kravitz, Ben

    SRM in the context of this entry involves placing a large amount of aerosols in the stratosphere to reduce the amount of solar radiation reaching the surface, thereby cooling the surface and counteracting some of the warming from anthropogenic greenhouse gases. The way this is accomplished depends on the specific aerosol used, but the basic mechanism involves backscattering and absorbing certain amounts of solar radiation aloft. Since warming from greenhouse gases is due to longwave (thermal) emission, compensating for this warming by reduction of shortwave (solar) energy is inherently imperfect, meaning SRM will have climate effects that are different from the effects of climate change. This will likely manifest in the form of regional inequalities, in that, similarly to climate change, some regions will benefit from SRM, while some will be adversely affected, viewed both in the context of present climate and a climate with high CO2 concentrations. These effects are highly dependent upon the means of SRM, including the type of aerosol to be used, the particle size and other microphysical concerns, and the methods by which the aerosol is placed in the stratosphere. SRM has never been performed, nor has deployment been tested, so the research up to this point has serious gaps. The amount of aerosols required is large enough that SRM would require a major engineering endeavor, although SRM is potentially cheap enough that it could be conducted unilaterally. Methods of governance must be in place before deployment is attempted, should deployment even be desired. Research in public policy, ethics, and economics, as well as many other disciplines, will be essential to the decision-making process. SRM is only a palliative treatment for climate change, and it is best viewed as part of a portfolio of responses, including mitigation, adaptation, and possibly CDR. At most, SRM is insurance against dangerous consequences that are directly due to increased surface air temperatures.

  13. Smart nanogels at the air/water interface: structural studies by neutron reflectivity

    NASA Astrophysics Data System (ADS)

    Zielińska, Katarzyna; Sun, Huihui; Campbell, Richard A.; Zarbakhsh, Ali; Resmini, Marina

    2016-02-01

    The development of effective transdermal drug delivery systems based on nanosized polymers requires a better understanding of the behaviour of such nanomaterials at interfaces. N-Isopropylacrylamide-based nanogels synthesized with different percentages of N,N'-methylenebisacrylamide as cross-linker, ranging from 10 to 30%, were characterized at physiological temperature at the air/water interface, using neutron reflectivity (NR), with isotopic contrast variation, and surface tension measurements; this allowed us to resolve the adsorbed amount and the volume fraction of nanogels at the interface. A large conformational change for the nanogels results in strong deformations at the interface. As the percentage of cross-linker incorporated in the nanogels becomes higher, more rigid matrices are obtained, although less deformed, and the amount of adsorbed nanogels is increased. The data provide the first experimental evidence of structural changes of nanogels as a function of the degree of cross-linking at the air/water interface. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07538f

  14. Integrated Bottom-Up and Top-Down Liquid Chromatography-Mass Spectrometry for Characterization of Recombinant Human Growth Hormone Degradation Products.

    PubMed

    Wang, Yu Annie; Wu, Di; Auclair, Jared R; Salisbury, Joseph P; Sarin, Richa; Tang, Yang; Mozdzierz, Nicholas J; Shah, Kartik; Zhang, Anna Fan; Wu, Shiaw-Lin; Agar, Jeffery N; Love, J Christopher; Love, Kerry R; Hancock, William S

    2017-12-05

    With the advent of biosimilars to the U.S. market, it is important to have better analytical tools to ensure product quality from batch to batch. In addition, with the recent popularity of using a continuous process for the production of biopharmaceuticals, the traditional bottom-up method alone is no longer sufficient for product characterization and quality analysis. The bottom-up method requires large amounts of material for analysis and is labor-intensive and time-consuming. Additionally, in this analysis, digestion of the protein with enzymes such as trypsin could induce artifacts and modifications which would increase the complexity of the analysis. On the other hand, a top-down method requires a minimum amount of sample and allows for analysis of the intact protein mass and sequence generated from fragmentation within the instrument. However, fragmentation usually occurs at the N-terminal and C-terminal ends of the protein with less internal fragmentation. Herein, we combine the use of the complementary top-down and bottom-up techniques for the characterization of human growth hormone degradation products. Notably, our approach required small amounts of sample, which is a requirement due to the sample constraints of small-scale manufacturing. Using this approach, we were able to characterize various protein variants, including post-translational modifications such as oxidation and deamidation, residual leader sequence, and proteolytic cleavage. Thus, we were able to highlight the complementarity of top-down and bottom-up approaches, which achieved the characterization of a wide range of product variants in samples of human growth hormone secreted from Pichia pastoris.

  15. Deuterium Retention and Physical Sputtering of Low Activation Ferritic Steel

    NASA Astrophysics Data System (ADS)

    Hino, T.; Yamaguchi, K.; Yamauchi, Y.; Hirohata, Y.; Tsuzuki, K.; Kusama, Y.

    2005-04-01

    Low activation materials have to be developed for fusion demonstration reactors. Ferritic steel, vanadium alloy and SiC/SiC composite are candidate materials for the first wall, vacuum vessel and blanket components, respectively. Although changes of mechanical-thermal properties owing to neutron irradiation have been investigated so far, there are little data on plasma-material interactions, such as fuel hydrogen retention and erosion. In the present study, deuterium retention and physical sputtering of the low activation ferritic steel F82H were investigated using a deuterium ion irradiation apparatus. After a ferritic steel sample was irradiated by 1.7 keV D+ ions, the weight loss was measured to obtain the physical sputtering yield. The sputtering yield was 0.04, comparable to that of stainless steel. In order to obtain the retained amount of deuterium, thermal desorption spectroscopy (TDS) was applied to the irradiated sample. The retained deuterium desorbed at temperatures ranging from 450 K to 700 K, in the forms of DHO, D2, D2O and hydrocarbons. Hence, the retained deuterium can be reduced by baking at a relatively low temperature. The fluence dependence of the retained amount of deuterium was measured by changing the ion fluence. In the ferritic steel without mechanical polish, the retained amount was large even when the fluence was low. In such a case, a large amount of deuterium was trapped in the surface oxide layer containing O and C. When the fluence was large, the thickness of the surface oxide layer was reduced by the ion sputtering, and then the retained amount in the oxide layer decreased. In the case of a high fluence, the retained amount of deuterium became comparable to that of ferritic steel with mechanical polish or SS 316L, and one order of magnitude smaller than that of graphite. When ferritic steel is used, the surface oxide layer must be removed to reduce fuel hydrogen retention. A ferritic steel sample was also exposed to the environment of the JFT-2M tokamak at JAERI, after which the deuterium retention was examined. The result was roughly the same as in the deuterium ion irradiation experiment.

  16. Low Data Drug Discovery with One-Shot Learning

    PubMed Central

    2017-01-01

    Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. Model. 2015, 55, 263–274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small-molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep-learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016). PMID:28470045
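
    The core one-shot idea, predicting from a handful of labeled support compounds via a learned similarity, can be reduced to a toy sketch; the features, similarity measure, and softmax weighting below are simple stand-ins, not the paper's iterative refinement LSTM or graph convolutions.

      # Toy one-shot sketch: attention-weighted label transfer from a tiny support set.
      import numpy as np

      rng = np.random.default_rng(0)
      support_x = rng.standard_normal((4, 16))       # hypothetical compound feature vectors
      support_y = np.array([1, 1, 0, 0])             # active / inactive labels

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def predict(query):
          sims = np.array([cosine(query, x) for x in support_x])
          logits = 10.0 * sims                       # temperature sharpens the attention
          weights = np.exp(logits - logits.max())
          weights /= weights.sum()
          return float(weights @ support_y)          # probability-like activity score

      query = support_x[0] + 0.05 * rng.standard_normal(16)   # near an active compound
      print(predict(query))                          # close to 1.0 for this query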

  17. A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.

    PubMed

    O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A

    2015-02-01

    Research in understanding biofilm formation depends on accurate and representative measurements of the steric forces related to the polymer brush on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
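
    The following sketch shows the kind of automated fitting such a program performs, using one commonly quoted exponential approximation of the AdG steric force between an AFM probe and a polymer brush, F(D) ~ 50 kB T a L0 G^(3/2) exp(-2 pi D / L0). This is not the authors' MATLAB code; the functional form, the synthetic force curve, and every numeric value below are illustrative assumptions.

    ```python
    # Minimal sketch of automated force-curve fitting to one commonly used
    # exponential approximation of the Alexander-de Gennes (AdG) steric model,
    #   F(D) ~ 50 * kB*T * a * L0 * Gamma**1.5 * exp(-2*pi*D / L0),
    # with a the probe radius, Gamma the grafting density and L0 the brush
    # length. Not the authors' program; all values are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    KB_T = 4.11e-21          # J, thermal energy near 298 K
    A_PROBE = 20e-9          # assumed probe radius, m

    def adg_force_nN(D_nm, L0_nm, amp_nN):
        """Exponential AdG approximation with a lumped amplitude
        amp = 50 kB T a L0 Gamma^1.5 (in nN); D and L0 in nm."""
        return amp_nN * np.exp(-2.0 * np.pi * D_nm / L0_nm)

    # Synthetic "approach curve" standing in for cropped AFM data.
    rng = np.random.default_rng(1)
    D_nm = np.linspace(5, 150, 200)
    F_nN = adg_force_nN(D_nm, 60.0, 0.25) * (1 + 0.05 * rng.normal(size=D_nm.size))

    (L0_fit, amp_fit), _ = curve_fit(adg_force_nN, D_nm, F_nN, p0=(40.0, 0.1))
    gamma = (amp_fit * 1e-9 / (50 * KB_T * A_PROBE * L0_fit * 1e-9)) ** (2.0 / 3.0)
    print(f"L0 ~ {L0_fit:.1f} nm, grafting density ~ {gamma:.2e} m^-2")
    ```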

  18. Dynamic Modeling and Evaluation of Recurring Infrastructure Maintenance Budget Determination Methods

    DTIC Science & Technology

    2005-03-01

    represent the annual M&R costs for the entire facility (Melvin, 1992). This method requires immense amounts of detailed data for each facility to be...and where facility and infrastructure maintenance must occur. Uzarski et al (1995) discuss that the data gathered produces a candidate list that can... facilities or an infrastructure plant. Government agencies like the DoD, major universities, and large corporations are the major players. Data

  19. Fire as a tool in restoring and maintaining beetle diversity in boreal forests: preliminary results from a large-scale field experiment

    Treesearch

    E. Hyvarinen; H. Lappalainen; P. Martikainen; J. Kouki

    2003-01-01

    During the 1900s, the amount of dead and decaying wood has declined drastically in boreal forests in Finland because of intensive forest management. As a result, species requiring such resources have also declined or have even gone extinct. Recently it has been observed that in addition to old-growth forests, natural, early successional phases are also important for...

  20. East Europe Report: Economic and Industrial Affairs, No. 2416

    DTIC Science & Technology

    1983-06-28

    that 5 percent of the interest on export credits is refunded and that credits are extended to entrepreneurs requiring a greater amount of capital. To pay...foreign customers. Thus the greater need for capital on the part of major entrepreneurs may in large part be financed through preferred credits-both...which reduce the effectiveness of management. In 1975, despite some increase in employment, the growth of non-agricultural generated income was equal to

  1. Rational calculation accuracy in acousto-optical matrix-vector processor

    NASA Astrophysics Data System (ADS)

    Oparin, V. V.; Tigin, Dmitry V.

    1994-01-01

    The high speed of parallel computation in a comparatively small-size processor, together with acceptable power consumption, makes the use of an acousto-optic matrix-vector multiplier (AOMVM) attractive for processing large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and a reduction of the demands on its components.

  2. Workshop on NASA workstation technology

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.

    1990-01-01

    RIACS hosted a workshop designed to foster communication among people within NASA working on workstation-related technology, to share technology, and to learn about new developments and future directions in the larger university and industrial workstation communities. Herein, the workshop is documented along with its conclusions. It was learned that the represented NASA centers share a large number of common requirements but vary widely in how modern their in-use technology is.

  3. Weymouth Fore River, Weymouth, Braintree, Massachusetts, Small Navigation Project. Detailed Project Report and Environmental Assessment.

    DTIC Science & Technology

    1981-02-01

    and all considered sites were beyond normal hydraulic pumping system capability. Marsh creation requires a large amount of land area which is...boating. Mechanical dredging is planned (as opposed to hydraulic ) because open water disposal of dredged sediments is the preferred alternative...river the above process would be complicated in several ways. First, because the material would have to be hydraulically disposed of at the temporary

  4. America’s Achilles Heel: Defense Against High-altitude Electromagnetic Pulse-policy vs. Practice

    DTIC Science & Technology

    2014-12-12

    Directives SCADA Supervisory Control and Data Acquisition Systems SHIELD Act Secure High-voltage Infrastructure for Electricity from Lethal Damage Act...take place, it is important to understand the effects of the components of EMP from a high-altitude nuclear detonation. The requirements for shielding ...Mass Ejection (CME). A massive, bubble-shaped burst of plasma expanding outward from the Sun’s corona, in which large amounts of superheated

  5. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    NASA Astrophysics Data System (ADS)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing along with the increased availability of cheap storage have led to the necessity of elaborating and transforming large volumes of open-source data, all in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is carried out along the chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of limited resources. To meet these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.

  6. Testing and validation of computerized decision support systems.

    PubMed

    Sailors, R M; East, T D; Wallace, C J; Carlson, D A; Franklin, M A; Heermann, L K; Kinder, A T; Bradshaw, R L; Randolph, A G; Morris, A H

    1996-01-01

    Systematic, thorough testing of decision support systems (DSSs) prior to release to general users is a critical aspect of high quality software design. Omission of this step may lead to the dangerous, and potentially fatal, condition of relying on a system with outputs of uncertain quality. Thorough testing requires a great deal of effort and is a difficult job because tools necessary to facilitate testing are not well developed. Testing is a job ill-suited to humans because it requires tireless attention to a large number of details. For these reasons, the majority of DSSs available are probably not well tested prior to release. We have successfully implemented a software design and testing plan which has helped us meet our goal of continuously improving the quality of our DSS software prior to release. While it requires a large amount of effort, we feel that documenting and standardizing our testing methods are important steps toward meeting recognized national and international quality standards. Our testing methodology includes both functional and structural testing and requires input from all levels of development. Our system does not focus solely on meeting design requirements but also addresses the robustness of the system and the completeness of testing.

  7. An intelligent load shedding scheme using neural networks and neuro-fuzzy.

    PubMed

    Haidar, Ahmed M A; Mohamed, Azah; Al-Dabbagh, Majid; Hussain, Aini; Masoum, Mohammad

    2009-12-01

    Load shedding is one of the essential requirements for maintaining the security of modern power systems, particularly in competitive energy markets. This paper proposes an intelligent scheme for fast and accurate load shedding that uses neural networks to predict the possible loss of load at an early stage and neuro-fuzzy logic to determine the amount of load to be shed in order to avoid a cascading outage. A large scale electrical power system has been considered to validate the performance of the proposed technique in determining the amount of load to be shed. The proposed techniques can provide tools for improving the reliability and continuity of the power supply. This was confirmed by the results obtained in this research, a sample of which is given in this paper.

  8. Development of thermal energy storage units for spacecraft cryogenic coolers

    NASA Technical Reports Server (NTRS)

    Richter, R.; Mahefkey, E. T.

    1980-01-01

    Thermal energy storage units were developed for storing the thermal energy required to operate Vuilleumier cryogenic space coolers. In the course of the development work, the thermal characteristics of the thermal energy storage material were investigated. By three distinctly different methods it was established that ternary salts did not release fusion energy as determined by ideality at the melting point of the eutectic salt. Phase change energy was released over a relatively wide temperature range with a large change in volume. This strongly affects the amount of thermal energy that is available to the Vuilleumier cryogenic cooler in its operating temperature range and the amount of thermal energy that can be stored and released during a single storage cycle.

  9. Effects of the space environment on the health and safety of space workers

    NASA Technical Reports Server (NTRS)

    Hull, W. E.

    1980-01-01

    Large numbers of individuals are required to work in space to assemble and operate a Solar Power Satellite. The physiological and behavioral consequences for large groups of men and women who perform complex tasks in the vehicular or extravehicular environments over long periods of orbital stay time were considered. The most disturbing consequences of exposure to the null gravity environment were found to relate to: (1) a generalized cardiovascular deconditioning along with loss of a significant amount of body fluid volume; (2) loss of bone minerals and muscle mass; and (3) degraded performance of the neural mechanisms which govern equilibrium and spatial orientation.

  10. Advanced spacecraft: What will they look like and why

    NASA Technical Reports Server (NTRS)

    Price, Humphrey W.

    1990-01-01

    The next century of spaceflight will witness an expansion in the physical scale of spacecraft, from the extreme of the microspacecraft to the very large megaspacecraft. This will respectively spawn advances in highly integrated and miniaturized components, and advances in lightweight structures, space fabrication, and exotic control systems. Challenges are also presented by the advent of advanced propulsion systems, many of which require controlling and directing hot plasma, dissipating large amounts of waste heat, and handling very high radiation sources. Vehicle configuration studies for a number of these types of advanced spacecraft were performed, and some of them are presented along with the rationale for their physical layouts.

  11. Early Program Development

    NASA Image and Video Library

    1996-06-20

    Engineers at one of MSFC's vacuum chambers begin testing a microthruster model. The purpose of these tests is to collect sufficient data to enable NASA to develop microthrusters that will move the Space Shuttle, a future space station, or any other space-related vehicle with the least amount of expended energy. When something is sent into outer space, the forces that try to pull it back to Earth (gravity) are very small, so only a very small force is required to move very large objects. In space, a force equal to the weight of a paperclip can move an object as large as a car. Microthrusters are used to produce these small forces.

  12. Effects of the space environment on the health and safety of space workers

    NASA Astrophysics Data System (ADS)

    Hull, W. E.

    1980-07-01

    Large numbers of individuals are required to work in space to assemble and operate a Solar Power Satellite. The physiological and behavioral consequences for large groups of men and women who perform complex tasks in the vehicular or extravehicular environments over long periods of orbital stay time were considered. The most disturbing consequences of exposure to the null gravity environment were found to relate to: (1) a generalized cardiovascular deconditioning along with loss of a significant amount of body fluid volume; (2) loss of bone minerals and muscle mass; and (3) degraded performance of the neural mechanisms which govern equilibrium and spatial orientation.

  13. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and the corresponding matrices can become large for practical engineering problems, which makes the algorithm impractical on standard workstations for large-order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
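
    To make the "single LMI handed to a convex solver" step concrete, here is a minimal feasibility sketch in CVXPY. It poses a plain Lyapunov-stability LMI rather than the paper's full-information synthesis LMI; the random stable matrix and the solver choice are assumptions, and the final print hints at why the number of free entries grows quadratically with the state dimension.

    ```python
    # Minimal sketch of posing one linear matrix inequality (LMI) and solving
    # it with an off-the-shelf convex optimizer (CVXPY). This is a plain
    # Lyapunov-stability LMI, A^T P + P A < 0 with P > 0, not the paper's
    # full-information synthesis LMI; A is a random stable matrix chosen only
    # for illustration.
    import numpy as np
    import cvxpy as cp

    n = 20                                # state dimension; grows fast for flexible structures
    rng = np.random.default_rng(0)
    A = rng.normal(size=(n, n))
    A -= (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)  # shift to make A stable

    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)

    print("status:", prob.status)
    print("P has", n * (n + 1) // 2, "free entries")  # why large n gets expensive
    ```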

  14. Destruction of Navy Hazardous Wastes by Supercritical Water Oxidation

    DTIC Science & Technology

    1994-08-01

    cleaning and derusting (nitrite and citric acid solutions), electroplating (acids and metal bearing solutions), electronics and refrigeration... acid forming chemical species or that contain a large amount of dissolved solids present a challenge to current SCWO technology. Approved for public...Waste streams that contain a large amount of mineral-acid forming chemical species or that contain a large amount of dissolved solids present a challenge

  15. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, namely B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced; here the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large amount of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
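
    The lookup-table idea mentioned above can be illustrated with a short sketch: the four cubic B-spline basis weights depend only on the fractional offset, so they can be tabulated once and indexed inside the interpolation loop instead of being re-evaluated. This is neither the authors' GPU kernel nor their LSD metric; the LUT size and the toy coefficients are assumptions.

    ```python
    # Minimal sketch of the lookup-table (LUT) idea for B-spline registration:
    # the four cubic B-spline basis weights depend only on the fractional
    # offset u in [0, 1), so they are precomputed on a fine grid and indexed
    # during interpolation. Illustrative only; not the authors' GPU kernel.
    import numpy as np

    def cubic_bspline_weights(u: np.ndarray) -> np.ndarray:
        """Return the 4 cubic B-spline basis values B0..B3 at fractional offset u."""
        return np.stack([(1 - u) ** 3 / 6,
                         (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
                         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
                         u ** 3 / 6], axis=-1)

    LUT_SIZE = 1024
    LUT = cubic_bspline_weights(np.arange(LUT_SIZE) / LUT_SIZE)   # (1024, 4) table

    def interp_1d(coeffs: np.ndarray, x: float) -> float:
        """1-D cubic B-spline interpolation of control-point coefficients at x."""
        i = int(np.floor(x))
        w = LUT[int((x - i) * LUT_SIZE)]              # table lookup, no cubic evaluation
        idx = np.clip(np.arange(i - 1, i + 3), 0, len(coeffs) - 1)
        return float(np.dot(w, coeffs[idx]))

    coeffs = np.sin(np.linspace(0, np.pi, 16))        # toy deformation coefficients
    print(round(interp_1d(coeffs, 7.25), 4))
    ```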

  16. Geo-reCAPTCHA: Crowdsourcing large amounts of geographic information from earth observation data

    NASA Astrophysics Data System (ADS)

    Hillen, Florian; Höfle, Bernhard

    2015-08-01

    The reCAPTCHA concept provides a large amount of valuable information for various applications. First, it provides security, e.g., for a form on a website, by means of a test that only a human could solve. Second, the effort of the user for this test is used to generate additional information, e.g., digitization of books or identification of house numbers. In this work, we present a concept for adapting the reCAPTCHA idea to create user-generated geographic information from earth observation data, and the requirements during conception and implementation are described in detail. Furthermore, the essential parts of a Geo-reCAPTCHA system are described and afterwards transferred to a prototype implementation. An empirical user study is conducted to investigate the Geo-reCAPTCHA approach, assessing the time taken and the quality of the resulting geographic information. Our results show that a Geo-reCAPTCHA on building digitization can be solved by the users of our study in a short amount of time (19.2 s on average) with an overall average digitization accuracy of 82.2%. In conclusion, Geo-reCAPTCHA has the potential to be a reasonable alternative to the typical reCAPTCHA, and to become a new data-rich channel of crowdsourced geographic information.

  17. The Threat of Uncertainty: Why Using Traditional Approaches for Evaluating Spacecraft Reliability are Insufficient for Future Human Mars Missions

    NASA Technical Reports Server (NTRS)

    Stromgren, Chel; Goodliff, Kandyce; Cirillo, William; Owens, Andrew

    2016-01-01

    Through the Evolvable Mars Campaign (EMC) study, the National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). A key aspect of these missions is the strategy that is employed to maintain and repair the spacecraft systems, ensuring that they continue to function and support the crew. Long duration missions beyond LEO present unique and severe maintainability challenges due to a variety of factors, including: limited to no opportunities for resupply, the distance from Earth, mass and volume constraints of spacecraft, high sensitivity of transportation element designs to variation in mass, the lack of abort opportunities to Earth, limited hardware heritage information, and the operation of human-rated systems in a radiation environment with little to no experience. The current approach to maintainability, as implemented on ISS, which includes a large number of spares pre-positioned on ISS, a larger supply sitting on Earth waiting to be flown to ISS, and on-demand delivery of logistics from Earth, is not feasible for future deep space human missions. For missions beyond LEO, significant modifications to the maintainability approach will be required. Through the EMC evaluations, several key findings related to the reliability and safety of the Mars spacecraft have been made. The nature of random and induced failures presents significant issues for deep space missions. Because spare parts cannot be flown as needed for Mars missions, all required spares must be flown with the mission or pre-positioned. These spares must cover all anticipated failure modes and provide a level of overall reliability and safety that is satisfactory for human missions. This will require a large amount of mass and volume to be dedicated to the storage and transport of spares for the mission. Further, there is, and will continue to be, a significant amount of uncertainty regarding failure rates for spacecraft components. This uncertainty makes it much more difficult to anticipate failures and will potentially require an even larger amount of spares to provide an acceptable level of safety. Ultimately, the approach to maintenance and repair applied to ISS, focusing on the supply of spare parts, may not be tenable for deep space missions. Other approaches, such as commonality of components, simplification of systems, and in-situ manufacturing, will be required.

  18. Are sunscreens luxury products?

    PubMed

    Mahé, Emmanuel; Beauchet, Alain; de Maleissye, Marie-Florence; Saiag, Philippe

    2011-09-01

    The incidence of skin cancers is rapidly increasing in Western countries. One of the main sun-protection measures advocated is application of sunscreen. Some studies report a failure to comply with sunscreen application guidance. One explanation is their cost. To evaluate the true cost of sunscreen in two situations: a 4-member family spending 1 week at the beach and a transplant patient respecting all the sun protection recommendations. We performed an analysis of prices of sunscreens sold via Internet drugstores in Europe and North America. Standard sunscreen application recommendations were followed. We tested the recommended amount of sunscreen to be applied (ie, 2 mg/cm(2)). Six hundred seven sunscreens from 17 drugstores in 7 countries were evaluated. Median price of sunscreen was $1.7 US per 10 grams. The price decreased with the size of the bottle. The median price for a family varied from $178.2 per week to $238.4 per week. The price decreased by 33% if the family wore UV-protective T-shirts and by 41% if large-volume bottles were used. The median price for a transplant patient varied from $245.3 per year to $292.3 per year. Anti-UVA activity and topical properties were not evaluated. We tested the recommended amount (2 mg/cm(2)) rather than the amount actually used (1 mg/cm(2)). Under acute sun exposure conditions (a week at the beach), the cost of sun protection appears acceptable if sun protective clothing is worn and large-format bottles and low-cost sunscreens are used. Conversely, in a sun-sensitive population requiring year-round protection, the annual budget is relatively high and patients may require financial assistance to be compliant with sun protection guidelines. Copyright © 2010 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  19. Advanced Optical Burst Switched Network Concepts

    NASA Astrophysics Data System (ADS)

    Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian

    In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (petabytes per year). These users require a network infrastructure that can support the processing and analysis of large datasets through globally distributed computing resources [230]. However, providing bandwidth services at wavelength granularity is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (e.g., high resolution home video editing, real-time rendering, high definition interactive TV). As a specific example, in e-health services and in particular mammography applications, stringent network requirements arise from the size and quantity of images produced by remote mammography. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. From the above it is clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, as well as storage, computation, and visualization resources, potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
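
    The remote-mammography figure quoted above translates into a sustained rate of roughly 0.32 Gbit/s, as the short calculation below shows; all numbers come directly from the text.

    ```python
    # Back-of-the-envelope check of the remote-mammography requirement quoted
    # above: 1.2 GB delivered every 30 s is roughly 40 MB/s, i.e. about
    # 0.32 Gbit/s sustained -- already a sizeable fraction of a wavelength,
    # which is the argument for sub-wavelength (burst) granularity.
    volume_bytes = 1.2e9          # 1.2 GB per screening batch
    window_s = 30.0               # delivery window
    rate_bps = volume_bytes * 8 / window_s
    print(f"{rate_bps / 1e6:.0f} Mbit/s  (~{rate_bps / 1e9:.2f} Gbit/s)")
    ```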

  20. Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project

    USGS Publications Warehouse

    Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.

    2011-01-01

    Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. © 2011 Published by Elsevier Ltd.

  1. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2011-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphics processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  2. Results of the harmonics measurement program at the John F. Long photovoltaic house

    NASA Astrophysics Data System (ADS)

    Campen, G. L.

    1982-03-01

    Photovoltaic (PV) systems used in single-family dwellings require an inverter to act as an interface between the direct-current (dc) power output of the PV unit and the alternating-current (ac) power needed by house loads. A type of inverter known as line commutated injects harmonic currents on the ac side and requires large amounts of reactive power. Large numbers of such PV installations could lead to unacceptable levels of harmonic voltages on the utility system, and the need to increase the utility's delivery of reactive power could result in significant cost increases. The harmonics and power-factor effects are examined for a single PV installation using a line-commutated inverter. The magnitude and phase of various currents and voltages from the fundamental to the 13th harmonic were recorded both with and without the operation of the PV system.

  3. Multiplexed analysis of protein-ligand interactions by fluorescence anisotropy in a microfluidic platform.

    PubMed

    Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F

    2014-10-07

    Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.

  4. RAID Disk Arrays for High Bandwidth Applications

    NASA Technical Reports Server (NTRS)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications are often 'real time', in which access to the storage device must take place on the schedule of the data source, not the storage. A good example is a satellite downlink - the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology which is ideally suited to these types of applications is redundant arrays of independent discs (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.
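
    A minimal sketch of the RAID-3 principle referred to above: data is striped at byte granularity across data drives, a dedicated parity drive stores the XOR of each stripe, and a failed drive can be rebuilt from the survivors so reads continue at full rate. Purely illustrative; no real controller works byte-by-byte in Python.

    ```python
    # Minimal illustration of RAID-3: fine-grained (byte) striping across data
    # drives plus a dedicated XOR parity drive, allowing reconstruction of any
    # single failed drive from the survivors.
    from functools import reduce

    def stripe(data: bytes, n_data_drives: int):
        """Split data byte-wise across n data drives and compute a parity drive."""
        drives = [bytearray() for _ in range(n_data_drives)]
        for i, b in enumerate(data):
            drives[i % n_data_drives].append(b)
        width = max(len(d) for d in drives)
        for d in drives:                      # pad short drives so stripes align
            d.extend(b"\x00" * (width - len(d)))
        parity = bytearray(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), drives))
        return drives, parity

    def reconstruct(drives, parity, failed: int):
        """Rebuild the failed data drive from the survivors plus parity."""
        survivors = [d for i, d in enumerate(drives) if i != failed] + [parity]
        return bytearray(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors))

    data = b"telemetry downlink frame 0001"
    drives, parity = stripe(data, 4)
    rebuilt = reconstruct(drives, parity, failed=2)
    print("drive 2 rebuilt OK:", rebuilt == drives[2])
    ```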

  5. Ontology-based tools to expedite predictive model construction.

    PubMed

    Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey

    2014-01-01

    Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.

  6. Innovative Double Bypass Engine for Increased Performance

    NASA Astrophysics Data System (ADS)

    Manoharan, Sanjivan

    Engines continue to grow in size to meet the current thrust requirements of the civil aerospace industry. Large engines pose significant transportation problems and must be split in order to be shipped. Thus, large amounts of time have been spent researching methods to increase thrust capabilities while maintaining a reasonable engine size. Unfortunately, much of this research has focused on increasing the performance and efficiencies of individual components, while limited research has been done on innovative engine configurations. This thesis focuses on an innovative engine configuration, the High Double Bypass Engine, aimed at increasing fuel efficiency and thrust while maintaining a competitive fan diameter and engine length. The 1-D analysis was done in Excel and then compared to the results from the Numerical Propulsion System Simulation (NPSS) software, and the results were found to be within 4% error. Flow performance characteristics were also determined and validated against their criteria.

  7. Protein sequence comparison based on K-string dictionary.

    PubMed

    Yu, Chenglong; He, Rong L; Yau, Stephen S-T

    2013-10-25

    The current K-string-based protein sequence comparisons require large amounts of computer memory because the dimension of the protein vector representation grows exponentially with K. In this paper, we propose a novel concept, the "K-string dictionary", to solve this high-dimensional problem. It allows us to use a much lower dimensional K-string-based frequency or probability vector to represent a protein, and thus significantly reduce the computer memory requirements for their implementation. Furthermore, based on this new concept, we use Singular Value Decomposition to analyze real protein datasets, and the improved protein vector representation allows us to obtain accurate gene trees. © 2013.
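
    A minimal sketch of the K-string dictionary idea, under the assumption that the dictionary simply collects the K-mers observed in the data set: each protein becomes a frequency vector over that dictionary, and SVD compresses the resulting protein-by-K-mer matrix. Toy sequences only; this is not the authors' exact pipeline.

    ```python
    # Minimal sketch: represent each protein as a K-mer frequency vector over a
    # shared "K-string dictionary" (only K-mers actually observed), then
    # compress the protein-by-K-mer matrix with SVD. Illustrative toy data.
    import numpy as np
    from collections import Counter

    def kmer_counts(seq: str, k: int) -> Counter:
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    proteins = {"p1": "MKVLAAGLLK", "p2": "MKVLSAGLIK", "p3": "GGGASTPLKV"}
    k = 3
    counts = {name: kmer_counts(seq, k) for name, seq in proteins.items()}

    # The dictionary holds only K-strings seen in the data set, which is far
    # smaller than the full 20**k space for realistic K.
    dictionary = sorted(set().union(*counts.values()))
    X = np.array([[counts[name][s] for s in dictionary] for name in proteins],
                 dtype=float)
    X /= X.sum(axis=1, keepdims=True)           # frequency vectors

    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    reduced = U * S                              # low-dimensional protein coordinates
    print(f"dictionary size {len(dictionary)} vs 20**{k} = {20**k}")
    print(reduced.round(3))
    ```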

  8. Singlet oxygen detection in biological systems: Uses and limitations.

    PubMed

    Koh, Eugene; Fluhr, Robert

    2016-07-02

    The study of singlet oxygen in biological systems is challenging in many ways. Singlet oxygen is a relatively unstable, ephemeral molecule, and its high reactivity with many biomolecules makes it difficult to quantify accurately. Several methods have been developed to study this elusive molecule, but most studies thus far have focused on conditions that produce relatively large amounts of singlet oxygen. However, more sensitive methods are required as one begins to explore the levels of singlet oxygen involved in signaling and regulatory processes. Here we discuss the various methods used in the study of singlet oxygen, and outline their uses and limitations.

  9. Changes in electrical energy requirements to operate an ice cream freezer as a function of sweeteners and gums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.E.; Bakshi, A.S.; Gay, S.A.

    1985-01-01

    Changes in the electrical energy required to operate a continuous freezer were monitored for various ice cream formulae. The ice cream formulae consisted of nine different combinations of sucrose, 36 DE corn syrup, and 42 high fructose corn syrup, as well as two ratios of guar gum to locust bean gum. Within the same sweetening system, a mix high in locust bean gum tended to have a lower energy demand than a mix with large amounts of guar gum. This was especially pronounced in mixes with 50% 42 high fructose corn syrup and/or 50% 36 DE corn syrup solids.

  10. The impact of image storage organization on the effectiveness of PACS.

    PubMed

    Hindel, R

    1990-11-01

    A picture archiving and communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost effective but slow, while very fast systems, like frame buffers and parallel transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic to workstations. At the contact points with medical professionals, the responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements based on a clearly defined PACS concept.

  11. An Electrostatic Precipitator System for the Martian Environment

    NASA Technical Reports Server (NTRS)

    Calle, C. I.; Mackey, P. J.; Hogue, M. D.; Johansen, M. R.; Phillips, J. R., III; Clements, J. S.

    2012-01-01

    Human exploration missions to Mars will require the development of technologies for the utilization of the planet's own resources for the production of commodities. However, the Martian atmosphere contains large amounts of dust, and the extraction of commodities from this atmosphere requires prior removal of this dust. We report on our development of an electrostatic precipitator able to collect Martian simulated dust particles in atmospheric conditions approaching those of Mars. Extensive experiments with an initial prototype in a simulated Martian atmosphere showed efficiencies of 99%. The design of a second prototype with aerosolized Martian simulated dust in a flow-through configuration is described. Keywords: space applications, electrostatic precipitator, particle control, particle charging

  12. Rapid prototyping and AI programming environments applied to payload modeling

    NASA Technical Reports Server (NTRS)

    Carnahan, Richard S., Jr.; Mendler, Andrew P.

    1987-01-01

    This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both manned and unmanned space flight payload simulation and training. Significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model to accept future modification. Results of this effort suggest that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.

  13. VALIDATION FOR THE PERMANGANATE DIGESTION OF REILLEX HPQ ANION RESIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyser, E.

    2009-09-23

    The flowsheet for the digestion of Reillex™ HPQ was validated both under the traditional alkaline conditions and under strongly acidic conditions. Due to difficulty in performing a pH adjustment in the large tank where this flowsheet must be performed, the recommended digestion conditions were changed from pH 8-10 to 8 M HNO3. Thus, no pH adjustment of the solution is required prior to performing the permanganate addition and digestion, and the need to sample the digestion tank to confirm an appropriate pH range for digestion may be avoided. Neutralization of the acidic digestion solution will be performed after completion of the resin digestion cycle. The amount of permanganate required for this type of resin (Reillex™ HPQ) was increased from 1 kg/L resin to 4 kg/L resin to reduce the amount of residual resin solids to a minimal amount (<5%). The length of digestion time at 70 °C remains unchanged at 15 hours. These parameters are not optimized but are expected to be adequate for the conditions. The flowsheet generates a significant amount of fine manganese dioxide (MnO2) solids (1.71 kg/L resin) and involves the generation of a significant liquid volume due to the low solubility of permanganate. However, since only two batches of resin (40 L each) are expected to be digested, the total waste generated is limited.

  14. Data identification for improving gene network inference using computational algebra.

    PubMed

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

    Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.
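
    As a simplified linear-algebra view of the "monomial terms known" case, a data set identifies the coefficients of a fixed monomial support uniquely exactly when the matrix of monomial evaluations at the data points has full column rank. The sketch below checks this over the reals; it does not reproduce the paper's computational-algebra criteria (e.g., over finite fields), and the example points are arbitrary.

    ```python
    # Simplified linear-algebra illustration of model identifiability when the
    # monomial support is fixed: data points identify the coefficients uniquely
    # exactly when the evaluation matrix has full column rank. Over the reals
    # only; not the paper's computational-algebra criterion.
    import numpy as np

    def eval_matrix(points, exponents):
        """Rows: data points; columns: monomials x1^e1 * x2^e2 * ..."""
        return np.array([[np.prod(np.power(p, e)) for e in exponents]
                         for p in points], dtype=float)

    def identifies(points, exponents) -> bool:
        M = eval_matrix(points, exponents)
        return np.linalg.matrix_rank(M) == len(exponents)

    # Model support: 1, x1, x2, x1*x2 (exponent vectors over two variables).
    exponents = [(0, 0), (1, 0), (0, 1), (1, 1)]
    too_few = [(0, 0), (1, 0), (0, 1)]              # 3 points, 4 unknown coefficients
    enough = [(0, 0), (1, 0), (0, 1), (1, 1)]       # a fourth, well-placed point added
    print(identifies(too_few, exponents), identifies(enough, exponents))
    ```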

  15. A simple biosynthetic pathway for large product generation from small substrate amounts

    NASA Astrophysics Data System (ADS)

    Djordjevic, Marko; Djordjevic, Magdalena

    2012-10-01

    A recently emerging discipline of synthetic biology has the aim of constructing new biosynthetic pathways with useful biological functions. A major application of these pathways is generating a large amount of the desired product. However, toxicity due to the possible presence of toxic precursors is one of the main problems for such production. We consider here the problem of generating a large amount of product from a potentially toxic substrate. To address this, we propose a simple biosynthetic pathway, which can be induced in order to produce a large number of the product molecules, by keeping the substrate amount at low levels. Surprisingly, we show that the large product generation crucially depends on fast non-specific degradation of the substrate molecules. We derive an optimal induction strategy, which allows as much as three orders of magnitude increase in the product amount through biologically realistic parameter values. We point to a recently discovered bacterial immune system (CRISPR/Cas in E. coli) as a putative example of the pathway analysed here. We also argue that the scheme proposed here can be used not only as a stand-alone pathway, but also as a strategy to produce a large amount of the desired molecules with small perturbations of endogenous biosynthetic pathways.
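
    A minimal caricature of such an induced pathway is sketched below: substrate is synthesized at an inducible rate, lost to both non-specific degradation and processing, and each processing event releases several product molecules. The equations, rate constants, and any conclusions drawn from them are illustrative assumptions, not the authors' model.

    ```python
    # Toy ODE caricature of an induced substrate -> product pathway in which
    # the substrate is kept low by non-specific degradation while product
    # accumulates. Illustrative only; not the authors' model or parameters.
    from scipy.integrate import solve_ivp

    alpha = 5.0      # substrate synthesis rate after induction (molecules/min)
    lam = 2.0        # non-specific substrate degradation rate (1/min)
    k = 0.5          # processing (conversion) rate (1/min)
    burst = 10.0     # product molecules generated per processed substrate

    def rhs(t, y):
        S, P = y
        dS = alpha - (lam + k) * S          # substrate stays low when lam is large
        dP = burst * k * S                  # product keeps accumulating
        return [dS, dP]

    sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0])
    S_end, P_end = sol.y[:, -1]
    print(f"steady substrate ~ {S_end:.2f}, product after 60 min ~ {P_end:.0f}")
    ```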

  16. Ultrasound Picture Archiving And Communication Systems

    NASA Astrophysics Data System (ADS)

    Koestner, Ken; Hottinger, C. F.

    1982-01-01

    The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. The various ultrasonic imaging modalities differ considerably in data volume and speed requirements. Static imaging, for example B-scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage, and data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. Transmitting and saving this data at a real-time rate requires a transfer rate of 8.6 megabaud. A real-time archiving system would be complicated by the need for specialized hardware to interpolate between scan lines and perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
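
    The figures quoted above follow from simple arithmetic, reproduced here for convenience; only numbers already given in the abstract are used.

    ```python
    # Quick check of the quoted figures: a 512 x 512 image at 6 bits per point
    # over a 9600-baud serial line, versus 100 scan lines of 480 points (6 bits
    # deep) at 30 frames per second for real-time transfer.
    static_bits = 512 * 512 * 6
    print(f"static image: {static_bits / 8 / 1024:.0f} KiB of pixel data, "
          f"{static_bits / 9600 / 60:.1f} min at 9600 baud")      # ~2.7 min

    realtime_bps = 100 * 480 * 6 * 30
    print(f"real-time: {realtime_bps / 1e6:.2f} Mbit/s (~8.6 megabaud)")
    ```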

  17. Carbon Capture and Sequestration (CCS)

    DTIC Science & Technology

    2009-06-19

    tons of CO2 underground each year to help recover oil and gas resources (enhanced oil recovery , or EOR).1 Also, potentially large amounts of CO2 ... CO2 will be used for enhanced gas recovery at a nearby natural gas field. See http://www.vattenfall.com/www/co2_en/ co2_en/Gemeinsame_Inhalte...for enhanced oil recovery (EOR).18 Transporting CO2 in pipelines is similar to transporting petroleum products like natural gas and oil; it requires

  18. A Self Sustaining Solar-Bio-Nano Based Wastewater Treatment System for Forward Operating Bases

    DTIC Science & Technology

    2017-06-21

    fouling problem and requires a relatively high operational pressure (more than 500 psi) [52]. It has also been reported that pulsed electric discharge as...large amount of working fluid to the targeted temperature. In addition, energy loss to the ambient environment is another problem that significantly...heat. Gas and steam turbines as engine units were compared to determine the most suitable for the studied solar–bio hybrid system. The net capacity

  19. NPS CubeSat Launcher Design, Process and Requirements

    DTIC Science & Technology

    2009-06-01

    Soviet era ICBM. The first Dnepr launch in July 2006 consisted of fourteen CubeSats in five P-PODs, while the second in April 2007 consisted of...Regulations (ITAR). ITAR restricts the export of defense-related products and technology on the United States Munitions List. Although one might not...think that CubeSat technology would fall under ITAR, in fact a large amount of Aerospace technology , including some that could be used on CubeSats is

  20. Mechanical Transformation of Task Heuristics into Operational Procedures

    DTIC Science & Technology

    1981-04-14

    Introduction: A central theme of recent research in artificial intelligence is that intelligent task performance requires large amounts of knowledge... [OCR-garbled LISP production-rule listing omitted] ...Production rules as a representation for a knowledge based consultation system. Artificial Intelligence 8:15-45, Spring, 1977. [Davis 77b] R. Davis

  1. Multi-Sensory Features for Personnel Detection at Border Crossings

    DTIC Science & Technology

    2011-07-08

    challenging problem. Video sensors consume high amounts of power and require a large volume for storage. Hence, it is preferable to use non-imaging sensors...temporal distribution of gait beats [5]. At border crossings, animals such as mules, horses, or donkeys are often known to carry loads. Animal hoof...field, passive ultrasonic, sonar, and both infrared and visible video sensors. Each sensor suite is placed along the path with a spacing of 40 to

  2. Parallel processing implementations of a contextual classifier for multispectral remote sensing data

    NASA Technical Reports Server (NTRS)

    Siegel, H. J.; Swain, P. H.; Smith, B. W.

    1980-01-01

    Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.

  3. Methods of fabricating applique circuits

    DOEpatents

    Dimos, Duane B.; Garino, Terry J.

    1999-09-14

    Applique circuits suitable for advanced packaging applications are introduced. These structures are particularly suited for the simple integration of large amounts (many nanoFarads) of capacitance into conventional integrated circuit and multichip packaging technology. In operation, applique circuits are bonded to the integrated circuit or other appropriate structure at the point where the capacitance is required, thereby minimizing the effects of parasitic coupling. An immediate application is to problems of noise reduction and control in modern high-frequency circuitry.

  4. Affordable multisensor digital video architecture for 360° situational awareness displays

    NASA Astrophysics Data System (ADS)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e., low latency). Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address the development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  5. Readout Electronics for the Central Drift Chamber of the Belle-II Detector

    NASA Astrophysics Data System (ADS)

    Uchida, Tomohisa; Taniguchi, Takashi; Ikeno, Masahiro; Iwasaki, Yoshihito; Saito, Masatoshi; Shimazaki, Shoichi; Tanaka, Manobu M.; Taniguchi, Nanae; Uno, Shoji

    2015-08-01

    We have developed readout electronics for the central drift chamber (CDC) of the Belle-II detector. The space near the endplate of the CDC for installation of the electronics was limited by the detector structure. Due to the large amounts of data generated by the CDC, a high-speed data link, with a greater than one gigabit transfer rate, was required to transfer the data to a back-end computer. A new readout module was required to satisfy these requirements. This module processes 48 signals from the CDC, converts them to digital data and transfers it directly to the computer. All functions that transfer digital data via the high speed link were implemented on the single module. We have measured its electrical characteristics and confirmed that the results satisfy the requirements of the Belle-II experiment.

  6. Synesthetic art through 3-D projection: The requirements of a computer-based supermedium

    NASA Technical Reports Server (NTRS)

    Mallary, Robert

    1989-01-01

    A computer-based form of multimedia art is proposed that uses the computer to fuse aspects of painting, sculpture, dance, music, film, and other media into a one-to-one synesthesia of image and sound for spatially synchronous 3-D projection. Called synesthetic art, this conversion of many varied media into an aesthetically unitary experience determines the character and requirements of the system and its software. During the start-up phase, computer stereographic systems are unsuitable for software development. Eventually, a new type of illusory-projective supermedium will be required to achieve the needed combination of large-format projection and convincing real-life presence, and to handle the vast amount of 3-D visual and acoustic information required. The influence of the concept on the author's research and creative work is illustrated through two examples.

  7. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  8. Maximizing the Scientific Return of Low Cost Planetary Missions Using Solar Electric Propulsion(abstract)

    NASA Technical Reports Server (NTRS)

    Russell, C. T.; Metzger, A.; Pieters, C.; Elphic, R. C.; McCord, T.; Head, J.; Abshire, J.; Philips, R.; Sykes, M.; A'Hearn, M.; et al.

    1994-01-01

    After many years of development, solar electric propulsion is now a practical low cost alternative for many planetary missions. In response to the recent Discovery AO, we and a number of colleagues have examined the scientific return from a mission to map the Moon and then rendezvous with a small body. In planning this mission, we found that solar electric propulsion was quite affordable under the Discovery guidelines, that many targets could be reached more rapidly with solar electric propulsion than with chemical propulsion, that a large number of planetary bodies were accessible with modest propulsion systems, and that such missions were quite adaptable, with generous launch windows which minimized mission risks. Moreover, solar electric propulsion is ideally suited for large payloads requiring a large amount of power.

  9. 34 CFR Appendix C to Part 379 - Calculating Required Matching Amount

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 2 2013-07-01 2013-07-01 false Calculating Required Matching Amount C Appendix C to Part 379 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF.... C Appendix C to Part 379—Calculating Required Matching Amount 1. The method for calculating the...

  10. 34 CFR Appendix C to Part 379 - Calculating Required Matching Amount

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 2 2014-07-01 2013-07-01 true Calculating Required Matching Amount C Appendix C to Part 379 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF.... C Appendix C to Part 379—Calculating Required Matching Amount 1. The method for calculating the...

  11. Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts through either malicious activities, such as worm scans, spam distribution, or botnet control, or normal activities, such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To determine the cardinality of a host precisely, we would need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible in a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a small amount of information, called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and to estimate the number of these flows. We found that, while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited amount of memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable to large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
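
    The paper's cardinality summaries are not specified in the abstract, so the following is only a minimal sketch of the general idea of memory-bounded cardinality estimation, using a generic K-minimum-values (KMV) sketch rather than the authors' exact algorithm; the class name, parameter `k` and the example flow data are illustrative.

    ```python
    import hashlib

    class KMVSketch:
        """Estimate the number of distinct items (cardinality) with bounded memory.

        Generic K-minimum-values sketch, used only to illustrate memory-bounded
        cardinality estimation; it is not the paper's algorithm.
        """

        def __init__(self, k=256):
            self.k = k
            self.min_hashes = []  # the k smallest normalized hash values seen so far

        def _hash01(self, item):
            # Map an item to a pseudo-uniform value in [0, 1).
            digest = hashlib.sha1(str(item).encode()).hexdigest()[:15]
            return int(digest, 16) / 16 ** 15

        def add(self, item):
            h = self._hash01(item)
            if h in self.min_hashes:
                return
            if len(self.min_hashes) < self.k:
                self.min_hashes.append(h)
                self.min_hashes.sort()
            elif h < self.min_hashes[-1]:
                self.min_hashes[-1] = h
                self.min_hashes.sort()

        def estimate(self):
            if len(self.min_hashes) < self.k:
                return len(self.min_hashes)  # exact while the sketch is not full
            return (self.k - 1) / self.min_hashes[-1]

    # Example: one small sketch per source host, counting distinct destinations.
    flows = [("10.0.0.1", f"192.168.0.{i % 200}") for i in range(10000)]
    sketches = {}
    for src, dst in flows:
        sketches.setdefault(src, KMVSketch()).add(dst)
    print({src: round(s.estimate()) for src, s in sketches.items()})
    ```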

  12. [Determination of trace amounts of zinc in nickel electrolyte by flow injection on-line enrichment].

    PubMed

    Zhou, Z; Wang, Y; Dong, Z; Tong, K; Guo, X; Guo, X

    1999-10-01

    A method for the determination of trace amounts of zinc in nickel electrolyte using the flow injection on-line enrichment technique is reported in this paper. An atomic absorption spectrometer was used as the detector. Zinc was separated from large amounts of nickel and other components in the electrolyte by adsorption of its chlorocomplex on a mini-column packed with strongly basic anion exchangers. It was found that the sodium chloride contained in the electrolyte offered a sufficient chloride concentration for the formation of the zinc chlorocomplex, and thus no additional reagent was required for the determination. The throughput of the method is 30 determinations per hour. The detection limit of the method is 0.002 microg x mL(-1) and the precision is 1.9% (RSD). The proposed method is rapid and cost-effective. It has been used for almost three years in the quality control of the electrolyte in the factory with great success.

  13. Monitoring and Hardware Management for Critical Fusion Plasma Instrumentation

    NASA Astrophysics Data System (ADS)

    Carvalho, Paulo F.; Santos, Bruno; Correia, Miguel; Combo, Álvaro M.; Rodrigues, António P.; Pereira, Rita C.; Fernandes, Ana; Cruz, Nuno; Sousa, Jorge; Carvalho, Bernardo B.; Batista, António J. N.; Correia, Carlos M. B. A.; Gonçalves, Bruno

    2018-01-01

    Controlled nuclear fusion aims to obtain energy from collisions of particles confined inside a nuclear reactor (tokamak). These ionized particles, heavier isotopes of hydrogen, are the main constituents of a plasma that is kept at high temperatures (millions of degrees Celsius). Because of the high temperatures and the magnetic confinement, the plasma is exposed to several sources of instability, which require a set of procedures from the control and data acquisition systems throughout fusion experiments. Control and data acquisition systems used in nuclear fusion experiments are often based on the Advanced Telecommunications Computing Architecture (AdvancedTCA®) standard, introduced by the PCI Industrial Computer Manufacturers Group (PICMG®) to meet the demands of telecommunications, which require transporting large amounts of data (TB) at high transfer rates (Gb/s) while ensuring high availability, including reliability, serviceability and redundancy. For efficient plasma control, the systems are required to collect large amounts of data, process them, store them for later analysis, make critical decisions in real time and provide status reports on both the experiment itself and the electronic instrumentation involved. Moreover, the systems should ensure correct handling of detected anomalies and identified faults, and notify the system operator of events that occurred, decisions taken and changes implemented. Therefore, for everything to work in compliance with specifications, the instrumentation must include hardware management and monitoring mechanisms for both hardware and software. These mechanisms should check the system status by reading sensors, manage events, update inventory databases of the hardware components in use and under maintenance, store collected information, update firmware and installed software modules, and configure and handle alarms to detect possible system failures and prevent emergency scenarios. The goal is to ensure high availability of the system and to provide safe operation, experiment security and data validation for the fusion experiment. This work aims to contribute to the joint effort of the IPFN control and data acquisition group to develop a hardware management and monitoring application for control and data acquisition instrumentation especially designed for large-scale tokamaks like ITER.
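
    As a rough illustration of the kind of monitoring loop described above (read sensors, compare readings against thresholds, raise and log alarms), here is a minimal sketch; the sensor names, threshold values and the `read_sensor` stub are hypothetical stand-ins for the real AdvancedTCA hardware-management interfaces and are not taken from the paper.

    ```python
    import random
    import time

    # Hypothetical thresholds; a real system would load these from the shelf
    # manager's sensor data records rather than hard-coding them.
    THRESHOLDS = {
        "board_temp_C": 70.0,
        "supply_voltage_V": (11.4, 12.6),
    }

    def read_sensor(name):
        # Stub standing in for the real hardware-management read-out.
        if name == "board_temp_C":
            return 55.0 + random.uniform(-5.0, 25.0)
        return 12.0 + random.uniform(-1.0, 1.0)

    def check_once(alarm_log):
        temp = read_sensor("board_temp_C")
        if temp > THRESHOLDS["board_temp_C"]:
            alarm_log.append(("OVER_TEMPERATURE", round(temp, 1)))
        volt = read_sensor("supply_voltage_V")
        low, high = THRESHOLDS["supply_voltage_V"]
        if not (low <= volt <= high):
            alarm_log.append(("VOLTAGE_OUT_OF_RANGE", round(volt, 2)))

    alarms = []
    for _ in range(10):          # in practice this loop would run continuously
        check_once(alarms)
        time.sleep(0.01)
    print(alarms)
    ```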

  14. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to postulating suitable hypotheses; thus, an innovative storage solution that addresses limitations such as hard-disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of relational and NoSQL databases for fast and efficient storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, together with methods of interacting with the database, either through command-line data management workflows written in Perl, with functionalities that simplify the storage and manipulation of the massive amounts of data generated by RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361

  15. The effects of snowpack grain size on satellite passive microwave observations from the Upper Colorado River Basin

    USGS Publications Warehouse

    Josberger, E.G.; Gloersen, P.; Chang, A.; Rango, A.

    1996-01-01

    Understanding the passive microwave emissions of a snowpack, as observed by satellite sensors, requires knowledge of the snowpack properties: water equivalent, grain size, density, and stratigraphy. For the snowpack in the Upper Colorado River Basin, measurements of snow depth and water equivalent are routinely available from the U.S. Department of Agriculture, but extremely limited information is available for the other properties. To provide this information, a field program from 1984 to 1995 obtained profiles of snowpack grain size, density, and temperature near the time of maximum snow accumulation, at sites distributed across the basin. A synoptic basin-wide sampling program in 1985 showed that the snowpack exhibits consistent properties across large regions. Typically, the snowpack in the Wyoming region contains large amounts of depth hoar, with grain sizes up to 5 mm, while the snowpack in Colorado and Utah is dominated by rounded snow grains less than 2 mm in diameter. In the Wyoming region, large depth hoar crystals in shallow snowpacks yield the lowest emissivities or coldest brightness temperatures observed across the entire basin. Yearly differences in the average grain sizes result primarily from variations in the relative amount of depth hoar within the snowpack. The average grain size for the Colorado and Utah regions shows much less variation than do the grain sizes from the Wyoming region. Furthermore, the greatest amounts of depth hoar occur in the Wyoming region during 1987 and 1992, years with strong El Niño-Southern Oscillation, but the Colorado and Utah regions do not show this behavior.

  16. Minimization of energy and surface roughness of the products machined by milling

    NASA Astrophysics Data System (ADS)

    Belloufi, A.; Abdelkrim, M.; Bouakba, M.; Rezgui, I.

    2017-08-01

    Metal cutting represents a large portion of the manufacturing industries, which makes it one of the largest consumers of energy. Energy consumption is an indirect source of carbon footprint, since CO2 emissions come from the production of energy; high energy consumption therefore implies high cost and a large amount of CO2 emissions. To date, much research has been done on metal cutting, but the environmental problems of these processes are rarely discussed. The right selection of cutting parameters is an effective way to reduce energy consumption because of the direct relationship between energy consumption and cutting parameters in machining processes. Therefore, one objective of this research is to propose an optimization strategy suitable for machining processes (milling) that achieves the optimum cutting conditions based on the criterion of the energy consumed during milling. In this paper the problem of the energy consumed in milling is solved by a chosen optimization method. The optimization is done according to the different requirements of roughing and finishing under various technological constraints.

  17. 45 CFR 160.404 - Amount of a civil money penalty.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...

  18. 45 CFR 160.404 - Amount of a civil money penalty.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...

  19. 45 CFR 160.404 - Amount of a civil money penalty.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...

  20. 45 CFR 160.404 - Amount of a civil money penalty.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...

  1. 45 CFR 160.404 - Amount of a civil money penalty.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Amount of a civil money penalty. 160.404 Section... RELATED REQUIREMENTS GENERAL ADMINISTRATIVE REQUIREMENTS Imposition of Civil Money Penalties § 160.404 Amount of a civil money penalty. (a) The amount of a civil money penalty will be determined in accordance...

  2. 30 CFR 243.8 - When will MMS suspend my obligation to comply with an order?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...), and: (1) If the amount under appeal is less than $10,000 or does not require payment of a specified... the amount under appeal is less than $1,000 or does not require payment, MMS will suspend your... paying any demanded amount or complying with any other requirement pending appeal. However, voluntarily...

  3. Large optical glass blanks for the ELT generation

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Petzold, Uwe; Dietrich, Volker; Wittmer, Volker; Rexius, Olga

    2016-07-01

    The upcoming extremely large telescope projects like the E-ELT, TMT or GMT require not only a large amount of mirror blank substrates but also sophisticated instrument setups. Common instrument components are atmospheric dispersion correctors that compensate for the varying atmospheric path length depending on the telescope inclination angle. These elements usually consist of optical glass blanks that have to be large because of the increased size of the focal beam of the extremely large telescopes. SCHOTT has long experience in producing and delivering large optical glass blanks for astronomical applications up to 1 m, in homogeneity grades up to H3 quality. The most common optical glass available in large formats is SCHOTT N-BK7, but other glass types like F2 or LLF1 can also be produced in formats up to 1 m. The extremely large telescope projects partly demand atmospheric dispersion components in sizes beyond 1 m, up to about 1.5 m in diameter. The production of such large homogeneous optical glass blanks requires tight control of all process steps. To cover this future demand, SCHOTT initiated a research project to improve the large optical blank production process steps from melting to annealing and measurement. Large optical glass blanks are measured in several sub-apertures that cover the total clear aperture of the application. With SCHOTT's new stitching software it is now possible to combine individual sub-aperture measurements into a total homogeneity map of the blank. First results are presented.

  4. Margin based ontology sparse vector learning algorithm and applied in biology science.

    PubMed

    Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad

    2017-01-01

    In the biology field, ontology applications relate to large amounts of genetic information and chemical information about molecular structure, so knowledge of ontology concepts conveys much information. Therefore, in mathematical notation, the dimension of the vector corresponding to an ontology concept is often very large, which raises the requirements on ontology algorithms. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using knowledge of the marginal likelihood and marginal distribution, an optimized strategy for the margin-based ontology sparse vector learning algorithm is presented. Finally, the new algorithm is applied to the gene ontology and plant ontology to verify its efficiency.

  5. A smoothing algorithm using cubic spline functions

    NASA Technical Reports Server (NTRS)

    Smith, R. E., Jr.; Price, J. M.; Howser, L. M.

    1974-01-01

    Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
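
    The report's own explicit- and parametric-variable algorithms are not reproduced here, but the idea of a least-squares cubic-spline smoothing of noisy data can be sketched with SciPy's smoothing splines as an analogous modern approach; the data, the smoothing factor `s` (which trades fidelity against smoothness) and the random seed are all illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Noisy samples of a smooth underlying curve.
    x = np.linspace(0.0, 2.0 * np.pi, 50)
    y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)

    # Cubic (k=3) least-squares smoothing spline; a larger s allows more smoothing.
    spline = UnivariateSpline(x, y, k=3, s=0.5)

    # The fitted spline has continuous first and second derivatives.
    xs = np.linspace(0.0, 2.0 * np.pi, 200)
    smooth_y = spline(xs)
    first_deriv = spline.derivative(1)(xs)
    second_deriv = spline.derivative(2)(xs)
    print(smooth_y[:3], first_deriv[:3], second_deriv[:3])
    ```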

  6. Spaceport Command and Control System Software Development

    NASA Technical Reports Server (NTRS)

    Glasser, Abraham

    2017-01-01

    The Spaceport Command and Control System (SCCS) is the National Aeronautics and Space Administration's (NASA) launch control system for the Orion capsule and Space Launch System, the next generation manned rocket currently in development. This large system requires a large amount of intensive testing that will properly measure the capabilities of the system. Automating the test procedures would save the project money from human labor costs, as well as making the testing process more efficient. Therefore, the Exploration Systems Division (formerly the Electrical Engineering Division) at Kennedy Space Center (KSC) has recruited interns for the past two years to work alongside full-time engineers to develop these automated tests, as well as innovate upon the current automation process.

  7. Using technology to support investigations in the electronic age: tracking hackers to large scale international computer fraud

    NASA Astrophysics Data System (ADS)

    McFall, Steve

    1994-03-01

    With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and to find methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.

  8. From stable to unstable anomaly-induced inflation

    NASA Astrophysics Data System (ADS)

    Netto, Tibério de Paula; Pelinson, Ana M.; Shapiro, Ilya L.; Starobinsky, Alexei A.

    2016-10-01

    Quantum effects derived through the conformal anomaly lead to an inflationary model that can be either stable or unstable. The unstable version requires a large dimensionless coefficient of about 5×10^8 in front of the R^2 term, which results in the inflationary regime in the R+R^2 ("Starobinsky") model being a generic intermediate attractor. In this case the non-local terms in the effective action are practically irrelevant, and there is a `graceful exit' to a low-curvature matter-like dominated stage driven by high-frequency oscillations of R (scalarons), which later decay to pairs of all particles and antiparticles, with the amount of primordial scalar (density) perturbations required by observations. The stable version is a genuine generic attractor, so there is no exit from it. We discuss a possible transition from the stable to the unstable phase of inflation. It is shown that this transition is automatic if the sharp cut-off approximation is assumed for quantum corrections in the period of transition. Furthermore, we describe two different quantum mechanisms that may provide the required large R^2 term in the transition period.
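
    For orientation, the R + R^2 action can be written, in one common convention, as in the sketch below; the identification of the quoted dimensionless coefficient of roughly 5×10^8 with M_P^2/(12M^2), and the value M ≈ 1.3×10^-5 M_P fixed by the observed perturbation amplitude, are our own illustrative reading and are not formulas taken from the paper.

    ```latex
    % One common way of writing the R + R^2 (Starobinsky) action; the numerical
    % identification below is an illustrative assumption, not taken from the paper.
    S \;=\; \int \mathrm{d}^4x\,\sqrt{-g}\left[\frac{M_P^2}{2}\,R \;+\; c\,R^2\right],
    \qquad
    c \;=\; \frac{M_P^2}{12\,M^2} \;\approx\; 5\times10^{8}
    \quad\text{for}\quad M \;\approx\; 1.3\times10^{-5}\,M_P .
    ```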

  9. Semantic orchestration of image processing services for environmental analysis

    NASA Astrophysics Data System (ADS)

    Ranisavljević, Élisabeth; Devin, Florent; Laffly, Dominique; Le Nir, Yannick

    2013-09-01

    In order to analyze environmental dynamics, a major process is the classification of the different phenomena of the site (e.g. ice and snow for a glacier). When using in situ pictures, this classification requires data pre-processing, and not all the pictures need the same sequence of processes, depending on the disturbances. Until now, these sequences have been done manually, which restricts the processing of large amounts of data. In this paper, we present how to realize a semantic orchestration to automate the sequencing for the analysis. It combines two advantages: solving the problem of the amount of processing, and diversifying the possibilities in the data processing. We define a BPEL description to express the sequences. This BPEL uses web services to run the data processing. Each web service is semantically annotated using an ontology of image processing. The dynamic modification of the BPEL is done using SPARQL queries on these annotated web services. The results obtained by a prototype implementing this method validate the construction of the different workflows, which can be applied to a large number of pictures.

  10. Using 3D infrared imaging to calibrate and refine computational fluid dynamic modeling for large computer and data centers

    NASA Astrophysics Data System (ADS)

    Stockton, Gregory R.

    2011-05-01

    Over the last 10 years, very large government, military, and commercial computer and data center operators have spent millions of dollars trying to optimally cool data centers, as each rack has begun to consume as much as 10 times more power than just a few years ago. In fact, the maximum amount of computation in a computer center is becoming limited by the amount of available power, space and cooling capacity at some data centers. Tens of millions of dollars and megawatts of power are spent annually to keep data centers cool. The cooling and air flows change dynamically, departing from any 3-D computational fluid dynamic modeling predicted during construction, and as time goes by the efficiency and effectiveness of the actual cooling depart even farther from the predicted models. By using 3-D infrared (IR) thermal mapping and other techniques to calibrate and refine the computational fluid dynamic modeling and to make appropriate corrections and repairs, the required power for data centers can be dramatically reduced, which lowers costs and also improves reliability.

  11. Preparation of a novel hyperbranched carbosilane-silica hybrid coating for trace amount detection by solid phase microextraction/gas chromatography.

    PubMed

    Chen, Guowen; Li, Wenjie; Zhang, Chen; Zhou, Chuanjian; Feng, Shengyu

    2012-09-21

    Phenyl-ended hyperbranched carbosilane (HBC) is synthesized and immobilized onto the inner wall of a fused silica capillary column using a sol-gel process. The hybrid coating layer formed is used as a stationary phase for gas chromatography (GC) and as an adsorption medium for solid phase microextraction (SPME). Trifluoroacetic acid, as a catalyst in this process, helps produce a homogeneous hybrid coating layer, which is beneficial for column chromatographic performance, such as high efficiency and high resolution. Extraction tests using the novel hybrid layer show an extraordinarily large adsorption capacity and specific adsorption behavior for aromatic compounds. A 1 ppm trace-level detectability is obtained with the SPME/GC working model when both the stationary phase and the adsorption layer bear a hyperbranched structure. The large number of phenyl groups and the low viscosity of hyperbranched polymers contribute to these valuable properties, which are important for environmental and safety control, where detection sensitivity and specific adsorption behavior are usually required. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
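
    As a generic illustration of variance-based sensitivity analysis (not of the Polyphemus/Polair3D setup itself), the sketch below estimates first-order Sobol' indices with a standard pick-and-freeze Monte Carlo estimator applied to a cheap stand-in function; in the paper that role is played by the Gaussian-process emulator of the dispersion model, and the stand-in function, input dimension and sample size here are purely illustrative.

    ```python
    import numpy as np

    def cheap_model(x):
        """Stand-in for an emulator of the dispersion model: 3 uncertain inputs in [0, 1]."""
        return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

    def first_order_sobol(model, dim, n=20000, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.random((n, dim))
        B = rng.random((n, dim))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        indices = []
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # "pick and freeze" column i
            fABi = model(ABi)
            # Saltelli-style Monte Carlo estimator of the first-order effect of input i.
            indices.append(np.mean(fB * (fABi - fA)) / var)
        return indices

    print(first_order_sobol(cheap_model, dim=3))
    ```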

  13. A special planning technique for stream-aquifer systems

    USGS Publications Warehouse

    Jenkins, C.T.; Taylor, O. James

    1974-01-01

    The potential effects of water-management plans on stream-aquifer systems in several countries have been simulated using electric-analog or digital-computer models. Many of the electric-analog models require large amounts of hardware preparation for each problem to be solved and some become so bulky that they present serious space and access problems. Digital-computer models require no special hardware preparation but often they require so many repetitive solutions of equations that they result in calculations that are unduly unwieldy and expensive, even on the latest generation of computers. Further, the more detailed digital models require a vast amount of core storage, leaving insufficient storage for evaluation of the many possible schemes of water-management. A concept introduced in 1968 by the senior author of this report offers a solution to these problems. The concept is that the effects on streamflow of ground-water withdrawal or recharge (stress) at any point in such a system can be approximated using two classical equations and a value of time that reflects the integrated effect of the following: irregular impermeable boundaries; stream meanders; aquifer properties and their areal variations; distance of the point from the stream; and imperfect hydraulic connection between the stream and the aquifer. The value of time is called the stream depletion factor (sdf). Results of a relatively few tests on detailed models can be summarized on maps showing lines through points of equal sdf. Sensitivity analyses of models of two large stream-aquifer systems in the State of Colorado show that the sdf technique described in this report provides results within tolerable ranges of error. The sdf technique is extremely versatile, allowing water managers to choose the degree of detail that best suits their needs and available computational hardware. Simple arithmetic, using, for example, only a slide rule and charts or tables of dimensionless values, will be sufficient for many calculations. If a large digital computer is available, detailed description of the system and its stresses will require only a fraction of the core storage, leaving the greater part of the storage available for sophisticated analyses, such as optimization. Once these analyses have been made, the model then is ready to perform its principal task--prediction of streamflow and changes in ground-water storage. In the two systems described in this report, direct diversion from the streams is the principal source of irrigation water, but it is supplemented by numerous wells. The streamflow depends largely on snowmelt. Estimates of both the amount and timing of runoff from snowmelt during the irrigation season are available on a monthly basis during the spring and early summer. These estimates become increasingly accurate as the season progresses, hence frequent changes of stress on the predictive model are necessary. The sdf technique is especially well suited to this purpose, because it is very easy to make such changes, resulting in more up-to-date estimates of the availability of streamflow and ground-water storage. These estimates can be made for any time and any location in the system.
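
    The two classical equations are not spelled out in this summary, but one form commonly associated with the sdf concept is the Glover-Balmer relation, in which the fraction of the pumping rate supplied by stream depletion after time t is erfc(sqrt(sdf / 4t)) with sdf = d^2 S / T; the sketch below is only an illustration under that assumption, with made-up aquifer numbers, and is not taken from the report.

    ```python
    from math import erfc, sqrt

    def sdf_days(d_m, storativity, transmissivity_m2_per_day):
        """Stream depletion factor sdf = d^2 * S / T (units of time, here days)."""
        return d_m ** 2 * storativity / transmissivity_m2_per_day

    def depletion_fraction(t_days, sdf):
        """Fraction of the pumping rate supplied by stream depletion after t_days of
        continuous pumping (Glover-Balmer form for an idealized aquifer)."""
        return erfc(sqrt(sdf / (4.0 * t_days)))

    # Illustrative numbers only: a well 500 m from the stream.
    sdf = sdf_days(d_m=500.0, storativity=0.2, transmissivity_m2_per_day=1000.0)
    for t in (10, 30, 100, 365):
        print(t, round(depletion_fraction(t, sdf), 3))
    ```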

  14. Expression, purification, and characterization of almond (Prunus dulcis) allergen Pru du 4.

    PubMed

    Zhang, Yuzhu; Du, Wen-Xian; Fregevu, Cécile; Kothary, Mahendra H; Harden, Leslie; McHugh, Tara H

    2014-12-31

    Biochemical characterizations of food allergens are required for understanding the allergenicity of food allergens. Such studies require a relatively large amount of highly purified allergens. The level of Pru du 4 in almond is low, and its expression in a soluble form in Escherichia coli required an expression tag. An MBP tag was used to enhance its expression and solubility. Sumo was used for the first time as a peptidase recognition site. The expression tag was removed with a sumo protease, and the resulting wild-type Pru du 4 was purified chromatographically. The stability of the allergen was investigated with chemical denaturation. The Gibbs free energy of Pru du 4 folding-unfolding transition was determined to be 5.4 ± 0.7 kcal/mol.

  15. High-Temperature Beverages and Foods and Esophageal Cancer Risk -- A Systematic Review

    PubMed Central

    Islami, Farhad; Boffetta, Paolo; Ren, JianSong; Pedoeim, Leah; Khatib, Dara; Kamangar, Farin

    2009-01-01

    Coffee, tea, and maté may cause esophageal cancer (EC) by causing thermal injury to the esophageal mucosa. If so, the risk of EC attributable to thermal injury could be large in populations in which these beverages are commonly consumed. In addition, these drinks may cause or prevent EC via their chemical constituents. Therefore, a large number of epidemiologic studies have investigated the association of an indicator of amount or temperature of use of these drinks or other hot foods and beverages with risk of EC. We conducted a systematic review of these studies, and report the results for amount and temperature of use separately. By searching PubMed and the ISI, we found 59 eligible studies. For coffee and tea, there was little evidence for an association between amount of use and EC risk; however, the majority of studies showed an increased risk of EC associated with higher drinking temperature which was statistically significant in most of them. For maté drinking, the number of studies was limited, but they consistently showed that EC risk increased with both amount consumed and temperature, and these two were independent risk factors. For other hot foods and drinks, over half of the studies showed statistically significant increased risks of EC associated with higher temperature of intake. Overall, the available results strongly suggest that high-temperature beverage drinking increases the risk of EC. Future studies will require standardized strategies that allow for combining data, and results should be reported by histological subtypes of EC. PMID:19415743

  16. Energy carries information

    NASA Astrophysics Data System (ADS)

    Ilgin, Irfan; Yang, I.-Sheng

    2014-08-01

    We show that for every qubit of quantum information, there is a well-defined notion of "the amount of energy that carries it," because it is a conserved quantity. This generalizes to larger systems and any conserved quantities: the eigenvalue spectrum of conserved charges has to be preserved while transferring quantum information. It is possible to "apparently" violate these conservations by losing a small fraction of information, but that must invoke a specific process which requires a large scale coherence. We discuss its implication regarding the black hole information paradox.

  17. Set-Membership Identification for Robust Control Design

    DTIC Science & Technology

    1993-04-28

    ... Also, in our application, the size of the matrices involved is quite large and special attention should be paid to memory management and algorithmic implementation; otherwise huge amounts of memory will be required to perform the optimization even for modest values of M and N.

  18. Crustal entrainment and pulsar glitches.

    PubMed

    Chamel, N

    2013-01-04

    Large pulsar frequency glitches are generally interpreted as sudden transfers of angular momentum between the neutron superfluid permeating the inner crust and the rest of the star. Despite the absence of viscous drag, the neutron superfluid is strongly coupled to the crust due to nondissipative entrainment effects. These effects are shown to severely limit the maximum amount of angular momentum that can possibly be transferred during glitches. In particular, it is found that the glitches observed in the Vela pulsar require an additional reservoir of angular momentum.

  19. Development of dielectric elastomer nanocomposites as stretchable actuating materials

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Sun, L. Z.

    2017-10-01

    Dielectric elastomer nanocomposites (DENCs) filled with multi-walled carbon nanotubes are developed. The electromechanical responses of DENCs to applied electric fields are investigated through laser Doppler vibrometry. It is found that a small amount of carbon nanotube filler can effectively enhance the electromechanical performance of DENCs. The enhanced electromechanical properties show not only that the desired thickness strain can be achieved with a reduced applied electric field, but also that a significantly larger thickness strain can be obtained at any given electric field compared to pristine dielectric elastomers.

  20. A multi-landing pad DNA integration platform for mammalian cell engineering

    PubMed Central

    Gaidukov, Leonid; Wroblewska, Liliana; Teague, Brian; Nelson, Tom; Zhang, Xin; Liu, Yan; Jagtap, Kalpana; Mamo, Selamawit; Tseng, Wen Allen; Lowe, Alexis; Das, Jishnu; Bandara, Kalpanie; Baijuraj, Swetha; Summers, Nevin M; Zhang, Lin; Weiss, Ron

    2018-01-01

    Engineering mammalian cell lines that stably express many transgenes requires the precise insertion of large amounts of heterologous DNA into well-characterized genomic loci, but current methods are limited. To facilitate reliable large-scale engineering of CHO cells, we identified 21 novel genomic sites that supported stable long-term expression of transgenes, and then constructed cell lines containing one, two or three 'landing pad' recombination sites at selected loci. By using a highly efficient BxB1 recombinase along with different selection markers at each site, we directed recombinase-mediated insertion of heterologous DNA to selected sites, including targeting all three with a single transfection. We used this method to controllably integrate up to nine copies of a monoclonal antibody, representing about 100 kb of heterologous DNA in 21 transcriptional units. Because the integration was targeted to pre-validated loci, recombinant protein expression remained stable for weeks and additional copies of the antibody cassette in the integrated payload resulted in a linear increase in antibody expression. Overall, this multi-copy site-specific integration platform allows for controllable and reproducible insertion of large amounts of DNA into stable genomic sites, which has broad applications for mammalian synthetic biology, recombinant protein production and biomanufacturing. PMID:29617873

  1. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

    Real-time monitoring of engineering structures in case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a probable rescue action. One of the more significant evaluation methods for large sets of data, either collected during a specified interval of time or permanently, is time series analysis. In this paper, a search algorithm is presented for those time series elements which deviate from the values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. Sensitivity analyses were conducted for the moving-average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out and the verification on laboratory survey data showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different sensors (total stations, leveling, cameras, radar).
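
    The paper's exact formulae are not given in this summary, so the following is only a generic sketch of the moving-average idea: flag observations that deviate from a trailing mean by more than a threshold. The window size, threshold and simulated displacement series are illustrative, not the authors' settings.

    ```python
    import numpy as np

    def moving_average_outliers(series, window=20, n_sigmas=3.0):
        """Flag samples that deviate from the trailing moving average by more than
        n_sigmas times the trailing standard deviation (generic illustration)."""
        series = np.asarray(series, dtype=float)
        flags = []
        for i in range(window, len(series)):
            recent = series[i - window:i]
            mu, sigma = recent.mean(), recent.std()
            if sigma > 0 and abs(series[i] - mu) > n_sigmas * sigma:
                flags.append(i)
        return flags

    # Simulated displacement readings with one sudden jump at sample 150.
    rng = np.random.default_rng(1)
    readings = rng.normal(0.0, 0.05, 300)
    readings[150:] += 1.0      # abrupt structural movement
    print(moving_average_outliers(readings))
    ```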

  2. The distribution of soil phosphorus for global biogeochemical modeling

    DOE PAGES

    Yang, Xiaojuan; Post, Wilfred M.; Thornton, Peter E.; ...

    2013-04-16

    Phosphorus (P) is a major element required for biological activity in terrestrial ecosystems. Although the total P content in most soils can be large, only a small fraction is available or in an organic form for biological utilization, because it is bound either in incompletely weathered mineral particles, adsorbed on mineral surfaces, or, over the time of soil formation, made unavailable by secondary mineral formation (occluded). In order to adequately represent phosphorus availability in global biogeochemistry-climate models, a representation of the amount and form of P in soils globally is required. We develop an approach that builds on existing knowledge of soil P processes and databases of parent material and soil P measurements to provide spatially explicit estimates of different forms of naturally occurring soil P on the global scale. We assembled data on the various forms of phosphorus in soils globally, chronosequence information, and several global spatial databases to develop a map of total soil P and the distribution among mineral bound, labile, organic, occluded, and secondary P forms in soils globally. The amount of P, to 50 cm soil depth, in the soil labile, organic, occluded, and secondary pools is 3.6 ± 3, 8.6 ± 6, 12.2 ± 8, and 3.2 ± 2 Pg P (petagrams of P, 1 Pg = 1 × 10^15 g), respectively. The amount in soil mineral particles to the same depth is estimated at 13.0 ± 8 Pg P, for a global soil total of 40.6 ± 18 Pg P. The large uncertainty in our estimates reflects our limited understanding of the processes controlling soil P transformations during pedogenesis and a deficiency in the number of soil P measurements. In spite of the large uncertainty, the estimated global spatial variation and distribution of different soil P forms presented in this study will be useful for global biogeochemistry models that include P as a limiting element in biological production, by providing initial estimates of the available soil P for plant uptake and microbial utilization.

  3. Centroiding Experiment for Determining the Positions of Stars with High Precision

    NASA Astrophysics Data System (ADS)

    Yano, T.; Araki, H.; Hanada, H.; Tazawa, S.; Gouda, N.; Kobayashi, Y.; Yamada, Y.; Niwa, Y.

    2010-12-01

    We have experimented with the determination of the positions of star images on a detector with high precision, on the order of 10 microarcseconds, as required by the space astrometry satellite JASMINE. In order to accomplish such precision, we take the following two procedures. (1) We determine the positions of star images on the detector with a precision of about 0.01 pixel for one measurement, using an algorithm that estimates them from photon-weighted means of the star images. (2) We determine the positions of star images with a precision of about 0.0001-0.00001 pixel, which corresponds to 10 microarcseconds, using a large amount of data over 10000 measurements; that is, the error of the positions decreases with the amount of data. Here, we note that procedure (2) is not accomplished when the systematic error in our data is not excluded adequately, even if we use a large amount of data. We first show the method to determine the positions of star images on the detector using photon-weighted means of star images. This algorithm, used in this experiment, is very useful because it is easy to calculate the photon-weighted mean from the data, which is very important when treating a large amount of data. Furthermore, we need not assume the shape of the point spread function in deriving the centroid of star images. Second, we show the results of the laboratory experiment on the precision of determining the positions of star images. We find that the precision of the estimated positions of star images on the detector is below a variance of 0.01 pixel for one measurement (procedure 1). We also find that the precision of the positions of star images reaches a variance of about 0.0001 pixel using about 10000 measurements (procedure 2).
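
    A minimal version of procedure (1), the photon-weighted mean, can be sketched as follows for a simulated star image; the pixel grid, Gaussian point-spread function and photon counts are illustrative, and the sub-pixel precision quoted in the paper of course also depends on averaging many measurements as in procedure (2).

    ```python
    import numpy as np

    def photon_weighted_centroid(image):
        """Centroid (x, y) of a star image as the photon-count-weighted mean pixel position."""
        ny, nx = image.shape
        ys, xs = np.mgrid[0:ny, 0:nx]
        total = image.sum()
        return (xs * image).sum() / total, (ys * image).sum() / total

    # Simulate a Gaussian star image centred at (12.3, 8.7) with photon (Poisson) noise.
    rng = np.random.default_rng(0)
    ys, xs = np.mgrid[0:24, 0:24]
    psf = 1e4 * np.exp(-((xs - 12.3) ** 2 + (ys - 8.7) ** 2) / (2 * 2.0 ** 2))
    image = rng.poisson(psf).astype(float)

    print(photon_weighted_centroid(image))   # close to (12.3, 8.7)
    ```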

  4. Identification of p53 unbound to T-antigen in human cells transformed by simian virus 40 T-antigen.

    PubMed

    O'Neill, F J; Hu, Y; Chen, T; Carney, H

    1997-02-27

    In several clones of SV40-transformed human cells, we investigated the relative amounts of large T-Antigen (T-Ag) and p53 proteins, both unbound and associated within complexes, with the goal of identifying changes associated with transformation and immortalization. Cells were transformed by wild type (wt) T-Ag, a functionally temperature sensitive T-Ag (tsA58) and other T-Ag variants. Western analysis showed that while most of the T-Ag was ultimately bound by p53, most of the p53 remained unbound to T-Ag. Unbound p53 remained in the supernatant after a T-Ag immunoprecipitation and p53 was present in two to fourfold excess of T-Ag. In one transformant there was five to tenfold more p53 than T-Ag. p53 was present in transformants in amounts at least 200-fold greater than in untransformed human cells. In wt and variant T-Ag transformants, including those generated with tsA58 T-Ag, large amounts of unbound p53 were present in both pre-crisis and immortal cells and when the cells were grown at permissive or non-permissive temperatures. We also found that in transformants produced by tsA58, an SV40/JCV chimeric T-Ag and other variants, T-Ag appeared to form a complex with p53 slowly perhaps because one or both proteins matured slowly. The presence in transformed human cells of large amounts of unbound p53 and in excess of T-Ag suggests that sequestration of p53 by T-Ag, resulting from complex formation, is required neither for morphological transformation nor immortalization of human cells. Rather, these results support the proposal that high levels of p53, the T-Ag/p53 complexes, or other biochemical event(s), lead to transformation and immortalization of human cells by T-Ag.

  5. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, since the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for large amounts of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653
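
    The LSD metric and the GPU kernels are specific to the paper and are not reproduced here, but the lookup-table idea for cubic B-spline FFD can be illustrated generically: the four basis weights depend only on the fractional position inside a grid cell, so they can be precomputed once instead of being re-evaluated per voxel. The table resolution below is illustrative.

    ```python
    import numpy as np

    def cubic_bspline_weights(u):
        """The four uniform cubic B-spline basis weights at fractional offset u in [0, 1)."""
        return np.array([
            (1 - u) ** 3 / 6.0,
            (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
            (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
            u ** 3 / 6.0,
        ])

    # Precompute a lookup table so the weights are never re-evaluated per voxel.
    LUT_SIZE = 1024
    LUT = np.stack([cubic_bspline_weights(i / LUT_SIZE) for i in range(LUT_SIZE)])

    def weights_from_lut(u):
        # Valid for u in [0, 1); nearest precomputed entry.
        return LUT[int(u * LUT_SIZE)]

    # The four weights always sum to one (partition of unity).
    print(weights_from_lut(0.37), weights_from_lut(0.37).sum())
    ```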

  6. IT Data Mining Tool Uses in Aerospace

    NASA Technical Reports Server (NTRS)

    Monroe, Gilena A.; Freeman, Kenneth; Jones, Kevin L.

    2012-01-01

    Data mining has a broad spectrum of uses throughout the realms of aerospace and information technology. Each of these areas has useful methods for processing, distributing, and storing its corresponding data. This paper focuses on ways to leverage the data mining tools and resources used in NASA's information technology area to meet the similar data mining needs of aviation and aerospace domains. This paper details the searching, alerting, reporting, and application functionalities of the Splunk system, used by NASA's Security Operations Center (SOC), and their potential shared solutions to address aircraft and spacecraft flight and ground systems data mining requirements. This paper also touches on capacity and security requirements when addressing sizeable amounts of data across a large data infrastructure.

  7. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    PubMed

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input to a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.

  8. Singlet oxygen detection in biological systems: Uses and limitations

    PubMed Central

    Koh, Eugene; Fluhr, Robert

    2016-01-01

    ABSTRACT The study of singlet oxygen in biological systems is challenging in many ways. Singlet oxygen is a relatively unstable ephemeral molecule, and its properties make it highly reactive with many biomolecules, making it difficult to quantify accurately. Several methods have been developed to study this elusive molecule, but most studies thus far have focused on those conditions that produce relatively large amounts of singlet oxygen. However, the need for more sensitive methods is required as one begins to explore the levels of singlet oxygen required in signaling and regulatory processes. Here we discuss the various methods used in the study of singlet oxygen, and outline their uses and limitations. PMID:27231787

  9. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.

    PubMed

    Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
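
    The paper's contribution is the stochastic-computing FPGA implementation, which is not reproduced here; the sketch below is only the plain software form of an echo state network with a ridge-regression readout, to make the underlying Reservoir Computing structure concrete. The reservoir size, spectral radius, ridge parameter and the noisy sine-wave forecasting task are all illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One-step-ahead forecasting of a noisy sine wave.
    t = np.arange(2000)
    u = np.sin(0.05 * t) + 0.05 * rng.normal(size=t.size)

    N = 200                                       # reservoir size
    W_in = 0.5 * rng.uniform(-1, 1, (N, 1))       # fixed random input weights
    W = rng.uniform(-0.5, 0.5, (N, N))            # fixed random recurrent weights
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    # Collect reservoir states driven by the input signal.
    x = np.zeros(N)
    states = []
    for value in u[:-1]:
        x = np.tanh(W_in[:, 0] * value + W @ x)
        states.append(x.copy())
    X = np.array(states)                          # shape (T-1, N)
    y = u[1:]                                     # next-step targets

    # Ridge-regression readout: the only trained part of the network.
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

    pred = X @ W_out
    print("train MSE:", np.mean((pred - y) ** 2))
    ```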

  10. FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting

    PubMed Central

    Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.

    2016-01-01

    Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876

  11. Modeling a distributed environment for a petroleum reservoir engineering application with software product line

    NASA Astrophysics Data System (ADS)

    de Faria Scheidt, Rafael; Vilain, Patrícia; Dantas, M. A. R.

    2014-10-01

    Petroleum reservoir engineering is a complex and interesting field that requires large amounts of computational facilities to achieve successful results. Usually, software environments for this field are developed without taking into account the possible interactions and extensibility required by reservoir engineers. In this paper, we present a research work characterized by the design and implementation of a software product line model for a real distributed reservoir engineering environment. Experimental results indicate the successful utilization of this approach for the design of a distributed software architecture. In addition, all components of the proposal provided greater visibility of the organization and processes for the reservoir engineers.

  12. Microwash or macrowash technique to maintain a clear cornea during cataract surgery.

    PubMed

    Amjadi, Shahriar; Roufas, Athena; Figueira, Edwin C; Bhardwaj, Gaurav; Francis, Katherine E; Masselos, Katherine; Francis, Ian C

    2010-09-01

    We describe a technique of irrigating and thereby rapidly and effectively clearing the cornea of relatively large amounts of surface contaminants that reduce surgical visibility and may contribute to endophthalmitis. This technique is referred to as "macrowash." If the technique is required, it is usually at the commencement of cataract surgery, immediately after placement of the surgical drape. The technique not only saves time, but also reduces the volume of irrigating solution required by the "microwash" technique, which is traditionally carried out by the scrub nurse/surgical assistant using a Rycroft cannula attached to a 15 mL container of irrigating solution. Copyright (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  13. Long-term adequacy of metal resources

    USGS Publications Warehouse

    Singer, D.A.

    1977-01-01

    Although the earth's crust contains vast quantities of metals, extraction technologies and associated costs are inextricably bound to three fundamental geological factors - the amount of metal available in the earth's crust in each range of grades, the mineralogical form and chemical state of the metal, and the spatial distribution of the metal. The energy required to recover a given amount of metal increases substantially as grade declines. Most metal is produced from sulphide or oxide minerals, whereas most metal in the crust may be locked in the structures of the more refractory silicates. Recovery from silicate minerals could require orders of magnitude more energy than that used at present as also could exploitation of small, widely scattered or thin, deeply buried deposits. Although specific information on the fundamental factors is not available, each factor must in turn tend to further restrict exploitation. Independence of average grade and tonnage for many deposit types further reduces the availability of rock as a source of metal. In the long term, effects of these factors will be large increases in price for many metals. ?? 1977.

  14. Molecular Hydrogen as an Emerging Therapeutic Medical Gas for Neurodegenerative and Other Diseases

    PubMed Central

    Ohno, Kinji; Ito, Mikako; Ichihara, Masatoshi; Ito, Masafumi

    2012-01-01

    Effects of molecular hydrogen on various diseases have been documented for 63 disease models and human diseases in the past four and a half years. Most studies have been performed on rodents including two models of Parkinson's disease and three models of Alzheimer's disease. Prominent effects are observed especially in oxidative stress-mediated diseases including neonatal cerebral hypoxia; Parkinson's disease; ischemia/reperfusion of spinal cord, heart, lung, liver, kidney, and intestine; transplantation of lung, heart, kidney, and intestine. Six human diseases have been studied to date: diabetes mellitus type 2, metabolic syndrome, hemodialysis, inflammatory and mitochondrial myopathies, brain stem infarction, and radiation-induced adverse effects. Two enigmas, however, remain to be solved. First, no dose-response effect is observed. Rodents and humans are able to take a small amount of hydrogen by drinking hydrogen-rich water, but marked effects are observed. Second, intestinal bacteria in humans and rodents produce a large amount of hydrogen, but an addition of a small amount of hydrogen exhibits marked effects. Further studies are required to elucidate molecular bases of prominent hydrogen effects and to determine the optimal frequency, amount, and method of hydrogen administration for each human disease. PMID:22720117

  15. Transfer learning improves supervised image segmentation across imaging protocols.

    PubMed

    van Opbroek, Annegreet; Ikram, M Arfan; Vernooij, Meike W; de Bruijne, Marleen

    2015-05-01

    The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, minimizing classification errors by up to 60%.
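
    One simple way to combine a small representative training set with a larger, differently distributed one is to weight the representative samples more heavily during training. The sketch below illustrates that idea with an instance-weighted SVM in scikit-learn; the synthetic features, labels, and weight factor are illustrative assumptions, and this is not one of the four transfer classifiers evaluated in the paper.

    ```python
    # Minimal sketch: instance-weighted SVM for supervised transfer.
    # Assumption: a few labeled samples representative of the target data
    # (X_rep, y_rep) plus many labeled samples from other scanners/protocols.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_other = rng.normal(0.0, 1.0, size=(2000, 5))     # large, slightly shifted training set
    y_other = (X_other[:, 0] + 0.3 > 0).astype(int)
    X_rep = rng.normal(0.2, 1.0, size=(50, 5))         # small target-representative set
    y_rep = (X_rep[:, 0] > 0).astype(int)

    X_train = np.vstack([X_other, X_rep])
    y_train = np.concatenate([y_other, y_rep])

    # Give each representative sample a larger weight than the other samples
    # (the factor 20 is an arbitrary illustrative choice).
    weights = np.concatenate([np.ones(len(y_other)), 20.0 * np.ones(len(y_rep))])

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train, sample_weight=weights)

    X_target = rng.normal(0.2, 1.0, size=(500, 5))     # unseen target-distribution data
    y_target = (X_target[:, 0] > 0).astype(int)
    print("target accuracy:", clf.score(X_target, y_target))
    ```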

  16. Lunar Surface Systems Wet-Bath Design Evaluation

    NASA Technical Reports Server (NTRS)

    Thompson, Shelby; Szabo, Rich; Howard, Robert

    2010-01-01

    The goal of the current evaluation was to examine five different wet-bath architectural design concepts. The primary means of testing the concepts required participants to physically act out a number of functional tasks (e.g., shaving, showering, changing clothes, maintenance) in order to give judgments on the affordance of the volume as based on the design concepts. Each of the concepts was designed in such a way that certain features were exploited - for example, a concept may have a large amount of internal stowage, but a minimal amount of usable space to perform tasks. The results showed that the most preferred concept was one in which stowage and usable space were balanced. This concept allowed for a moderate amount of stowage with some suggested redesign, but would not preclude additional personal items such as clothing. This concept also allowed for a greater distance to be achieved between the toilet and the sink with minimal redesign, which was desirable. Therefore, the all-in-one (i.e., toilet, sink, and shower all occupying a single volume) wet-bath concept seemed to be a viable solution when only a minimal amount of overall volume is available with certain lunar habitat configurations.

  17. The Ethics of Paid Plasma Donation: A Plea for Patient Centeredness.

    PubMed

    Farrugia, Albert; Penrod, Joshua; Bult, Jan M

    2015-12-01

    Plasma protein therapies (PPTs) are a group of essential medicines extracted from human plasma through processes of industrial-scale fractionation. They are used primarily to treat a number of rare, chronic disorders ensuing from inherited or acquired deficiencies of a number of physiologically essential proteins. These disorders include hemophilia A and B, different immunodeficiencies and alpha 1-antitrypsin deficiency. In addition, acute blood loss, burns and sepsis are treated by PPTs. Hence, a population of vulnerable and very sick individuals is dependent on these products. In addition, the continued well-being of large sections of the community, including pregnant women and their children, travelers and workers exposed to infectious risk, is also subject to the availability of these therapies. Their manufacture in adequate amounts requires large volumes of human plasma as the starting material of a complex purification process. Mainstream blood transfusion services run primarily by the not-for-profit sector have attempted to provide this plasma through the separation of blood donations, but have failed to provide sufficient amounts to meet the clinical demand. The collection of plasma from donors willing to commit to the process of plasmapheresis, which is not only time consuming but requires a long-term, continuing commitment, generates much higher amounts of plasma and has been an activity historically separate from the blood transfusion sector and run by commercial companies. These companies now supply two-thirds of the growing global need for these therapies, while the mainstream government-run blood sector continues to supply a shrinking proportion. The private-sector plasmapheresis activity which provides the bulk of treatment products has compensated donors in order to recognize the time and effort required. Recent activities have reignited the debate regarding the ethical and medical aspects of such compensation. In this work, we review the landscape, assess the contributions made by the compensated and non-compensated sectors, and synthesize the likely outcomes for the relevant patient communities of perturbing the current paradigm of compensated plasma donation. We conclude that the current era of "Patient Centeredness" in health care demands the continuation and extension of paid plasma donation.

  18. The NASA Lewis large wind turbine program

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Baldwin, D. H.

    1981-01-01

    The program is directed toward development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generation systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Advances are made by gaining a better understanding of the system design drivers, improvements in the analytical design tools, verification of design methods with operating field data, and the incorporation of new technology and innovative designs. An overview of the program activities is presented and includes results from the first and second generation field machines (Mod-OA, -1, and -2), the design phase of the third generation wind turbine (Mod-5) and the advanced technology projects. Also included is the status of the Department of Interior WTS-4 machine.

  19. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Multiplex titration RT-PCR: rapid determination of gene expression patterns for a large number of genes

    NASA Technical Reports Server (NTRS)

    Nebenfuhr, A.; Lomax, T. L.

    1998-01-01

    We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.

  1. Integrated li-ion ultracapacitor with lead acid battery for vehicular start-stop

    NASA Astrophysics Data System (ADS)

    Manla, Emad

    Advancements in automobile manufacturing aim at improving the driving experience at every level possible. One improvement is increased fuel efficiency via hybridization, which can be achieved by introducing a feature called start-stop. This feature automatically switches the internal combustion engine off when it idles and switches it back on when it is time to resume driving. Start-stop has been shown to reduce fuel consumption and the emission of greenhouse gases into the atmosphere. However, the repeated cranking of the engine puts a large amount of stress on the lead acid battery required to perform the cranking, which effectively reduces its life span. This dissertation presents a hybrid energy storage system assembled from a lead acid battery and an ultracapacitor module connected in parallel. The Li-ion ultracapacitor was tested and modeled to predict its behavior when connected in a system requiring pulsed power such as the one proposed. Both test and simulation results show that the proposed hybrid design significantly reduces the cranking load and stress on the battery: the ultracapacitor module can take the majority of the cranking current. The share of cranking current provided by the ultracapacitor can be easily controlled by adjusting the resistance of the cable connected directly between the ultracapacitor module and the car circuitry.
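
    The current split described above follows from simple current division between two parallel branches: the branch with the lower total resistance (ultracapacitor ESR plus its cable) carries most of the cranking pulse. The sketch below is a first-order illustration with assumed resistance values, not a model of the dissertation's test hardware.

    ```python
    # First-order current division between a lead-acid battery and an
    # ultracapacitor module connected in parallel during engine cranking.
    # All numerical values are illustrative assumptions.

    def branch_currents(i_crank, r_batt, r_ucap_branch):
        """Split a cranking current between two parallel resistive branches."""
        g_batt = 1.0 / r_batt
        g_ucap = 1.0 / r_ucap_branch
        i_batt = i_crank * g_batt / (g_batt + g_ucap)
        i_ucap = i_crank * g_ucap / (g_batt + g_ucap)
        return i_batt, i_ucap

    i_crank = 600.0          # A, assumed cranking pulse
    r_batt = 8e-3            # ohm, assumed battery internal resistance
    for r_cable in (1e-3, 3e-3, 6e-3):
        i_b, i_u = branch_currents(i_crank, r_batt, r_ucap_branch=2e-3 + r_cable)
        print(f"cable {r_cable*1e3:.0f} mohm: battery {i_b:.0f} A, ultracap {i_u:.0f} A")
    ```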

  2. Recovering the Atmospheric Resources of Mars: Updating the MARRS Study

    NASA Astrophysics Data System (ADS)

    England, Christopher; Hrubes, J. Dana

    2006-01-01

    In 2000, a conceptual design study was conducted of a plant that extracts oxygen (O2) directly from the martian atmosphere and makes water and carbon monoxide (CO) as by-products. Updated estimates suggest that the amount of O2 in the atmosphere is about 2.3 times greater than that used as the basis for the 2000 study. In this paper, estimates for O2 and by-products, and for energy and mass requirements based on the higher O2 value, are updated. The basis for the design, termed ``MARRS'' for Mars Atmosphere Resource Recovery System, is the NASA/JSC Mars Reference Mission (MRM) requirement for O2, estimated at 5.8 kg/hr for about 500 sols. The 2000 study based its design on an atmospheric O2 content of 0.13%, the then-accepted value. Analysis now places the O2 content at about 0.3%, reducing the amount of energy and equipment proportionately. The revised estimate of the thermal power to meet MRM requirements for O2 is an average of about 52 kW, seasonally variable. The new mass estimate is 7898 kg, down from 13650 kg. The new estimate of oxygen content correspondingly reduces the amounts of by-products that can be recovered. CO, a primary fuel and propellant precursor, is produced at about 0.2 kg/kg O2. Water, possibly available at about 0.04 kg/kg O2, is believed not recoverable by the MARRS process at this lower level, even seasonally. An equation is provided for the seasonal variation in atmospheric O2 fraction based on Viking pressure measurements. Oxygen varies seasonally between about 0.25% and 0.34%, and this variability affects plant design. While the higher O2 fraction means reduced amounts of by-products from the MARRS process, large amounts of nitrogen (liquid and gas), argon gas and liquid carbon dioxide (CO2) remain available as by-products for use as respiratory agents, refrigerants, propellants, propellant precursors and working fluids for emergency or backup power, transportation, and surface operations such as drilling.

  3. Glasses-free large size high-resolution three-dimensional display based on the projector array

    NASA Astrophysics Data System (ADS)

    Sang, Xinzhu; Wang, Peng; Yu, Xunbo; Zhao, Tianqi; Gao, Xing; Xing, Shujun; Yu, Chongxiu; Xu, Daxiong

    2014-11-01

    Natural three-dimensional (3D) display without eyewear normally requires a huge amount of spatial information to increase the number of views and to provide smooth motion parallax similar to real life. However, the minimum amount of 3D information needed by the eyes should be used, in order to reduce the requirements on display devices and processing time. For 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, one rear-projection and one front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuse screens with sizes above 1.8 m × 1.2 m. The displayed clear depth is larger than 1.5 m. The flexibility of digital recording and reconstruction based on the 3D diffuse screen relieves the limitations of conventional 3D display technologies, enabling fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.

  4. Assessing the accuracy of globe thermometer method in predicting outdoor mean radiant temperature under Malaysia tropical microclimate

    NASA Astrophysics Data System (ADS)

    Khrit, N. G.; Alghoul, M. A.; Sopian, K.; Lahimer, A. A.; Elayeb, O. K.

    2017-11-01

    Assessing outdoor human thermal comfort and urban climate quality requires experimental investigation of microclimatic conditions and their variations in open urban spaces. For this, it is essential to provide quantitative information on air temperature, humidity, wind velocity and mean radiant temperature. These parameters can be quantified directly except mean radiant temperature (Tmrt). The most accurate method of quantifying Tmrt is integral radiation measurement (3-D shortwave and long-wave), which requires expensive radiometer instruments. To overcome this limitation, the well-known globe thermometer method has been suggested for calculating Tmrt. The aim of this study was to assess the possibility of using the indoor globe thermometer method to predict outdoor mean radiant temperature under the Malaysian tropical microclimate. Small and large black-painted copper globes (50 mm and 150 mm) were used to estimate Tmrt, which was compared with the reference Tmrt estimated by the integral radiation method. The results revealed that the globe thermometer method considerably overestimated Tmrt during the middle of the day and slightly underestimated it in the morning and late evening. The difference between the two methods was obvious when the amount of incoming solar radiation was high. The results also showed that the effect of globe size on the estimated Tmrt is mostly small, although the Tmrt estimated by the small globe showed a relatively large amount of scatter caused by rapid changes in radiation and wind speed.
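
    For reference, outdoor Tmrt is commonly derived from globe temperature with a forced-convection expression of the ISO 7726 form; the sketch below implements that widely used formula with assumed input values, and its coefficients may differ from the exact variant applied in this study.

    ```python
    # Estimate mean radiant temperature (Tmrt) from a globe thermometer reading
    # using the common forced-convection formula (ISO 7726 form). Input values
    # are illustrative assumptions.

    def tmrt_from_globe(t_globe_c, t_air_c, wind_ms, diameter_m=0.15, emissivity=0.95):
        """Return Tmrt in deg C from globe temperature, air temperature and wind speed."""
        tg_k = t_globe_c + 273.15
        conv = (1.10e8 * wind_ms**0.6) / (emissivity * diameter_m**0.4)
        return (tg_k**4 + conv * (t_globe_c - t_air_c)) ** 0.25 - 273.15

    # 150 mm and 50 mm globes under the same assumed midday conditions:
    for d in (0.15, 0.05):
        print(f"D = {d*1000:.0f} mm -> Tmrt = {tmrt_from_globe(42.0, 33.0, 1.5, d):.1f} C")
    ```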

  5. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
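
    As a concrete illustration of the equilibrium assumption, a simple wall model takes the LES velocity at the first off-wall grid point (assumed to lie in the logarithmic region), inverts the log law for the friction velocity, and returns the wall stress as the boundary condition. The sketch below is a generic equilibrium log-law model with standard constants, not the specific models developed in this work.

    ```python
    # Minimal equilibrium wall model: given the LES velocity U at wall distance y
    # in the log region, solve U/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau,
    # then return the wall shear stress tau_w = rho * u_tau**2.
    import math

    KAPPA, B = 0.41, 5.2   # standard log-law constants

    def wall_stress(u_les, y, nu=1.5e-5, rho=1.2, tol=1e-10):
        u_tau = 0.05 * u_les                      # initial guess
        for _ in range(100):                      # fixed-point iteration
            u_tau_new = u_les / ((1.0 / KAPPA) * math.log(y * u_tau / nu) + B)
            if abs(u_tau_new - u_tau) < tol:
                break
            u_tau = u_tau_new
        return rho * u_tau**2, u_tau

    tau_w, u_tau = wall_stress(u_les=10.0, y=0.01)   # illustrative values
    print(f"u_tau = {u_tau:.3f} m/s, tau_w = {tau_w:.3f} Pa, y+ = {u_tau*0.01/1.5e-5:.0f}")
    ```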

  6. Responses of arthropods to large-scale manipulations of dead wood in loblolly pine stands of the southeastern United States.

    PubMed

    Ulyshen, Michael D; Hanula, James L

    2009-08-01

    Large-scale experimental manipulations of dead wood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large number of snags were added, or (4) no coarse woody debris was added or removed. The target taxa were ground-dwelling arthropods, sampled by pitfall traps, and saproxylic beetles (i.e., dependent on dead wood), sampled by flight intercept traps and emergence traps. There were no differences in total ground-dwelling arthropod abundance, richness, diversity, or composition among treatments. Only the results for ground beetles (Carabidae), which were more species rich and diverse in log input plots, supported our prediction that ground-dwelling arthropods would benefit from additions of dead wood. There were also no differences in saproxylic beetle abundance, richness, diversity, or composition among treatments. The findings from this study are encouraging in that arthropods seem less sensitive than expected to manipulations of dead wood in managed pine forests of the southeastern United States. Based on our results, we cannot recommend inputting large amounts of dead wood for conservation purposes, given the expense of such measures. However, the persistence of saproxylic beetles requires that an adequate amount of dead wood is available in the landscape, and we recommend that dead wood be retained whenever possible in managed pine forests.

  7. A highly efficient method for extracting next-generation sequencing quality RNA from adipose tissue of recalcitrant animal species.

    PubMed

    Sharma, Davinder; Golla, Naresh; Singh, Dheer; Onteru, Suneel K

    2018-03-01

    The next-generation sequencing (NGS) based RNA sequencing (RNA-Seq) and transcriptome profiling offers an opportunity to unveil complex biological processes. Successful RNA-Seq and transcriptome profiling requires a large amount of high-quality RNA. However, NGS-quality RNA isolation is extremely difficult from recalcitrant adipose tissue (AT) with high lipid content and low cell numbers. Further, the amount and biochemical composition of AT lipid varies depending upon the animal species which can pose different degree of resistance to RNA extraction. Currently available approaches may work effectively in one species but can be almost unproductive in another species. Herein, we report a two step protocol for the extraction of NGS quality RNA from AT across a broad range of animal species. © 2017 Wiley Periodicals, Inc.

  8. Correlative Microscopy of Neutron-Irradiated Materials

    DOE PAGES

    Briggs, Samuel A.; Sridharan, Kumar; Field, Kevin G.

    2016-12-31

    A nuclear reactor core is a highly demanding environment that presents several unique challenges for materials performance. Materials in modern light water reactor (LWR) cores must survive several decades in high-temperature (300-350°C) aqueous corrosion conditions while being subject to large amounts of high-energy neutron irradiation. Next-generation reactor designs seek to use more corrosive coolants (e.g., molten salts) and even greater temperatures and neutron doses. The high amounts of disorder and the unique crystallographic defects and microchemical segregation effects induced by radiation inevitably lead to property degradation of materials. Maintaining structural integrity and safety margins over the course of the reactor's service life thus necessitates the ability to understand and predict these degradation phenomena in order to develop new, radiation-tolerant materials that can maintain the required performance in these extreme conditions.

  9. Small-volume potentiometric titrations: EPR investigations of Fe-S cluster N2 in mitochondrial complex I.

    PubMed

    Wright, John J; Salvadori, Enrico; Bridges, Hannah R; Hirst, Judy; Roessler, Maxie M

    2016-09-01

    EPR-based potentiometric titrations are a well-established method for determining the reduction potentials of cofactors in large and complex proteins with at least one EPR-active state. However, such titrations require large amounts of protein. Here, we report a new method that requires an order of magnitude less protein than previously described methods, and that provides EPR samples suitable for measurements at both X- and Q-band microwave frequencies. We demonstrate our method by determining the reduction potential of the terminal [4Fe-4S] cluster (N2) in the intramolecular electron-transfer relay in mammalian respiratory complex I. The value determined by our method, Em7 = -158 mV, is precise, reproducible, and consistent with previously reported values. Our small-volume potentiometric titration method will facilitate detailed investigations of EPR-active centres in non-abundant and refractory proteins that can only be prepared in small quantities. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
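
    In such titrations the reduction potential is typically obtained by fitting the normalized EPR signal (fraction of cluster in the EPR-active state) against the poised potential with a one-electron Nernst curve. The sketch below shows that fit on synthetic data; it is only a generic illustration, not the authors' analysis code.

    ```python
    # Fit a one-electron Nernst curve to a potentiometric titration:
    # fraction_reduced(E) = 1 / (1 + exp[(E - Em) * F / (R T)]).
    import numpy as np
    from scipy.optimize import curve_fit

    F, R, T = 96485.0, 8.314, 298.0

    def nernst(e_mv, em_mv):
        return 1.0 / (1.0 + np.exp((e_mv - em_mv) * 1e-3 * F / (R * T)))

    # Synthetic titration points (potential in mV, normalized EPR intensity).
    e = np.array([-300.0, -260.0, -220.0, -180.0, -140.0, -100.0, -60.0, -20.0])
    frac = nernst(e, -158.0) + np.random.default_rng(1).normal(0, 0.02, e.size)

    popt, pcov = curve_fit(nernst, e, frac, p0=[-150.0])
    print(f"fitted Em = {popt[0]:.0f} mV (synthetic data generated with -158 mV)")
    ```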

  10. Preparing for in situ processing on upcoming leading-edge supercomputers

    DOE PAGES

    Kress, James; Churchill, Randy Michael; Klasky, Scott; ...

    2016-10-01

    High performance computing applications are producing increasingly large amounts of data and placing enormous stress on current capabilities for traditional post-hoc visualization techniques. Because of the growing compute and I/O imbalance, data reductions, including in situ visualization, are required. These reduced data are used for analysis and visualization in a variety of different ways. Many of the visualization and analysis requirements are known a priori, but when they are not, scientists are dependent on the reduced data to accurately represent the simulation in post hoc analysis. The contribution of this paper is a description of the directions we are pursuing to assist a large-scale fusion simulation code in succeeding on the next generation of supercomputers. These directions include the role of in situ processing for performing data reductions, as well as the tradeoffs between data size and data integrity within the context of complex operations in a typical scientific workflow.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berra, P.B.; Chung, S.M.; Hachem, N.I.

    This article presents techniques for managing a very large data/knowledge base to support multiple inference mechanisms for logic programming. Because evaluation of goals can require accessing data from the extensional database, or EDB, in very general ways, one must often resort to indexing on all fields of the extensional database facts. This presents a formidable management problem in that the index data may be larger than the EDB itself. The problem becomes even more serious in the case of very large data/knowledge bases (hundreds of gigabytes), since considerably more hardware will be required to process and store the index data. In order to reduce the amount of index data considerably without losing generality, the authors form a surrogate file, which is a hashing transformation of the facts. Superimposed code words (SCW), concatenated code words (CCW), and transformed inverted lists (TIL) are possible structures for the surrogate file. Since these transformations are quite regular and compact, the authors consider possible computer architectures for the processing of the surrogate file.
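
    The superimposed code word idea can be summarized briefly: each (field, value) pair of a fact is hashed to a few bit positions in a fixed-width word, the per-field codes are OR-ed into one code word per fact, and a partial-match query scans the much smaller surrogate file for words containing all of the query bits, with surviving candidates verified against the EDB. The sketch below is a generic illustration of this scheme with assumed parameters, not the architecture discussed in the article.

    ```python
    # Sketch of a superimposed code word (SCW) surrogate file for partial-match
    # retrieval over facts with several attributes. Parameters are illustrative.
    import hashlib

    WORD_BITS = 64       # width of each code word
    BITS_PER_FIELD = 3   # bits set per (field, value) pair

    def field_code(field, value):
        """Hash one (field, value) pair to a small set of bit positions."""
        code = 0
        for i in range(BITS_PER_FIELD):
            h = hashlib.sha1(f"{field}:{value}:{i}".encode()).digest()
            code |= 1 << (int.from_bytes(h[:4], "big") % WORD_BITS)
        return code

    def scw(fact):
        """Superimpose (OR) the codes of all fields of a fact."""
        word = 0
        for field, value in fact.items():
            word |= field_code(field, value)
        return word

    facts = [
        {"pred": "parent", "arg1": "tom", "arg2": "bob"},
        {"pred": "parent", "arg1": "bob", "arg2": "ann"},
        {"pred": "likes", "arg1": "ann", "arg2": "bob"},
    ]
    surrogate_file = [scw(f) for f in facts]

    # Partial-match query: all facts with pred = parent and arg2 = ann.
    query = field_code("pred", "parent") | field_code("arg2", "ann")
    candidates = [facts[i] for i, w in enumerate(surrogate_file) if w & query == query]
    print(candidates)   # candidates may contain false drops; verify against the EDB
    ```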

  12. Marine envenomations.

    PubMed

    Berling, Ingrid; Isbister, Geoffrey

    2015-01-01

    Marine stings are common but most are minor and do not require medical intervention. Severe and systemic marine envenoming is uncommon, but includes box jellyfish stings, Irukandji syndrome, major stingray trauma and blue-ringed octopus envenoming. Almost all marine injuries are caused by jellyfish stings, and penetrating injuries from spiny fish, stingrays or sea urchins. This article describes the presentation and management of marine envenomations and injuries that may occur in Australia. First aid for jellyfish includes tentacle removal, application of vinegar for box jellyfish, and hot water immersion (45°C for 20 min) for bluebottle jellyfish stings. Basic life support is essential for severe marine envenomings that result in cardiac collapse or paralysis. Irukandji syndrome causes severe generalised pain, autonomic excess and minimal local pain, which may require large amounts of analgesia, and, uncommonly, myocardial depression and pulmonary oedema occur. Penetrating marine injuries can cause significant trauma depending on location of the injury. Large and unclean wounds may have delayed healing and secondary infection if not adequately irrigated, debrided and observed.

  13. The use of electrochemistry for the synthesis of 17 alpha-hydroxyprogesterone by a fusion protein containing P450c17.

    PubMed

    Estabrook, R W; Shet, M S; Faulkner, K; Fisher, C W

    1996-11-01

    A method has been developed for the commercial application of the unique oxygen chemistry catalyzed by various cytochrome P450s. This is illustrated here for the synthesis of hydroxylated steroids. This method requires the preparation of large amounts of enzymatically functional P450 proteins that can serve as catalysts and a technique for providing electrons at an economically acceptable cost. To generate large amounts of enzymatically active recombinant P450s we have engineered the cDNAs for various P450s, including bovine adrenal P450c17, by linking them to a modified cDNA for rat NADPH-P450 reductase and placing them in the plasmid pCWori+. Transformation of E. coli results in the high level expression of an enzymatically active protein that can be easily purified by affinity chromatography. Incubation of the purified enzyme with steroid in a reaction vessel containing a platinum electrode and a Ag/AgCl electrode couple poised at -650 mV, together with the electromotively active redox mediator, cobalt sepulchrate, results in the 17 alpha-hydroxylation of progesterone at rates as high as 25 nmoles of progesterone hydroxylated/min/nmole of P450. Thus, high concentrations of hydroxylated steroids can be produced with incubation conditions of hours duration without the use of costly NADPH. Similar experiments have been carried out for the generation of the 6 beta-hydroxylation product of testosterone (using a fusion protein containing human P450 3A4). It is apparent that this method is applicable to many other P450 catalyzed reactions for the synthesis of large amounts of hydroxylated steroid metabolites. The electrochemical system is also applicable to drug discovery studies for the characterization of drug metabolites.

  14. Distribution and biokinetic analysis of 210Pb and 210Po in poultry due to ingestion of dicalcium phosphate.

    PubMed

    Casacuberta, N; Traversa, F L; Masqué, P; Garcia-Orellana, J; Anguita, M; Gasa, J; Garcia-Tenorio, R

    2010-09-15

    Dicalcium phosphate (DCP) is used as a calcium supplement for food producing animals (i.e., cattle, poultry and pig). When DCP is produced via wet acid digestion of the phosphate rock and depending on the acid used in the industrial process, the final product can result in enhanced (210)Pb and (210)Po specific activities (approximately 2000 Bq.kg(-1)). Both (210)Pb and (210)Po are of great interest because their contribution to the dose received by ingestion is potentially large. The aims of this work are to examine the accumulation of (210)Pb and (210)Po in chicken tissues during the first 42 days of life and to build a suitable single-compartment biokinetic model to understand the behavior of both radionuclides within the entire animal using the experimental results. Three commercial corn-soybean-based diets containing different amounts and sources of DCP were fed to broilers during a period of 42 days. The results show that diets containing enhanced concentrations of (210)Pb and (210)Po lead to larger specific accumulation in broiler tissues compared to the blank diet. Radionuclides do not accumulate homogeneously within the animal body: (210)Pb follows the calcium pathways to some extent and accumulates largely in bones, while (210)Po accumulates to a large extent in liver and kidneys. However, the total amount of radionuclide accumulation in tissues is small compared to the amounts excreted in feces. The single-compartment non-linear biokinetic model proposed here for (210)Pb and (210)Po in the whole animal takes into account the size evolution and is self-consistent in that no fitting parameterization of intake and excretions rates is required. Copyright 2010 Elsevier B.V. All rights reserved.
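
    A single-compartment model of the general kind described treats the whole animal as one pool with first-order loss, so the body burden evolves as dA/dt = f*I(t) - k*A(t). The sketch below integrates that generic equation with assumed parameter values; it does not reproduce the paper's non-linear, growth-dependent formulation.

    ```python
    # Generic single-compartment biokinetic sketch: dA/dt = f * I(t) - k * A(t),
    # where I is the daily ingested activity, f the fractional uptake and k the
    # first-order loss rate. All parameter values are assumptions for illustration.
    import numpy as np

    days = np.arange(0, 43)
    intake_bq_per_day = 50.0      # ingested activity per day (assumed)
    f_uptake = 0.1                # fraction retained in the body (assumed)
    k_loss = np.log(2) / 30.0     # loss rate for an assumed 30-day biological half-life

    A = np.zeros_like(days, dtype=float)
    for t in range(1, len(days)):
        dA = f_uptake * intake_bq_per_day - k_loss * A[t - 1]
        A[t] = A[t - 1] + dA      # explicit Euler step, dt = 1 day

    print(f"body burden at day 42: {A[-1]:.1f} Bq "
          f"(steady state would be {f_uptake * intake_bq_per_day / k_loss:.1f} Bq)")
    ```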

  15. Saving Salmon Through Advances in Fluvial Remote Sensing: Applying the Optimal Band Ratio Analysis (OBRA) for Bathymetric Mapping of Over 250 km of River Channel and Habitat Classification

    NASA Astrophysics Data System (ADS)

    Richardson, R.; Legleiter, C. J.; Harrison, L.

    2015-12-01

    Salmonids are threatened with extinction across the world by the fragmentation of riverine ecosystems by dams and diversions. In California, efforts to expand the range of spawnable habitat for native salmon by transporting fish around reservoirs are a potentially species-saving idea, but strong scientific evidence of the amount of high-quality habitat is required to make these difficult management decisions. Remote sensing has long been used in fluvial settings to identify physical parameters that drive the quality of aquatic habitat; however, the true strength of remote sensing, its ability to cover large spatial extents, has not been applied at resolutions relevant to salmonids. This project utilizes hyperspectral data covering over 250 km of the Tuolumne and Merced Rivers to extract depth and bed slope from the wetted channel, and NIR LiDAR for the surrounding topography. Optimal Band Ratio Analysis (OBRA) has proven to be an effective tool for creating bathymetric maps of river channels in ideal settings with clear water, high bottom reflectance, and depths of less than 3 meters over short distances. Results from this study show that OBRA can be applied over larger riverscapes at high resolution (0.5 m). The depth and bed slope estimates are used to classify habitat units that are crucial to quantifying the quality and amount of habitat in these rivers, which once produced large populations of native salmonids. As more managers look to expand habitat for these threatened species, the tools developed here will be cost-effective over the large extents that salmon migrate to spawn.
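
    In its basic form, OBRA regresses field-measured depths against the log-transformed ratio of every pair of spectral bands and keeps the pair giving the highest R². The sketch below shows that selection step on synthetic data; it is a generic illustration of the published algorithm, not the processing chain used in this project.

    ```python
    # Optimal Band Ratio Analysis (OBRA) sketch: for every band pair (i, j),
    # regress depth d against X = ln(R_i / R_j) and keep the pair with the best R^2.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pix, n_bands = 500, 6
    depth = rng.uniform(0.2, 3.0, n_pix)                    # calibration depths (m)
    refl = rng.uniform(0.05, 0.4, (n_pix, n_bands))         # synthetic reflectance
    refl[:, 1] *= np.exp(-0.8 * depth)                      # band 1 attenuates with depth

    best = (None, -np.inf, None)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(refl[:, i] / refl[:, j])
            slope, intercept = np.polyfit(x, depth, 1)
            pred = slope * x + intercept
            r2 = 1.0 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
            if r2 > best[1]:
                best = ((i, j), r2, (slope, intercept))

    print("optimal band pair:", best[0], "R^2 =", round(best[1], 3))
    ```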

  16. Colony Size Affects the Efficacy of Bait Containing Chlorfluazuron Against the Fungus-Growing Termite Macrotermes gilvus (Blattodea: Termitidae).

    PubMed

    Lee, Ching-Chen; Neoh, Kok-Boon; Lee, Chow-Yang

    2014-12-01

    The efficacy of chitin synthesis inhibitors (CSIs) against fungus-growing termites is known to vary. In this study, 0.1% chlorfluazuron (CFZ) cellulose bait was tested against medium and large field colonies of Macrotermes gilvus (Hagen). The termite mounds were dissected to determine the health of the colony. Individual termites (i.e., workers and larvae) and fungus combs were subjected to gas chromatography-mass spectrometry (GC-MS) analysis to detect the presence of CFZ. In this study, 540.0 ± 25.8 g (equivalent to 540.0 ± 25.8 mg active ingredient) and 680.0 ± 49.0 g (680.0 ± 49.0 mg active ingredient) of bait matrix were removed by the medium- and large-sized colonies, respectively, after baiting. All treated medium-sized colonies were moribund: the dead termites were scattered in the mound, larvae were absent, population size had decreased by 90%, and the queens appeared unhealthy. In contrast, no or limited effects were found in large-sized colonies. Only trace amounts of CFZ were detected in workers, larvae, and fungus combs, and the population of large-sized colonies had declined by only up to 40%. This might be owing to the presence of a large amount of basidiomycete fungus and a drastic decrease in CFZ content per unit of fungus comb (a main food source of larvae) in the large-sized colonies, which reduced the toxic effect and meant a longer time was required to accumulate a lethal dose in larvae. Nevertheless, we do not deny the possibility of CSI bait eliminating or suppressing this higher termite if the test colonies could pick up an adequate lethal dose through the installation of more bait stations and a prolonged baiting period. © 2014 Entomological Society of America.

  17. Theoretical comparison of maser materials for a 32-GHz maser amplifier

    NASA Technical Reports Server (NTRS)

    Lyons, James R.

    1988-01-01

    The computational results of a comparison of maser materials for a 32 GHz maser amplifier are presented. The search for a better maser material is prompted by the relatively large amount of pump power required to sustain a population inversion in ruby at frequencies on the order of 30 GHz and above. The general requirements of a maser material and the specific problems with ruby are outlined. The spin Hamiltonian is used to calculate energy levels and transition probabilities for ruby and twelve other materials. A table is compiled of several attractive operating points for each of the materials analyzed. All the materials analyzed possess operating points that could be superior to ruby. To complete the evaluation of the materials, measurements of inversion ratio and pump power requirements must be made in the future.
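
    The energy-level calculation referred to above amounts to diagonalizing a spin Hamiltonian, here written in the simple form H/h = g(muB/h)(B·S) + D[Sz² - S(S+1)/3]. The sketch below does this numerically for an S = 3/2 ion with illustrative, roughly ruby-like g and D values; it is not the materials-comparison code used in the study.

    ```python
    # Diagonalize a simple S = 3/2 spin Hamiltonian with the field in the x-z plane:
    #   H/h = g (muB/h) B (sin(theta) Sx + cos(theta) Sz) + D [Sz^2 - S(S+1)/3]
    # and print transition frequencies. g and D are illustrative values only.
    import numpy as np

    S = 1.5
    MU_B_OVER_H = 13.996       # GHz per tesla
    g, D = 1.98, -5.75         # D in GHz (roughly ruby-like, for illustration)

    m = np.arange(S, -S - 1, -1)                    # m = 3/2, 1/2, -1/2, -3/2
    Sz = np.diag(m)
    Sp = np.zeros((len(m), len(m)))                 # raising operator S+
    for k in range(1, len(m)):
        Sp[k - 1, k] = np.sqrt(S * (S + 1) - m[k] * (m[k] + 1))
    Sx = (Sp + Sp.T) / 2.0

    def levels_ghz(b_tesla, theta_deg):
        """Energy levels (GHz) for a field of magnitude b at angle theta to the z axis."""
        th = np.radians(theta_deg)
        H = g * MU_B_OVER_H * b_tesla * (np.sin(th) * Sx + np.cos(th) * Sz) \
            + D * (Sz @ Sz - S * (S + 1) / 3.0 * np.eye(len(m)))
        return np.sort(np.linalg.eigvalsh(H))

    E = levels_ghz(b_tesla=1.0, theta_deg=54.7)
    print("levels (GHz):", np.round(E, 2))
    print("adjacent transitions (GHz):", np.round(np.diff(E), 2))
    ```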

  18. [A large-scale accident in Alpine terrain].

    PubMed

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, time pressure, compounded by adverse weather conditions or darkness, is enormous. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  19. 45 CFR 150.323 - Determining the amount of penalty-other matters as justice may require.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REQUIREMENTS RELATING TO HEALTH CARE ACCESS CMS ENFORCEMENT IN GROUP AND INDIVIDUAL INSURANCE MARKETS CMS... Determining the amount of penalty—other matters as justice may require. CMS may take into account other...

  20. Limits on soft X-ray flux from distant emission regions

    NASA Technical Reports Server (NTRS)

    Burrows, D. N.; Mccammon, D.; Sanders, W. T.; Kraushaar, W. L.

    1984-01-01

    The all-sky soft X-ray data of McCammon et al. and the new N_H survey of Stark et al. were used to place limits on the amount of the soft X-ray diffuse background that can originate beyond the neutral gas of the galactic disk. The X-ray data for two regions of the sky near the galactic poles are shown to be uncorrelated with 21 cm column densities. Most of the observed X-ray flux must therefore originate on the near side of the most distant neutral gas. The results from these regions are consistent with X-ray emission from a locally isotropic, unabsorbed source, but require large variations in the emission of the local region over large angular scales.

  1. A successful strategy for the recovering of active P21, an insoluble recombinant protein of Trypanosoma cruzi

    NASA Astrophysics Data System (ADS)

    Santos, Marlus Alves Dos; Teixeira, Francesco Brugnera; Moreira, Heline Hellen Teixeira; Rodrigues, Adele Aud; Machado, Fabrício Castro; Clemente, Tatiana Mordente; Brigido, Paula Cristina; Silva, Rebecca Tavares E.; Purcino, Cecílio; Gomes, Rafael Gonçalves Barbosa; Bahia, Diana; Mortara, Renato Arruda; Munte, Claudia Elisabeth; Horjales, Eduardo; da Silva, Claudio Vieira

    2014-03-01

    Structural studies of proteins normally require large quantities of pure material that can only be obtained through heterologous expression systems and recombinant techniques. In these procedures, large amounts of expressed protein are often found in the insoluble fraction, making protein purification from the soluble fraction inefficient, laborious, and costly. Usually, protein refolding is avoided due to a lack of experimental assays that can validate correct folding and compare the conformational population to that of the soluble fraction. Herein, we propose a validation method using simple and rapid 1D 1H nuclear magnetic resonance (NMR) spectra that can efficiently compare protein samples, including individual information on the environment of each proton in the structure.

  2. Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts

    NASA Astrophysics Data System (ADS)

    Ravi, S.; Lobell, D. B.; Field, C. B.

    2012-12-01

    Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with a combination of high solar radiation and the availability of large areas unusable for crop production, are ideal locations for large-scale solar installations. For efficient power generation, solar infrastructures require large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use these moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements for large solar infrastructures in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems, the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted a detailed life cycle analysis for these coupled agave biofuel and solar energy systems to explore the tradeoffs and synergies in the context of energy input-output, water use and carbon emissions.

  3. Development of large, horizontal-axis wind turbines

    NASA Technical Reports Server (NTRS)

    Baldwin, D. H.; Kennard, J.

    1985-01-01

    A program to develop large, horizontal-axis wind turbines is discussed. The program is directed toward developing the technology for safe, reliable, environmentally acceptable large wind turbines that can generate a significant amount of electricity at costs competitive with those of conventional electricity-generating systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Several ongoing projects in large-wind-turbine development are directed toward meeting the technology requirements for utility applications. The machines based on first-generation technology (Mod-OA and Mod-1) successfully completed their planned periods of experimental operation in June 1982. The second-generation machines (Mod-2) are in operation at selected utility sites. A third-generation machine (Mod-5) is under contract. Erection and initial operation of the Mod-5 in Hawaii should take place in 1986. Each successive generation of technology increased reliability and energy capture while reducing the cost of electricity. These advances are being made by gaining a better understanding of the system-design drivers, improving the analytical design tools, verifying design methods with operating field data, and incorporating new technology and innovative designs. Information is given on the results from the first- and second-generation machines (Mod-OA, -1, and -2), the status of the Department of Interior WTS-4 machine, and the status of the third-generation wind turbine (Mod-5).

  4. Platelets from patients with the Quebec platelet disorder contain and secrete abnormal amounts of urokinase-type plasminogen activator.

    PubMed

    Kahr, W H; Zheng, S; Sheth, P M; Pai, M; Cowie, A; Bouchard, M; Podor, T J; Rivard, G E; Hayward, C P

    2001-07-15

    The Quebec platelet disorder (QPD) is an autosomal dominant platelet disorder associated with delayed bleeding and alpha-granule protein degradation. The degradation of alpha-granule, but not plasma, fibrinogen in patients with the QPD led to the investigation of their platelets for a protease defect. Unlike normal platelets, QPD platelets contained large amounts of fibrinolytic serine proteases that had properties of plasminogen activators. Western blot analysis, zymography, and immunodepletion experiments indicated this was because QPD platelets contained large amounts of urokinase-type plasminogen activator (u-PA) within a secretory compartment. u-PA antigen was not increased in all QPD plasmas, whereas it was increased more than 100-fold in QPD platelets (P <.00009), which contained increased u-PA messenger RNA. Although QPD platelets contained 2-fold more plasminogen activator inhibitor 1 (PAI-1) (P <.0008) and 100-fold greater u-PA-PAI-1 complexes (P <.0002) than normal platelets, they contained excess u-PA activity, predominantly in the two-chain form (tcu-PA), which required additional PAI-1 for full inhibition. There was associated proteolysis of plasminogen in QPD platelets, to forms that comigrated with plasmin. When similar amounts of tcu-PA were incubated with normal platelet secretory proteins, many alpha-granule proteins were proteolyzed to forms that resembled degraded QPD platelet proteins. These data implicate u-PA in the pathogenesis of alpha-granule protein degradation in the QPD. Although patients with the QPD have normal to increased u-PA levels in their plasma, without evidence of systemic fibrinogenolysis, their increased platelet u-PA could contribute to bleeding by accelerating fibrinolysis within the hemostatic plug. QPD is the only inherited bleeding disorder in humans known to be associated with increased u-PA.

  5. Resource Limitation Issues In Real-Time Intelligent Systems

    NASA Astrophysics Data System (ADS)

    Green, Peter E.

    1986-03-01

    This paper examines resource limitation problems that can occur in embedded AI systems which have to run in real-time. It does this by examining two case studies. The first is a system which acoustically tracks low-flying aircraft and has the problem of interpreting a high volume of often ambiguous input data to produce a model of the system's external world. The second is a robotics problem in which the controller for a robot arm has to dynamically plan the order in which to pick up pieces from a conveyer belt and sort them into bins. In this case the system starts with a continuously changing model of its environment and has to select which action to perform next. This latter case emphasizes the issues in designing a system which must operate in an uncertain and rapidly changing environment. The first system uses a distributed HEARSAY methodology running on multiple processors. It is shown, in this case, how the combinatorial growth of possible interpretations of the input data can require large and unpredictable amounts of computer resources for data interpretation. Techniques are presented which achieve real-time operation by limiting the combinatorial growth of alternate hypotheses and processing those hypotheses that are most likely to lead to meaningful interpretation of the input data. The second system uses a decision tree approach to generate and evaluate possible plans of action. It is shown how the combinatorial growth of possible alternate plans can, as in the previous case, require large and unpredictable amounts of computer time to evaluate and select from amongst the alternatives. The use of approximate decisions to limit the amount of computer time needed is discussed. The concept of using incremental evidence is then introduced, and it is shown how this can be used as the basis of systems that can combine heuristic and approximate evidence in making real-time decisions.
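
    The hypothesis-limiting strategy described for the first system can be illustrated compactly: after each interpretation step only the k best-rated hypotheses are retained and extended, which bounds the otherwise combinatorial growth at the cost of possibly discarding the correct interpretation. The sketch below is a generic beam-style pruning loop with a made-up scoring function, not the distributed HEARSAY implementation.

    ```python
    # Generic sketch of bounding combinatorial hypothesis growth: keep only the
    # BEAM_WIDTH best-scoring hypotheses after each interpretation step.
    import heapq

    BEAM_WIDTH = 8          # resource limit on retained hypotheses (illustrative)

    def extend(hypothesis, observation):
        """Generate scored extensions of one hypothesis for a new observation."""
        score, track = hypothesis
        # Hypothetical association step: each observation could extend the track
        # in several ways, each with a made-up likelihood increment.
        return [(score + delta, track + [(observation, delta)])
                for delta in (0.9, 0.5, 0.1)]

    def interpret(observations):
        hypotheses = [(0.0, [])]                      # (score, partial interpretation)
        for obs in observations:
            candidates = [h for hyp in hypotheses for h in extend(hyp, obs)]
            # Prune: keep only the BEAM_WIDTH best candidates to bound growth.
            hypotheses = heapq.nlargest(BEAM_WIDTH, candidates, key=lambda h: h[0])
        return hypotheses[0]

    best_score, best_track = interpret(range(10))
    print(f"best score {best_score:.1f} over {len(best_track)} observations; "
          f"at most {BEAM_WIDTH} hypotheses kept per step")
    ```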

  6. Using information theory to identify redundancy in common laboratory tests in the intensive care unit.

    PubMed

    Lee, Joon; Maslove, David M

    2015-07-31

    Clinical workflow is infused with large quantities of data, particularly in areas with enhanced monitoring such as the Intensive Care Unit (ICU). Information theory can quantify the expected amounts of total and redundant information contained in a given clinical data type, and as such has the potential to inform clinicians on how to manage the vast volumes of data they are required to analyze in their daily practice. The objective of this proof-of-concept study was to quantify the amounts of redundant information associated with common ICU lab tests. We analyzed the information content of 11 laboratory test results from 29,149 adult ICU admissions in the MIMIC II database. Information theory was applied to quantify the expected amount of redundant information both between lab values from the same ICU day, and between consecutive ICU days. Most lab values showed a decreasing trend over time in the expected amount of novel information they contained. Platelet, blood urea nitrogen (BUN), and creatinine measurements exhibited the most amount of redundant information on days 2 and 3 compared to the previous day. The creatinine-BUN and sodium-chloride pairs had the most redundancy. Information theory can help identify and discourage unnecessary testing and bloodwork, and can in general be a useful data analytic technique for many medical specialties that deal with information overload.
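
    For readers unfamiliar with the quantities involved, the redundancy between two discretized lab values can be measured by their mutual information, I(X;Y) = H(X) + H(Y) - H(X,Y), i.e., the portion of one test's information already contained in the other. The sketch below computes this on synthetic, correlated values; it is an illustration of the calculation, not the MIMIC II analysis itself.

    ```python
    # Mutual information between two discretized lab tests as a redundancy measure:
    # I(X;Y) = H(X) + H(Y) - H(X,Y), in bits.
    import numpy as np

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def redundancy_bits(x, y, bins=10):
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        h_x = entropy(joint.sum(axis=1))
        h_y = entropy(joint.sum(axis=0))
        h_xy = entropy(joint.ravel())
        return h_x + h_y - h_xy, h_x, h_y

    # Synthetic example: a "creatinine"-like value and a correlated "BUN"-like value.
    rng = np.random.default_rng(3)
    creatinine = rng.lognormal(mean=0.0, sigma=0.3, size=5000)
    bun = 10.0 * creatinine + rng.normal(0, 2.0, size=5000)

    mi, h_x, h_y = redundancy_bits(creatinine, bun)
    print(f"H(creatinine) = {h_x:.2f} bits, H(BUN) = {h_y:.2f} bits, shared I = {mi:.2f} bits")
    ```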

  7. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
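
    The master/worker pattern described (one management process handing subtasks to workers that each launch an unmodified serial program) can be sketched with mpi4py and subprocess as below. This is only an illustration of the pattern under stated assumptions, not the mpiWrapper C++ source; the task file names and the command line are placeholders.

    ```python
    # Minimal master/worker sketch of running a non-parallel program many times
    # under MPI (illustration only; mpiWrapper itself is a separate C++ tool).
    # Run with, e.g.:  mpiexec -n 8 python wrapper_sketch.py
    import subprocess
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    TASKS = [f"input_{i:04d}.fasta" for i in range(100)]     # placeholder subtask inputs

    if rank == 0:                                            # master: hand out tasks
        status = MPI.Status()
        next_task = 0
        finished = 0
        while finished < size - 1:
            comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
            worker = status.Get_source()
            if next_task < len(TASKS):
                comm.send(TASKS[next_task], dest=worker, tag=1)
                next_task += 1
            else:
                comm.send(None, dest=worker, tag=0)          # no work left: stop worker
                finished += 1
    else:                                                    # worker: run the serial program
        while True:
            comm.send("ready", dest=0, tag=2)
            task = comm.recv(source=0)
            if task is None:
                break
            # Placeholder command line for a conventional (non-parallel) Linux tool.
            subprocess.run(["./serial_tool", task], check=False)
    ```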

  8. Resolving the tips of the tree of life: How much mitochondrial data do we need?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonett, Ronald M.; Macey, J. Robert; Boore, Jeffrey L.

    2005-04-29

    Mitochondrial (mt) DNA sequences are used extensively to reconstruct evolutionary relationships among recently diverged animals, and have constituted the most widely used markers for species- and generic-level relationships for the last decade or more. However, most studies to date have employed relatively small portions of the mt-genome. In contrast, complete mt-genomes primarily have been used to investigate deep divergences, including several studies of the amount of mt sequence necessary to recover ancient relationships. We sequenced and analyzed 24 complete mt-genomes from a group of salamander species exhibiting divergences typical of those in many species-level studies. We present the first comprehensive investigation of the amount of mt sequence data necessary to consistently recover the mt-genome tree at this level, using parsimony and Bayesian methods. Both methods of phylogenetic analysis revealed extremely similar results. A surprising number of well supported, yet conflicting, relationships were found in trees based on fragments less than ~2000 nucleotides (nt), typical of the vast majority of the thousands of mt-based studies published to date. Large amounts of data (11,500+ nt) were necessary to consistently recover the whole mt-genome tree. Some relationships consistently were recovered with fragments of all sizes, but many nodes required the majority of the mt-genome to stabilize, particularly those associated with short internal branches. Although moderate amounts of data (2000-3000 nt) were adequate to recover mt-based relationships for which most nodes were congruent with the whole mt-genome tree, many thousands of nucleotides were necessary to resolve rapid bursts of evolution. Recent advances in genomics are making collection of large amounts of sequence data highly feasible, and our results provide the basis for comparative studies of other closely related groups to optimize mt sequence sampling and phylogenetic resolution at the ''tips'' of the Tree of Life.

  9. Radiation Exposure Analyses Supporting the Development of Solar Particle Event Shielding Technologies

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Clowdsley, Martha S.; Abston, H. Lee; Simon, Matthew A.; Gallegos, Adam M.

    2013-01-01

    NASA has plans for long duration missions beyond low Earth orbit (LEO). Outside of LEO, large solar particle events (SPEs), which occur sporadically, can deliver a very large dose in a short amount of time. The relatively low proton energies make SPE shielding practical, and the possibility of the occurrence of a large event drives the need for SPE shielding for all deep space missions. The Advanced Exploration Systems (AES) RadWorks Storm Shelter Team was charged with developing minimal mass SPE storm shelter concepts for missions beyond LEO. The concepts developed included "wearable" shields, shelters that could be deployed at the onset of an event, and augmentations to the crew quarters. The radiation transport codes, human body models, and vehicle geometry tools contained in the On-Line Tool for the Assessment of Radiation In Space (OLTARIS) were used to evaluate the protection provided by each concept within a realistic space habitat and provide the concept designers with shield thickness requirements. Several different SPE models were utilized to examine the dependence of the shield requirements on the event spectrum. This paper describes the radiation analysis methods and the results of these analyses for several of the shielding concepts.

  10. A Fast Evaluation Method for Energy Building Consumption Based on the Design of Experiments

    NASA Astrophysics Data System (ADS)

    Belahya, Hocine; Boubekri, Abdelghani; Kriker, Abdelouahed

    2017-08-01

    The building sector is one of the largest energy consumers in Algeria, accounting for about 42% of consumption. The need for energy has continued to grow inordinately, due to a lack of legislation on energy performance in this large consumer sector. Another reason is the changing requirements of users to maintain their comfort, especially in summer in the dry lands of southern Algeria, where the town of Ouargla presents a typical example; this leads to a large amount of electricity consumption through the use of air conditioning. In order to achieve a high-performance building envelope, an optimization of the major building envelope parameters is required; the design of experiments (DOE) approach can determine the most influential parameters and eliminate the less important ones. The study of a building is often complex and time-consuming due to the large number of parameters to consider. This study focuses on reducing the computing time and determining the major parameters of building energy consumption, such as building area, shape factor, orientation, and wall-to-window ratio, in order to propose models that minimize the seasonal energy consumption due to air conditioning needs.
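
    As an illustration of the screening step, a two-level factorial design evaluates every low/high combination of the candidate envelope parameters and ranks them by the magnitude of their main effects on the simulated consumption. The sketch below uses generic parameter names and a fictitious response function in place of a building energy simulation.

    ```python
    # Two-level factorial screening of building-envelope parameters (sketch).
    # The response function stands in for a building energy simulation and is
    # entirely fictitious; only the screening logic is illustrated.
    from itertools import product

    factors = ["floor_area", "shape_factor", "orientation", "window_to_wall_ratio"]

    def simulated_consumption(levels):
        """Placeholder for an annual cooling-energy simulation (arbitrary coefficients)."""
        a, s, o, w = levels          # each level is -1 (low) or +1 (high)
        return 100 + 12 * a + 3 * s + 1.5 * o + 8 * w + 2 * a * w

    runs = list(product([-1, +1], repeat=len(factors)))      # 2^4 = 16 runs
    responses = [simulated_consumption(r) for r in runs]

    # Main effect of a factor = mean response at high level - mean at low level.
    for k, name in enumerate(factors):
        high = [y for r, y in zip(runs, responses) if r[k] == +1]
        low = [y for r, y in zip(runs, responses) if r[k] == -1]
        effect = sum(high) / len(high) - sum(low) / len(low)
        print(f"{name:>22}: main effect = {effect:+.1f}")
    ```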

  11. 10 CFR 140.12 - Amount of financial protection required for other reactors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Amount of financial protection required for other reactors... reactors. (a) Each licensee is required to have and maintain financial protection for each nuclear reactor... of financial protection required for any nuclear reactor under this section be less than $4,500,000...

  12. 10 CFR 140.12 - Amount of financial protection required for other reactors.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Amount of financial protection required for other reactors... reactors. (a) Each licensee is required to have and maintain financial protection for each nuclear reactor... of financial protection required for any nuclear reactor under this section be less than $4,500,000...

  13. [Performance enhancement by carbohydrate intake during sport: effects of carbohydrates during and after high-intensity exercise].

    PubMed

    Beelen, Milou; Cermak, Naomi M; van Loon, Luc J C

    2015-01-01

    Endogenous carbohydrate availability does not provide sufficient energy for prolonged moderate to high-intensity exercise. Carbohydrate ingestion during high-intensity exercise can therefore enhance performance. For exercise lasting 1 to 2.5 hours, athletes are advised to ingest 30-60 g of carbohydrates per hour. Well-trained endurance athletes competing for longer than 2.5 hours at high intensity can metabolise up to 90 g of carbohydrates per hour, provided that a mixture of glucose and fructose is ingested. Athletes participating in intermittent or team sports are advised to follow the same strategies, but the timing of carbohydrate intake depends on the type of sport. If top performance is required again within 24 hours after strenuous exercise, the advice is to supplement endogenous carbohydrate supplies quickly within the first few hours post-exercise by ingesting large amounts of carbohydrate (1.2 g/kg/h) or a lower amount of carbohydrate (0.8 g/kg/h) with a small amount of protein (0.2-0.4 g/kg/h).

  14. Development of Sensors for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Medelius, Pedro

    2005-01-01

    Advances in technology have led to the availability of smaller and more accurate sensors. Computer power to process large amounts of data is no longer the prevailing issue; thus multiple and redundant sensors can be used to obtain more accurate and comprehensive measurements in a space vehicle. The successful integration and commercialization of micro- and nanotechnology for aerospace applications require that a close and interactive relationship be developed between the technology provider and the end user early in the project. Close coordination between the developers and the end users is critical since qualification for flight is time-consuming and expensive. The successful integration of micro- and nanotechnology into space vehicles requires a coordinated effort throughout the design, development, installation, and integration processes.

  15. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  16. GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography

    NASA Technical Reports Server (NTRS)

    Roark, J. H.; Masuoka, C. M.; Frey, H. V.

    2004-01-01

    GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded from the web at http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been successfully used for more than four years, but is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux and UNIX. The minimum system memory requirement is 32 MB; however, loading large data sets may require larger amounts of RAM to function adequately.

  17. A parallel data management system for large-scale NASA datasets

    NASA Technical Reports Server (NTRS)

    Srivastava, Jaideep

    1993-01-01

    The past decade has experienced a phenomenal growth in the amount of data and resultant information generated by NASA's operations and research projects. A key application is the reprocessing problem which has been identified to require data management capabilities beyond those available today (PRAT93). The Intelligent Information Fusion (IIF) system (ROEL91) is an ongoing NASA project which has similar requirements. Deriving our understanding of NASA's future data management needs based on the above, this paper describes an approach to using parallel computer systems (processor and I/O architectures) to develop an efficient parallel database management system to address the needs. Specifically, we propose to investigate issues in low-level record organizations and management, complex query processing, and query compilation and scheduling.

  18. Amino acids of Diclidophora merlangi (Monogenea).

    PubMed

    Arme, C; Whyte, A

    1975-02-01

    The level of free amino acids in Diclidophora merlangi is high, comprising over 500 μmol/g ethanol-extracted dry weight. A single amino acid, proline, constitutes some 70% of the total pool. Analysis of parasite protein and host blood and mucus revealed low proline levels, suggesting that the high free pool content was not related to a requirement for protein synthesis or to its abundance in the diet of the worm. Experiments revealed that proline was not involved specifically in osmoregulation, and the reasons for the large amounts present in Diclidophora remain unknown.

  19. Python-based geometry preparation and simulation visualization toolkits for STEPS

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2014-01-01

    STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754

  20. Supercritical CO2 Extraction of Rice Bran Oil -the Technology, Manufacture, and Applications.

    PubMed

    Sookwong, Phumon; Mahatheeranont, Sugunya

    2017-06-01

    Rice bran is a good source of nutrients and contains large amounts of phytochemicals and antioxidants. Conventional rice bran oil production requires many processes that may deteriorate and degrade these valuable substances. Supercritical CO2 extraction is a green alternative method for producing rice bran oil. This work reviews the production of rice bran oil by supercritical carbon dioxide (SC-CO2) extraction. In addition, the usefulness and advantages of SC-CO2-extracted rice bran oil for edible oil and health purposes are also described.

  1. Design of joint source/channel coders

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The need to transmit large amounts of data over a band limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. The papers presented within this document attempt to deal with these problems from a number of different approaches.

  2. Large-scale seismic signal analysis with Hadoop

    DOE PAGES

    Addair, T. G.; Dodge, D. A.; Walter, W. R.; ...

    2014-02-11

    In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data.
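
    The tiered map/reduce decomposition described above can be imitated in plain Python on synthetic waveforms, as in the hedged sketch below; it is not the authors' Hadoop implementation, and the grouping key, the toy dataset, and the normalized cross-correlation measure are assumptions made only to show the shape of the computation.

```python
import itertools
from collections import defaultdict
import numpy as np

# Toy stand-in for the MapReduce stages: synthetic "seismograms" keyed by event id.
rng = np.random.default_rng(0)
waveforms = {f"evt{i}": rng.standard_normal(256) for i in range(6)}

def map_phase(items):
    # Emit (grouping key, record) pairs; here a single dummy region key.
    for evt_id, trace in items:
        yield "regionA", (evt_id, trace)

def reduce_phase(grouped):
    # Cross correlate all pairs of records that share a key.
    for key, records in grouped.items():
        for (id1, w1), (id2, w2) in itertools.combinations(records, 2):
            a, b = w1 - w1.mean(), w2 - w2.mean()
            cc = np.max(np.correlate(a, b, mode="full"))
            cc /= (np.linalg.norm(a) * np.linalg.norm(b))
            yield key, (id1, id2, float(cc))

# Shuffle step: group mapper output by key before reducing.
grouped = defaultdict(list)
for key, rec in map_phase(waveforms.items()):
    grouped[key].append(rec)

for key, result in reduce_phase(grouped):
    print(key, result)
```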

  3. Large-scale seismic signal analysis with Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addair, T. G.; Dodge, D. A.; Walter, W. R.

    In seismology, waveform cross correlation has been used for years to produce high-precision hypocenter locations and for sensitive detectors. Because correlated seismograms generally are found only at small hypocenter separation distances, correlation detectors have historically been reserved for spotlight purposes. However, many regions have been found to produce large numbers of correlated seismograms, and there is growing interest in building next-generation pipelines that employ correlation as a core part of their operation. In an effort to better understand the distribution and behavior of correlated seismic events, we have cross correlated a global dataset consisting of over 300 million seismograms. This was done using a conventional distributed cluster, and required 42 days. In anticipation of processing much larger datasets, we have re-architected the system to run as a series of MapReduce jobs on a Hadoop cluster. In doing so we achieved a factor of 19 performance increase on a test dataset. We found that fundamental algorithmic transformations were required to achieve the maximum performance increase. Whereas in the original IO-bound implementation, we went to great lengths to minimize IO, in the Hadoop implementation where IO is cheap, we were able to greatly increase the parallelism of our algorithms by performing a tiered series of very fine-grained (highly parallelizable) transformations on the data. Each of these MapReduce jobs required reading and writing large amounts of data.

  4. Development of an interactive data base management system for capturing large volumes of data.

    PubMed

    Moritz, T E; Ellis, N K; VillaNueva, C B; Steeger, J E; Ludwig, S T; Deegan, N I; Shroyer, A L; Henderson, W G; Sethi, G K; Grover, F L

    1995-10-01

    Accurate collection and successful management of data are problems common to all scientific studies. For studies in which large quantities of data are collected by means of questionnaires and/or forms, data base management becomes quite laborious and time consuming. Data base management comprises data collection, data entry, data editing, and data base maintenance. In this article, the authors describe the development of an interactive data base management (IDM) system for the collection of more than 1,400 variables from a targeted population of 6,000 patients undergoing heart surgery requiring cardiopulmonary bypass. The goals of the IDM system are to increase the accuracy and efficiency with which this large amount of data is collected and processed, to reduce research nurse work load through automation of certain administrative and clerical activities, and to improve the process for implementing a uniform study protocol, standardized forms, and definitions across sites.

  5. Toward exascale production of recombinant adeno-associated virus for gene transfer applications.

    PubMed

    Cecchini, S; Negrete, A; Kotin, R M

    2008-06-01

    To gain acceptance as a medical treatment, adeno-associated virus (AAV) vectors require a scalable and economical production method. Recent developments indicate that recombinant AAV (rAAV) production in insect cells is compatible with current good manufacturing practice production on an industrial scale. This platform can fully support development of rAAV therapeutics from tissue culture to small animal models, to large animal models, to toxicology studies, to Phase I clinical trials and beyond. Efforts to characterize, optimize and develop insect cell-based rAAV production have culminated in successful bioreactor-scale production of rAAV, with total yields potentially capable of approaching the exa- (10^18) scale. These advances in large-scale AAV production will allow us to address specific catastrophic, intractable human diseases such as Duchenne muscular dystrophy, for which large amounts of recombinant vector are essential for successful outcome.

  6. Forming-free resistive switching characteristics of Ag/CeO2/Pt devices with a large memory window

    NASA Astrophysics Data System (ADS)

    Zheng, Hong; Kim, Hyung Jun; Yang, Paul; Park, Jong-Sung; Kim, Dong Wook; Lee, Hyun Ho; Kang, Chi Jung; Yoon, Tae-Sik

    2017-05-01

    Ag/CeO2(∼45 nm)/Pt devices exhibited forming-free bipolar resistive switching with a large memory window (low-resistance-state (LRS)/high-resistance-state (HRS) ratio >10^6) at a low switching voltage (<±1-2 V) in voltage sweep condition. Also, they retained a large memory window (>10^4) at a pulse operation (±5 V, 50 μs). The high oxygen ionic conductivity of the CeO2 layer as well as the migration of silver facilitated the formation of filament for the transition to LRS at a low voltage without a high voltage forming operation. Also, a certain amount of defects in the CeO2 layer was required for stable HRS with space-charge-limited-conduction, which was confirmed comparing the devices with non-annealed and annealed CeO2 layers.

  7. The composition of the primitive atmosphere and the synthesis of organic compounds on the early Earth

    NASA Technical Reports Server (NTRS)

    Bada, J. L.; Miller, S. L.

    1985-01-01

    The generally accepted theory for the origin of life on the Earth requires that a large variety of organic compounds be present to form the first living organisms and to provide the energy sources for primitive life either directly or through various fermentation reactions. This can provide a strong constraint on discussions of the formation of the Earth and on the composition of the primitive atmosphere. In order for substantial amounts of organic compounds to have been present on the prebiological Earth, certain conditions must have existed. There is a large body of literature on the prebiotic synthesis of organic compounds in various postulated atmospheres. In this mixture of abiotically synthesized organic compounds, the amino acids are of special interest since they are utilized by modern organisms to synthesize structural materials and a large array of catalytic peptides.

  8. Omics for Precious Rare Biosamples: Characterization of Ancient Human Hair by a Proteomic Approach.

    PubMed

    Fresnais, Margaux; Richardin, Pascale; Sepúlveda, Marcela; Leize-Wagner, Emmanuelle; Charrié-Duhaut, Armelle

    2017-07-01

    Omics technologies have far-reaching applications beyond clinical medicine. A case in point is the analysis of ancient hair samples. Indeed, hair is an important biological indicator that has become a material of choice in archeometry to study the ancient civilizations and their environment. Current characterization of ancient hair is based on elemental and structural analyses, but only few studies have focused on the molecular aspects of ancient hair proteins-keratins-and their conservation state. In such cases, applied extraction protocols require large amounts of raw hair, from 30 to 100 mg. In the present study, we report an optimized new proteomic approach to accurately identify archeological hair proteins, and assess their preservation state, while using a minimum of raw material. Testing and adaptation of three protocols and of nano liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS) parameters were performed on modern hair. On the basis of mass spectrometry data quality, and of the required initial sample amount, the most promising workflow was selected and applied to an ancient archeological sample, dated to about 3880 years before present. Finally, and importantly, we were able to identify 11 ancient hair proteins and to visualize the preservation state of mummy's hair from only 500 μg of raw material. The results presented here pave the way for new insights into the understanding of hair protein alteration processes such as those due to aging and ecological exposures. This work could enable omics scientists to apply a proteomic approach to precious and rare samples, not only in the context of archeometrical studies but also for future applications that would require the use of very small amounts of sample.

  9. 3He and BF3 neutron detector pressure effect and model comparison

    NASA Astrophysics Data System (ADS)

    Lintereur, Azaree; Conlin, Kenneth; Ely, James; Erikson, Luke; Kouzes, Richard; Siciliano, Edward; Stromswold, David; Woodring, Mitchell

    2011-10-01

    Radiation detection systems for homeland security applications must possess the capability of detecting both gamma rays and neutrons. The radiation portal monitor systems that are currently deployed use a plastic scintillator for detecting gamma rays and 3He gas-filled proportional counters for detecting neutrons. Proportional counters filled with 3He are the preferred neutron detectors for use in radiation portal monitor systems because 3He has a large neutron cross-section, is relatively insensitive to gamma-rays, is neither toxic nor corrosive, can withstand extreme environments, and can be operated at a lower voltage than some of the alternative proportional counters. The amount of 3He required for homeland security and science applications has depleted the world supply and there is no longer enough available to fill the demand. Thus, alternative neutron detectors are being explored. Two possible temporary solutions that could be utilized while a more permanent solution is being identified are reducing the 3He pressure in the proportional counters and using boron trifluoride gas-filled proportional counters. Reducing the amount of 3He required in each of the proportional counters would decrease the rate at which 3He is being used; not enough to solve the shortage, but perhaps enough to increase the amount of time available to find a working replacement. Boron trifluoride is not appropriate for all situations as these detectors are less sensitive than 3He, boron trifluoride gas is corrosive, and a much higher voltage is required than what is used with 3He detectors. Measurements of the neutron detection efficiency of 3He and boron trifluoride as a function of tube pressure were made. The experimental results were also used to validate models of the radiation portal monitor systems.

  10. Comparing memory-efficient genome assemblers on stand-alone and cloud infrastructures.

    PubMed

    Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B

    2013-01-01

    A fundamental problem in bioinformatics is genome assembly. Next-generation sequencing (NGS) technologies produce large volumes of fragmented genome reads, which require large amounts of memory to assemble the complete genome efficiently. With recent improvements in DNA sequencing technologies, it is expected that the memory footprint required for the assembly process will increase dramatically and will emerge as a limiting factor in processing widely available NGS-generated reads. In this report, we compare current memory-efficient techniques for genome assembly with respect to quality, memory consumption and execution time. Our experiments prove that it is possible to generate draft assemblies of reasonable quality on conventional multi-purpose computers with very limited available memory by choosing suitable assembly methods. Our study reveals the minimum memory requirements for different assembly programs even when data volume exceeds memory capacity by orders of magnitude. By combining existing methodologies, we propose two general assembly strategies that can improve short-read assembly approaches and result in reduction of the memory footprint. Finally, we discuss the possibility of utilizing cloud infrastructures for genome assembly and we comment on some findings regarding suitable computational resources for assembly.

  11. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.

  12. Downscaling GLOF Hazards: An in-depth look at the Nepal Himalaya

    NASA Astrophysics Data System (ADS)

    Rounce, D.; McKinney, D. C.; Lala, J.

    2016-12-01

    The Nepal Himalaya house a large number of glacial lakes that pose a flood hazard to downstream communities and infrastructure. The modeling of the entire process chain of these glacial lake outburst floods (GLOFs) has been advancing rapidly in recent years. The most common cause of failure is mass movement entering the glacial lake, which triggers a tsunami-like wave that breaches the terminal moraine and causes the ensuing downstream flood. Unfortunately, modeling the avalanche, the breach of the moraine, and the downstream flood requires a large amount of site-specific information and can be very labor-intensive. Therefore, these detailed models need to be paired with large-scale hazard assessments that identify the glacial lakes that are the biggest threat and the triggering events that threaten these lakes. This study discusses the merger of a large-scale, remotely-based hazard assessment with more detailed GLOF models to show how GLOF hazard modeling can be downscaled in the Nepal Himalaya.

  13. Axial high topography and partial melt in the crust and mantle beneath the western Galápagos Spreading Center

    USGS Publications Warehouse

    Blacic, Tanya M.; Ito, Garrett; Shah, Anjana K.; Canales, Juan Pablo; Lin, Jian

    2008-01-01

    The hot spot-influenced western Galápagos Spreading Center (GSC) has an axial topographic high that reaches heights of ∼700 m relative to seafloor depth ∼25 km from the axis. We investigate the cause of the unusual size of the axial high using a model that determines the flexural response to loads resulting from the thermal and magmatic structure of the lithosphere. The thermal structure simulated is appropriate for large amounts of cooling by hydrothermal circulation, which tends to minimize the amount of partial melt needed to explain the axial topography. Nonetheless, results reveal that the large axial high near 92°W requires that either the crust below the magma lens contains >35% partial melt or that 20% melt is present in the lower crust and at least 3% in the mantle within a narrow column (<∼10 km wide) extending to depths of 45–65 km. Because melt fractions >35% in the crust are considered unreasonable, it is likely that much of the axial high region of the GSC is underlain by a narrow region of partially molten mantle of widths approaching those imaged seismically beneath the East Pacific Rise. A narrow zone of mantle upwelling and melting, driven largely by melt buoyancy, is a plausible explanation.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
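
    A minimal sketch of the pattern discussed above, assuming invented categorical data: each simulated processor builds a local contingency table, and the tables are then merged into a global one; this merge is the step whose communication volume grows with the table size. The partitions and categories below are hypothetical.

```python
from collections import Counter
from functools import reduce

# Each inner list plays the role of the data held by one processor.
partitions = [
    [("sunny", "play"), ("rain", "stay"), ("sunny", "play")],
    [("rain", "stay"), ("sunny", "stay"), ("rain", "play")],
]

# "Map" step: build a local contingency table (joint counts) per partition.
local_tables = [Counter(part) for part in partitions]

# "Reduce" step: merge the local tables into a global contingency table.
global_table = reduce(lambda a, b: a + b, local_tables)

# Derived statistics, e.g. joint probabilities, follow directly from the counts.
n = sum(global_table.values())
for (x, y), count in sorted(global_table.items()):
    print(f"P({x}, {y}) = {count / n:.2f}")
```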

  15. Responses to Oxidative and Heavy Metal Stresses in Cyanobacteria: Recent Advances

    PubMed Central

    Cassier-Chauvat, Corinne; Chauvat, Franck

    2014-01-01

    Cyanobacteria, the only known prokaryotes that perform oxygen-evolving photosynthesis, are receiving strong attention in basic and applied research. In using solar energy, water, CO2 and mineral salts to produce a large amount of biomass for the food chain, cyanobacteria constitute the first biological barrier against the entry of toxics into the food chain. In addition, cyanobacteria have the potential for the solar-driven carbon-neutral production of biofuels. However, cyanobacteria are often challenged by toxic reactive oxygen species generated under intense illumination, i.e., when their production of photosynthetic electrons exceeds what they need for the assimilation of inorganic nutrients. Furthermore, in requiring high amounts of various metals for growth, cyanobacteria are also frequently affected by drastic changes in metal availabilities. They are often challenged by heavy metals, which are increasingly spread out in the environment through human activities, and constitute persistent pollutants because they cannot be degraded. Consequently, it is important to analyze the protection against oxidative and metal stresses in cyanobacteria because these ancient organisms have developed most of these processes, a large number of which have been conserved during evolution. This review summarizes what is known regarding these mechanisms, emphasizing their crosstalk. PMID:25561236

  16. Distributed memory parallel Markov random fields using graph partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinemann, C.; Perciano, T.; Ushizima, D.

    Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
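
    The distribute-process-gather pattern described above can be sketched with mpi4py as below. The image, the tiling, and the per-tile threshold are placeholder assumptions; the local step stands in for, and is much simpler than, an actual MRF optimization. Run, for example, with `mpiexec -n 4 python sketch.py`.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Synthetic image, split into one horizontal tile per rank.
    image = np.random.default_rng(0).random((size * 64, 64))
    tiles = np.array_split(image, size, axis=0)
else:
    tiles = None

tile = comm.scatter(tiles, root=0)                 # distribute one tile per rank
labels = (tile > tile.mean()).astype(np.uint8)     # placeholder local "segmentation"
result = comm.gather(labels, root=0)               # collect labelled tiles on rank 0

if rank == 0:
    segmented = np.vstack(result)
    print("segmented image shape:", segmented.shape)
```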

  17. The kinetics of influenza-virus adsorption on iron oxide in the process of viral purification and concentration

    PubMed Central

    Larin, N. M.; Gallimore, P. H.

    1971-01-01

    This paper reports a study carried out to clarify the mechanisms involved in adsorption of influenza A and B viruses on iron oxide. Accordingly, the amounts of virus that are adsorbed from virus suspensions of varying concentrations per unit surface area of magnetic or non-magnetic oxide at fixed temperature and time have been determined. The principles involved are clearly the same as those involved in multiple equilibria during the interaction of particles with a large number of combining sites with different intrinsic affinity. Consequently, the amount of virus that is adsorbed per unit mass of iron oxide depends on the size of the adsorbent area, not on its magnetic property. Owing to a significant difference between the affinities of influenza A and B particles for the binding sites on iron oxide, unit surface area of the adsorbent is invariably capable of adsorbing significantly greater amounts of influenza A than B particles. The practical implications of these findings are that a better understanding of the mechanisms involved in virus adsorption on iron oxide will permit a more efficient separation of virus particles from impurities. The simplicity and the rapidity of the technique and the cheapness of the equipment required suggest that the iron oxide method is of great value for both small- or large-scale viral purification, whether it is used as a single step procedure or as a primary step followed by zonal separation. PMID:5291749

  18. Data Prospecting Framework - a new approach to explore "big data" in Earth Science

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Rushing, J.; Lin, A.; Kuo, K.

    2012-12-01

    Due to advances in sensors, computation and storage, the cost and effort required to produce large datasets have been significantly reduced. As a result, we are seeing a proliferation of large-scale data sets being assembled in almost every science field, especially in geosciences. Opportunities to exploit the "big data" are enormous, as new hypotheses can be generated by combining and analyzing large amounts of data. However, such a data-driven approach to science discovery assumes that scientists can find and isolate relevant subsets from vast amounts of available data. Current Earth Science data systems only provide data discovery through simple metadata and keyword-based searches and are not designed to support data exploration capabilities based on the actual content. Consequently, scientists often find themselves downloading large volumes of data, struggling with large amounts of storage and learning new analysis technologies that will help them separate the wheat from the chaff. New mechanisms of data exploration are needed to help scientists discover the relevant subsets. We present data prospecting, a new content-based data analysis paradigm to support data-intensive science. Data prospecting allows researchers to explore big data and determine and isolate data subsets for further analysis. This is akin to geo-prospecting, in which mineral sites of interest are determined over the landscape through screening methods. The resulting "data prospects" only provide an interaction with and feel for the data through first-look analytics; the researchers would still have to download the relevant datasets and analyze them deeply using their favorite analytical tools to determine if the datasets will yield new hypotheses. Data prospecting combines two traditional categories of data analysis, data exploration and data mining, within the discovery step. Data exploration utilizes manual/interactive methods for data analysis such as standard statistical analysis and visualization, usually on small datasets. On the other hand, data mining utilizes automated algorithms to extract useful information. Humans guide these automated algorithms and specify algorithm parameters (training samples, clustering size, etc.). Data prospecting combines these two approaches using high performance computing and new techniques for efficient distributed file access.

  19. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a "Big Data" problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (∼28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform to effectively execute complex workflows of satellite data processing required by large scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed support vector machine (SVM), decision tree and random forest classifiers available in GEE.
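
    The classifier comparison described above can be imitated off-platform with scikit-learn on stacked multi-temporal pixel features, as in the sketch below. The synthetic features and labels, the feature dimensions, and the classifier settings are assumptions; the snippet does not use the GEE API, and accuracies on random data are near chance, so only the workflow is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 500 pixels, 6 dates x 4 bands stacked as 24 features, 5 crop classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 24))
y = rng.integers(0, 5, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=100),
    "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: overall accuracy {clf.score(X_test, y_test):.2f}")
```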

  20. Financing the Air Transportation Industry

    NASA Technical Reports Server (NTRS)

    Lloyd-Jones, D. J.

    1972-01-01

    The basic characteristics of the air transportation industry are outlined, and it is shown how they affect financing requirements and patterns of production. The choice of financial timing is imperative in order to get the best interest rates available and to ensure a fair return to investors. The fact that the industry cannot store its products has a fairly major effect on the amount of equipment to purchase, the amount of capital investment required, and the amount of return required to offset industry depreciation.

  1. 26 CFR 1.665(c)-1 - Accumulation distributions of certain foreign trusts; in general.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... below zero) by the amount of income required to be distributed currently. (In computing the amount of an... distributable net income reduced (but not below zero) by the amount required to be distributed currently. This... unless there is undistributed net income in at least one of the preceding taxable years which began after...

  2. Exhaust after-treatment system with in-cylinder addition of unburnt hydrocarbons

    DOEpatents

    Coleman, Gerald N.; Kesse, Mary L.

    2007-10-30

    Certain exhaust after-treatment devices, at least periodically, require the addition of unburnt hydrocarbons in order to create reductant-rich exhaust conditions. The present disclosure adds unburnt hydrocarbons to exhaust from at least one combustion chamber by positioning, at least partially within a combustion chamber, a mixed-mode fuel injector operable to inject fuel into the combustion chamber in a first spray pattern with a small average angle relative to a centerline of the combustion chamber and a second spray pattern with a large average angle relative to the centerline of the combustion chamber. An amount of fuel is injected in the first spray pattern into a non-combustible environment within the at least one combustion chamber during at least one of an expansion stroke and exhaust stroke. The exhaust with the unburnt amount of fuel is moved into an exhaust passage via an exhaust valve.

  3. State of gas exchange in recumbent and orthostatic positions and under physical load in healthy persons of varying age, sex and body build

    NASA Technical Reports Server (NTRS)

    Glezer, G. A.; Charyyev, M.; Zilbert, N. L.

    1980-01-01

    Age effect on gas exchange was studied in the recumbent and orthostatic positions and under physical load. In the case of the older age group and for normal as compared with hypersthenic persons, oxygen consumption during rest and during moderate physical overload diminishes. When the vertical position is assumed oxygen consumption in persons of various age groups is distinctly increased, particularly in the elderly group. There is a reduction in the amount of oxygen consumption, oxygen pulse, recovery coefficient, and work efficiency under moderate overload. In persons over 50, physical labor induces a large oxygen requirement and a sharp rise in the level of lactic acid and the blood's lactate/pyruvate ratio. No distinct difference was noted in the amount of oxygen consumed during rest and during physical overload in men and women of the same physical development and age.

  4. Combinatorial gene editing in mammalian cells using ssODNs and TALENs

    NASA Astrophysics Data System (ADS)

    Strouse, Bryan; Bialk, Pawel; Niamat, Rohina A.; Rivera-Torres, Natalia; Kmiec, Eric B.

    2014-01-01

    The regulation of gene editing is being elucidated in mammalian cells and its potential as well as its limitations are becoming evident. ssODNs carry out gene editing by annealing to their complementary sequence at the target site and acting as primers for replication fork extension. To effect a genetic change, a large amount of ssODN molecules must be introduced into cells and as such induce a Reduced Proliferation Phenotype (RPP), a phenomenon in which corrected cells do not proliferate. To overcome this limitation, we have used TAL-Effector Nucleases (TALENs) to increase the frequency, while reducing the amount of ssODN required to direct gene correction. This strategy resolves the problem and averts the serious effects of RPP. The efficiency of gene editing can be increased significantly if cells are targeted while they progress through S phase. Our studies define new reaction parameters that will help guide experimental strategies of gene editing.

  5. Experimental quantum data locking

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Cao, Zhu; Wu, Cheng; Fukuda, Daiji; You, Lixing; Zhong, Jiaqiang; Numata, Takayuki; Chen, Sijing; Zhang, Weijun; Shi, Sheng-Cai; Lu, Chao-Yang; Wang, Zhen; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2016-08-01

    Classical correlation can be locked via quantum means: quantum data locking. With a short secret key, one can lock an exponentially large amount of information in order to make it inaccessible to unauthorized users without the key. Quantum data locking presents a resource-efficient alternative to one-time pad encryption which requires a key no shorter than the message. We report experimental demonstrations of a quantum data locking scheme originally proposed by D. P. DiVincenzo et al. [Phys. Rev. Lett. 92, 067902 (2004), 10.1103/PhysRevLett.92.067902] and a loss-tolerant scheme developed by O. Fawzi et al. [J. ACM 60, 44 (2013), 10.1145/2518131]. We observe that the unlocked amount of information is larger than the key size in both experiments, exhibiting strong violation of the incremental proportionality property of classical information theory. As an application example, we show the successful transmission of a photo over a lossy channel with quantum data (un)locking and error correction.

  6. Imaging samples larger than the field of view: the SLS experience

    NASA Astrophysics Data System (ADS)

    Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco

    2017-06-01

    Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV merging volumetric datasets obtained by region-of-interest tomographies in different 3D positions of the sample with a minimal amount of artefacts and with the ability to handle large amounts of data. The method has been successfully applied for the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.

  7. Challenges in disposing of anthrax waste.

    PubMed

    Lesperance, Ann M; Stein, Steve; Upton, Jaki F; Toomey, Chris

    2011-09-01

    Disasters often create large amounts of waste that must be managed as part of both immediate response and long-term recovery. While many federal, state, and local agencies have debris management plans, these plans often do not address chemical, biological, and radiological contamination. The Interagency Biological Restoration Demonstration's (IBRD) purpose was to holistically assess all aspects of an anthrax incident and assist in the development of a plan for long-term recovery. In the case of wide-area anthrax contamination and the follow-on response and recovery activities, a significant amount of material would require decontamination and disposal. Accordingly, IBRD facilitated the development of debris management plans to address contaminated waste through a series of interviews and workshops with local, state, and federal representatives. The outcome of these discussions was the identification of 3 primary topical areas that must be addressed: planning, unresolved research questions, and resolving regulatory issues.

  8. Challenges in Disposing of Anthrax Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesperance, Ann M.; Stein, Steven L.; Upton, Jaki F.

    2011-09-01

    Disasters often create large amounts of waste that must be managed as part of both immediate response and long-term recovery. While many federal, state, and local agencies have debris management plans, these plans often do not address chemical, biological, and radiological contamination. The Interagency Biological Restoration Demonstration’s (IBRD) purpose was to holistically assess all aspects of an anthrax incident and assist in the development of a plan for long-term recovery. In the case of wide-area anthrax contamination and the follow-on response and recovery activities, a significant amount of material will require decontamination and disposal. Accordingly, IBRD facilitated the development of debris management plans to address contaminated waste through a series of interviews and workshops with local, state, and federal representatives. The outcome of these discussions was the identification of three primary topical areas that must be addressed: 1) planning, 2) unresolved research questions, and 3) resolving regulatory issues.

  9. Polyelectrolyte assisted charge titration spectrometry: Applications to latex and oxide nanoparticles.

    PubMed

    Mousseau, F; Vitorazi, L; Herrmann, L; Mornet, S; Berret, J-F

    2016-08-01

    The electrostatic charge density of particles is of paramount importance for the control of dispersion stability. Conventional methods use potentiometric, conductometric or turbidity titration but require large amounts of sample. Here we report a simple and cost-effective method called polyelectrolyte assisted charge titration spectrometry or PACTS. The technique takes advantage of the propensity of oppositely charged polymers and particles to assemble upon mixing, leading to aggregation or phase separation. The mixed dispersions exhibit a maximum in light scattering as a function of the volumetric ratio X, and the peak position XMax is linked to the particle charge density according to σ ∼ D0·XMax, where D0 is the particle diameter. The PACTS is successfully applied to organic latex, aluminum and silicon oxide particles of positive or negative charge using poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate). The protocol is also optimized with respect to important parameters such as pH and concentration, and to the polyelectrolyte molecular weight. The advantages of the PACTS technique are that it requires minute amounts of sample and that it is suitable for a broad variety of charged nano-objects. Copyright © 2016 Elsevier Inc. All rights reserved.
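
    A minimal sketch of the PACTS read-out, assuming synthetic titration data: locate the scattering maximum over the volumetric ratio X and convert the peak position XMax into a relative charge density via σ ∼ D0·XMax. The intensity curve, particle diameter, and proportionality constant below are placeholders, not values from the paper.

```python
import numpy as np

# Synthetic titration curve: scattered intensity vs. volumetric ratio X,
# peaking at the (assumed) aggregation maximum.
X = np.linspace(0.01, 2.0, 200)
intensity = np.exp(-((X - 0.6) ** 2) / 0.02)   # placeholder scattering data

X_max = X[np.argmax(intensity)]                # peak position of the titration
D0_nm = 60.0                                   # assumed particle diameter (nm)
k = 1.0                                        # calibration-dependent proportionality constant
sigma_rel = k * D0_nm * X_max                  # sigma ~ D0 * X_max (relative units)

print(f"X_max = {X_max:.2f}, relative charge density ~ {sigma_rel:.1f}")
```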

  10. Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces

    PubMed Central

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance. PMID:24695550
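
    The overlapped-partitioning idea can be sketched with scikit-learn's linear discriminant analysis as below. The synthetic features and labels, the number of partitions, and the 50% overlap are illustrative assumptions rather than the study's EEG data or parameters; the point is only how overlapping partitions share training epochs between ensemble members.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for 900 training epochs with 20 features each, binary labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(900, 20))
y = rng.integers(0, 2, size=900)

# Overlapped partitioning: consecutive windows that share half their samples.
n_parts, overlap = 5, 0.5
part_len = int(len(X) / (n_parts - (n_parts - 1) * overlap))   # 300 samples per window
step = int(part_len * (1 - overlap))                           # 150-sample stride

models = []
for start in range(0, len(X) - part_len + 1, step):
    sl = slice(start, start + part_len)
    models.append(LinearDiscriminantAnalysis().fit(X[sl], y[sl]))

# Ensemble decision: average the members' decision scores on new epochs.
X_test = rng.normal(size=(10, 20))
scores = np.mean([m.decision_function(X_test) for m in models], axis=0)
print((scores > 0).astype(int))
```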

  11. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.

  12. Widespread episodic thiamine deficiency in Northern Hemisphere wildlife

    PubMed Central

    Balk, Lennart; Hägerroth, Per-Åke; Gustavsson, Hanna; Sigg, Lisa; Åkerman, Gun; Ruiz Muñoz, Yolanda; Honeyfield, Dale C.; Tjärnlund, Ulla; Oliveira, Kenneth; Ström, Karin; McCormick, Stephen D.; Karlsson, Simon; Ström, Marika; van Manen, Mathijs; Berg, Anna-Lena; Halldórsson, Halldór P.; Strömquist, Jennie; Collier, Tracy K.; Börjeson, Hans; Mörner, Torsten; Hansson, Tomas

    2016-01-01

    Many wildlife populations are declining at rates higher than can be explained by known threats to biodiversity. Recently, thiamine (vitamin B1) deficiency has emerged as a possible contributing cause. Here, thiamine status was systematically investigated in three animal classes: bivalves, ray-finned fishes, and birds. Thiamine diphosphate is required as a cofactor in at least five life-sustaining enzymes that are required for basic cellular metabolism. Analysis of different phosphorylated forms of thiamine, as well as of activities and amount of holoenzyme and apoenzyme forms of thiamine-dependent enzymes, revealed episodically occurring thiamine deficiency in all three animal classes. These biochemical effects were also linked to secondary effects on growth, condition, liver size, blood chemistry and composition, histopathology, swimming behaviour and endurance, parasite infestation, and reproduction. It is unlikely that the thiamine deficiency is caused by impaired phosphorylation within the cells. Rather, the results point towards insufficient amounts of thiamine in the food. By investigating a large geographic area, by extending the focus from lethal to sublethal thiamine deficiency, and by linking biochemical alterations to secondary effects, we demonstrate that the problem of thiamine deficiency is considerably more widespread and severe than previously reported. PMID:27958327

  13. Enhanced visual perception through tone mapping

    NASA Astrophysics Data System (ADS)

    Harrison, Andre; Mullins, Linda L.; Raglin, Adrienne; Etienne-Cummings, Ralph

    2016-05-01

    Tone mapping operators compress high dynamic range images to improve the picture quality on a digital display when the dynamic range of the display is lower than that of the image. However, tone mapping operators have been largely designed and evaluated based on the aesthetic quality of the resulting displayed image or how perceptually similar the compressed image appears relative to the original scene. They also often require per image tuning of parameters depending on the content of the image. In military operations, however, the amount of information that can be perceived is more important than the aesthetic quality of the image and any parameter adjustment needs to be as automated as possible regardless of the content of the image. We have conducted two studies to evaluate the perceivable detail of a set of tone mapping algorithms, and we apply our findings to develop and test an automated tone mapping algorithm that demonstrates a consistent improvement in the amount of perceived detail. An automated, and thereby predictable, tone mapping method enables a consistent presentation of perceivable features, can reduce the bandwidth required to transmit the imagery, and can improve the accessibility of the data by reducing the needed expertise of the analyst(s) viewing the imagery.
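
    As a rough illustration of an automated global operator of the kind discussed above, the sketch below applies a Reinhard-style mapping with a fixed key to synthetic HDR luminance, so no per-image parameter tuning is involved. It is not the operator developed in the study; the luminance data and the key value are assumptions.

```python
import numpy as np

# Synthetic HDR luminance image (strictly positive, heavy-tailed).
rng = np.random.default_rng(0)
hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(480, 640))

eps = 1e-6
log_avg = np.exp(np.mean(np.log(hdr + eps)))   # log-average ("key") of the scene
key = 0.18                                     # fixed key: no per-image tuning
scaled = key * hdr / log_avg
ldr = scaled / (1.0 + scaled)                  # compress luminance into [0, 1)

print(f"dynamic range in: {hdr.max() / hdr.min():.1e}, out: {ldr.max() / ldr.min():.1e}")
```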

  14. Widespread episodic thiamine deficiency in Northern Hemisphere wildlife

    USGS Publications Warehouse

    Balk, Lennart; Hägerroth, Per-Åke; Gustavsson, Hanna; Sigg, Lisa; Akerman, Gun; Ruiz Muñoz, Yolanda; Honeyfield, Dale C.; Tjarnlund, Ulla; Oliveira, Kenneth; Strom, Karin; McCormick, Stephen D.; Karlsson, Simon; Strom, Marika; van Manen, Mathijs; Berg, Anna-Lena; Halldórsson, Halldór P.; Stromquist, Jennie; Collier, Tracy K.; Borjeson, Hans; Morner, Torsten; Hansson, Tomas

    2016-01-01

    Many wildlife populations are declining at rates higher than can be explained by known threats to biodiversity. Recently, thiamine (vitamin B1) deficiency has emerged as a possible contributing cause. Here, thiamine status was systematically investigated in three animal classes: bivalves, ray-finned fishes, and birds. Thiamine diphosphate is required as a cofactor in at least five life-sustaining enzymes that are required for basic cellular metabolism. Analysis of different phosphorylated forms of thiamine, as well as of activities and amount of holoenzyme and apoenzyme forms of thiamine-dependent enzymes, revealed episodically occurring thiamine deficiency in all three animal classes. These biochemical effects were also linked to secondary effects on growth, condition, liver size, blood chemistry and composition, histopathology, swimming behaviour and endurance, parasite infestation, and reproduction. It is unlikely that the thiamine deficiency is caused by impaired phosphorylation within the cells. Rather, the results point towards insufficient amounts of thiamine in the food. By investigating a large geographic area, by extending the focus from lethal to sublethal thiamine deficiency, and by linking biochemical alterations to secondary effects, we demonstrate that the problem of thiamine deficiency is considerably more widespread and severe than previously reported.

  15. MobB protein stimulates nicking at the R1162 origin of transfer by increasing the proportion of complexed plasmid DNA.

    PubMed Central

    Perwez, T; Meyer, R

    1996-01-01

    An essential early step in conjugal mobilization of R1162, nicking of the DNA strand that is subsequently transferred, is carried out in the relaxosome, a complex of two plasmid-encoded proteins and DNA at the origin of transfer (oriT). A third protein, MobB, is also required for efficient mobilization. We show that in the cell this protein increases the proportion of molecules specifically nicked at oriT, resulting in lower yields of covalently closed molecules after alkaline extraction. These nicked molecules largely remain supercoiled, with unwinding presumably constrained by the relaxosome. MobB enhances the sensitivity of the oriT DNA to oxidation by permanganate, indicating that the protein acts by increasing the fraction of complexed molecules. Mutations that significantly reduce the amount of complexed DNA in the cell were isolated. However, plasmids with these mutations were mobilized at nearly the normal frequency, were nicked at a commensurate level, and still required MobB. Our results indicate that the frequency of transfer is determined both by the amount of time each molecule is in the nicked form and by the proportion of complexed molecules in the total population. PMID:8824623

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Moses; Kim, Keonhui; Muljadi, Eduard

    This paper proposes a torque limit-based inertial control scheme of a doubly-fed induction generator (DFIG) that supports the frequency control of a power system. If a frequency deviation occurs, the proposed scheme aims to release a large amount of kinetic energy (KE) stored in the rotating masses of a DFIG to raise the frequency nadir (FN). Upon detecting the event, the scheme instantly increases its output to the torque limit and then reduces the output with the rotor speed so that it converges to the stable operating range. To restore the rotor speed while causing a small second frequency dip (SFD), after the rotor speed converges the power reference is reduced by a small amount and maintained until it meets the reference for maximum power point tracking control. The test results demonstrate that the scheme can improve the FN and maximum rate of change of frequency while causing a small SFD in any wind conditions and in a power system that has a high penetration of wind power, and thus the scheme helps maintain the required level of system reliability. The scheme releases the KE from 2.9 times to 3.7 times the Hydro-Quebec requirement depending on the power reference.
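
    The power-reference shape described above can be sketched as a simple piecewise rule. The function below is an illustration under assumed per-unit values (T_lim, k_mppt and dP are placeholders), not the authors' controller.

      # Illustrative sketch of the power-reference shape: on event detection the
      # reference jumps to the torque limit, follows P = T_lim * w as the rotor
      # decelerates, and after convergence is stepped down by dP until it meets
      # the MPPT curve k * w**3.  Values are per-unit placeholders.
      def power_reference(w, event_detected, converged, T_lim=1.2, k_mppt=0.5, dP=0.05):
          p_mppt = k_mppt * w**3              # maximum power point tracking reference
          if not event_detected:
              return p_mppt
          if not converged:
              return T_lim * w                # ride the torque limit while w falls
          return max(T_lim * w - dP, p_mppt)  # reduced reference to restore rotor speed

      # e.g. per-unit rotor speed of 0.9 during the frequency event
      print(power_reference(0.9, event_detected=True, converged=False))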

  17. Widespread episodic thiamine deficiency in Northern Hemisphere wildlife

    NASA Astrophysics Data System (ADS)

    Balk, Lennart; Hägerroth, Per-Åke; Gustavsson, Hanna; Sigg, Lisa; Åkerman, Gun; Ruiz Muñoz, Yolanda; Honeyfield, Dale C.; Tjärnlund, Ulla; Oliveira, Kenneth; Ström, Karin; McCormick, Stephen D.; Karlsson, Simon; Ström, Marika; van Manen, Mathijs; Berg, Anna-Lena; Halldórsson, Halldór P.; Strömquist, Jennie; Collier, Tracy K.; Börjeson, Hans; Mörner, Torsten; Hansson, Tomas

    2016-12-01

    Many wildlife populations are declining at rates higher than can be explained by known threats to biodiversity. Recently, thiamine (vitamin B1) deficiency has emerged as a possible contributing cause. Here, thiamine status was systematically investigated in three animal classes: bivalves, ray-finned fishes, and birds. Thiamine diphosphate is required as a cofactor in at least five life-sustaining enzymes that are required for basic cellular metabolism. Analysis of different phosphorylated forms of thiamine, as well as of activities and amount of holoenzyme and apoenzyme forms of thiamine-dependent enzymes, revealed episodically occurring thiamine deficiency in all three animal classes. These biochemical effects were also linked to secondary effects on growth, condition, liver size, blood chemistry and composition, histopathology, swimming behaviour and endurance, parasite infestation, and reproduction. It is unlikely that the thiamine deficiency is caused by impaired phosphorylation within the cells. Rather, the results point towards insufficient amounts of thiamine in the food. By investigating a large geographic area, by extending the focus from lethal to sublethal thiamine deficiency, and by linking biochemical alterations to secondary effects, we demonstrate that the problem of thiamine deficiency is considerably more widespread and severe than previously reported.

  18. Mechanical design of a low concentration ratio solar array for a space station application

    NASA Technical Reports Server (NTRS)

    Biss, M. S.; Hsu, L.

    1983-01-01

    This paper describes a preliminary study and conceptual design of a low concentration ratio solar array for a space station application with approximately a 100 kW power requirement. The baseline design calls for a multiple series of inverted, truncated, pyramidal optical elements with a geometric concentration ratio (GCR) of 6. It also calls for low life cycle cost, simple on-orbit maintainability, a 1984 technology readiness date, and gallium arsenide (GaAs) or silicon (Si) solar cell interchangeability. Due to the large area needed to produce the amount of power required for the baseline space station, a symmetrical wing design, making maximum use of the commonality of parts approach, was taken. This paper will describe the mechanical and structural design of a mass-producible solar array that is very easy to tailor to the needs of the individual user requirement.

  19. The Logistics Of Installing Pacs In An Existing Medical Center

    NASA Astrophysics Data System (ADS)

    Saarinen, Allan O.; Goodsitt, Mitchell M.; Loop, John W.

    1989-05-01

    A largely overlooked issue in the Picture Archiving and Communication Systems (PACS) area is the tremendous amount of site planning activity required to install such a system in an existing medical center. Present PACS equipment requires significant hospital real estate, specialized electrical power, cabling, and environmental controls to operate properly. Marshaling the hospital resources necessary to install PACS equipment requires many different players. The site preparation costs are nontrivial and usually include a number of hidden expenses. This paper summarizes the experience of the University of Washington Department of Radiology in installing an extensive digital imaging network (DIN) and PACS throughout the Department and several clinics in the hospital. The major logistical problems encountered at the University are discussed, a few recommendations are made, and the installation costs are documented. Overall, the University's site preparation costs equalled about seven percent (7%) of the total PACS equipment expenditure at the site.

  20. Committed emissions from existing and planned power plants and asset stranding required to meet the Paris Agreement

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Alexander; Hepburn, Cameron; Vogt-Schilb, Adrien; Caldecott, Ben

    2018-05-01

    Over the coming decade, the power sector is expected to invest ~7.2 trillion USD in power plants and grids globally, much of it into CO2-emitting coal and gas plants. These assets typically have long lifetimes and commit large amounts of (future) CO2 emissions. Here, we analyze the historic development of emission commitments from power plants and compare the emissions committed by current and planned plants with remaining carbon budgets. Based on this comparison we derive the likely amount of stranded assets that would be required to meet the 1.5 °C–2 °C global warming goal. We find that even though the growth of emission commitments has slowed down in recent years, currently operating generators still commit us to emissions (~300 GtCO2) above the levels compatible with the average 1.5 °C–2 °C scenario (~240 GtCO2). Furthermore, the current pipeline of power plants would add almost the same amount of additional commitments (~270 GtCO2). Even if the entire pipeline was cancelled, therefore, ~20% of global capacity would need to be stranded to meet the climate goals set out in the Paris Agreement. Our results can help companies and investors re-assess their investments in fossil-fuel power plants, and policymakers strengthen their policies to avoid further carbon lock-in.
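
    The notion of committed emissions can be illustrated with back-of-the-envelope arithmetic for a single plant; the numbers below are illustrative and are not taken from the paper.

      # Back-of-the-envelope committed emissions for one hypothetical plant:
      # capacity * capacity factor * hours/year * emission factor * remaining years.
      capacity_mw = 1000          # 1 GW coal plant (illustrative)
      capacity_factor = 0.6
      emission_factor = 0.9       # tCO2 per MWh, typical for coal
      remaining_years = 40

      committed_t = capacity_mw * capacity_factor * 8760 * emission_factor * remaining_years
      print(f"{committed_t / 1e9:.2f} GtCO2")   # ~0.19 GtCO2 for this single plant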

  1. Natural Flood Management Plus: Scaling Up Nature Based Solutions to Larger Catchments

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; Nicholson, Alex; Adams, Russ

    2017-04-01

    It has been established that networks of NFM features, such as ponds and wetlands, can have a significant effect on flood flow and pollution at local scales (less than 10 km2). However, it is much less certain that NFM and NBS can have an impact at larger scales and protect larger cities. This is especially true for recent storms in the UK, such as storm Desmond, which caused devastation across the north of England. It is possible, using observed rainfall and runoff data, to estimate the amounts of storage that would be required to impact on extreme flood events. Here we will show a toolkit that estimates the amount of storage that can be accrued through a dense network of NFM features. The analysis suggests that the use of many hundreds of small NFM features can have a significant impact on peak flow; however, we still require more storage in order to address extreme events and to satisfy flood engineers who may propose more traditional flood defences. We will also show case studies of larger NFM features positioned on flood plains that can store significantly more flood flow. Example designs of NFM Plus features will be shown. The storage aggregation tool will then show the degree to which storing large amounts of flood flow in NFM Plus features can contribute to flood management and estimate the likely costs. Smaller and larger NFM features, if used together, can produce significant flood storage at a much lower cost than traditional schemes.
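
    A minimal sketch of the storage-aggregation idea, with hypothetical numbers (not the authors' toolkit): sum the volumes of many small NFM features and compare the total with the storage needed to clip a hydrograph to a target peak.

      # Hypothetical storage-aggregation sketch for a small catchment.
      import numpy as np

      def required_storage(flow_m3s, target_peak_m3s, dt_s=3600.0):
          """Volume above the target peak that must be stored (m^3)."""
          excess = np.clip(flow_m3s - target_peak_m3s, 0.0, None)
          return excess.sum() * dt_s

      feature_volumes = np.full(300, 500.0)   # 300 ponds/bunds of ~500 m^3 each
      hydrograph = np.array([5, 20, 80, 120, 90, 40, 15, 5], dtype=float)  # m^3/s, hourly

      need = required_storage(hydrograph, target_peak_m3s=60.0)
      have = feature_volumes.sum()
      print(f"need {need:.0f} m^3, NFM network provides {have:.0f} m^3")
      # The shortfall motivates the larger "NFM Plus" flood-plain features.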

  2. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it predicts the amount of registration error in millimetres densely using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphical Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
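
    The final regression stage can be sketched as follows; the features and data are synthetic placeholders standing in for the local-search shifts and confidence measures, not the paper's implementation.

      # Sketch of the regression stage: a Random Forest regressor mapping
      # local-search shifts and confidence measures to an error in millimetres.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n_voxels = 5000
      # Hypothetical per-voxel features: |shift| found by the local search plus
      # two stereo-style confidence measures.
      X = rng.random((n_voxels, 3))
      y = 5.0 * X[:, 0] + rng.normal(scale=0.3, size=n_voxels)   # synthetic error (mm)

      model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
      print(model.predict(X[:5]))   # dense per-voxel error prediction in mm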

  3. Summary of LaRC 2-inch Erectable Joint Hardware Heritage Test Data

    NASA Technical Reports Server (NTRS)

    Dorsey, John T.; Watson, Judith J.

    2016-01-01

    As the National Space Transportation System (STS, also known as the Space Shuttle) went into service during the early 1980's, NASA envisioned many missions of exploration and discovery that could take advantage of the STS capabilities. These missions included: large orbiting space stations, large space science telescopes and large spacecraft for manned missions to the Moon and Mars. The missions required structures that were significantly larger than the payload volume available on the STS. NASA Langley Research Center (LaRC) conducted studies to design and develop the technology needed to assemble the large space structures in orbit. LaRC focused on technology for erectable truss structures, in particular, the joint that connects the truss struts at the truss nodes. When the NASA research in large erectable space structures ended in the early 1990's, a significant amount of structural testing had been performed on the LaRC 2-inch erectable joint that was never published. An extensive set of historical information and data has been reviewed and the joint structural testing results from this historical data are compiled and summarized in this report.

  4. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. A univariate statistical test (i.e., the Fisher exact test) is used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after the univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. The more recent random forest method provides a more robust approach for screening SNPs at the scale of thousands. However, for larger-scale data, i.e., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required to screen SNPs in whole-genome, large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling tasks.
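
    A hedged sketch of boosting-based screening on synthetic genotype data (not the authors' implementation): fit a gradient-boosted ensemble and rank SNPs by importance.

      # Boosting-based screening sketch on synthetic 0/1/2 genotype codes.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(1)
      n_samples, n_snps = 500, 2000
      X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
      # Synthetic heterogeneous trait: either SNP 10 or SNP 20 raises disease risk.
      risk = (X[:, 10] > 1) | (X[:, 20] > 1)
      y = (risk & (rng.random(n_samples) < 0.8)).astype(int)

      clf = GradientBoostingClassifier(n_estimators=100, max_depth=2).fit(X, y)
      top = np.argsort(clf.feature_importances_)[::-1][:10]
      print("candidate SNPs:", top)   # SNPs 10 and 20 should rank highly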

  5. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
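
    The moment-based compression step can be illustrated on a single synthetic breakthrough curve; the equivalence of the zeroth moment and the mean travel time holds under the paper's particular experimental normalization.

      # Reduce one voxel's breakthrough curve to temporal moments; the normalized
      # first moment is the mean arrival time.  Synthetic curve, illustrative only.
      import numpy as np

      t = np.linspace(0, 100, 1001)                      # minutes
      c = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)         # one voxel's concentration

      m0 = np.trapz(c, t)                  # zeroth temporal moment
      m1 = np.trapz(t * c, t)              # first temporal moment
      print("mean arrival time:", m1 / m0) # ~40 min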

  6. 27 CFR 40.133 - Amount of individual bond.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... amount, the manufacturer shall immediately file a strengthening or superseding bond as required by this subpart. The amount of any such bond (or the total amount including strengthening bonds, if any) need not...

  7. 27 CFR 40.133 - Amount of individual bond.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... amount, the manufacturer shall immediately file a strengthening or superseding bond as required by this subpart. The amount of any such bond (or the total amount including strengthening bonds, if any) need not...

  8. The automatic neutron guide optimizer guide_bot

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads

    2017-09-01

    The guide optimization software guide_bot is introduced, the main purpose of which is to reduce the time spent programming when performing numerical optimization of neutron guides. A limited amount of information on the overall guide geometry and a figure of merit describing the desired beam is used to generate the code necessary to solve the problem. A generated McStas instrument file performs the Monte Carlo ray-tracing, which is controlled by iFit optimization scripts. The resulting optimal guide is thoroughly characterized, both in terms of brilliance transfer from an idealized source and on a more realistic source such as the ESS Butterfly moderator. Basic MATLAB knowledge is required from the user, but no experience with McStas or iFit is necessary. This paper briefly describes how guide_bot is used and some important aspects of the code. A short validation against earlier work is performed which shows the expected agreement. In addition a scan over the vertical divergence requirement, where individual guide optimizations are performed for each corresponding figure of merit, provides valuable data on the consequences of this parameter. The guide_bot software package is best suited for the start of an instrument design project as it excels at comparing a large amount of different guide alternatives for a specific set of instrument requirements, but is still applicable in later stages as constraints can be used to optimize more specific guides.

  9. Lost in space: Onboard star identification using CCD star tracker data without an a priori attitude

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A.; Tolson, Robert H.

    1993-01-01

    There are many algorithms in use today which determine spacecraft attitude by identifying stars in the field of view of a star tracker. Some methods, which date from the early 1960's, compare the angular separation between observed stars with a small catalog. In the last 10 years, several methods have been developed which speed up the process and reduce the amount of memory needed, a key element to onboard attitude determination. However, each of these methods requires some a priori knowledge of the spacecraft attitude. Although the Sun and magnetic field generally provide the necessary coarse attitude information, there are occasions when a spacecraft could become lost at a time when it is not prudent to wait for sunlight. Also, the possibility of efficient attitude determination using only the highly accurate CCD star tracker could lead to fully autonomous spacecraft attitude determination. The need for redundant coarse sensors could thus be eliminated at substantial cost reduction. Some groups have extended their algorithms to implement a computation-intense full sky scan. Some require large data bases. Both storage and speed are concerns for autonomous onboard systems. Neural network technology is even being explored by some as a possible solution, but because of the limited number of patterns that can be stored and the large overhead, nothing concrete has resulted from these efforts. This paper presents an algorithm which, by discretizing the sky and filtering by the visual magnitude of the brightest observed star, speeds up the lost-in-space star identification process while reducing the amount of necessary onboard computer storage compared to existing techniques.
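
    The pruning idea of discretizing the sky and filtering by the magnitude of the brightest observed star can be sketched as below; this is an illustration, not the paper's algorithm, and the catalog and tolerances are placeholders.

      # Discretize the sphere into cells and keep only cells whose brightest
      # catalog star matches the brightest observed star within a tolerance.
      import numpy as np

      def candidate_cells(catalog, observed_brightest_mag, n_ra=36, n_dec=18, tol=0.3):
          """catalog: array of (ra_deg, dec_deg, vmag). Returns candidate (i, j) cells."""
          cells = {}
          for ra, dec, mag in catalog:
              key = (int(ra // (360 / n_ra)), int((dec + 90) // (180 / n_dec)))
              cells[key] = min(cells.get(key, np.inf), mag)   # brightest star per cell
          return [k for k, m in cells.items() if abs(m - observed_brightest_mag) < tol]

      catalog = np.array([[10.0, 20.0, 2.1], [11.0, 21.0, 4.5], [200.0, -30.0, 2.0]])
      print(candidate_cells(catalog, observed_brightest_mag=2.05))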

  10. A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence

    PubMed Central

    Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M

    2014-01-01

    Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
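
    As a simplified illustration (not the paper's full model), time discounting of delayed gains alone already forces multipliers above one: assuming gains equal to the loss appear only after T years and future gains are discounted at rate r, a no-net-loss multiplier must be at least (1 + r)^T.

      # Minimum multiplier from delay and discounting alone (simplified).
      def min_multiplier(delay_years, discount_rate):
          return (1 + discount_rate) ** delay_years

      for T in (20, 50, 80):
          print(T, round(min_multiplier(T, 0.03), 1))
      # 20 -> 1.8, 50 -> 4.4, 80 -> 10.6: long delays alone push multipliers
      # upward, before accounting for uncertainty or partial gains.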

  11. Using Mosix for Wide-Area Computational Resources

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  12. RadioAstron and millimetron space observatories: Multiverse models and the search for life

    NASA Astrophysics Data System (ADS)

    Kardashev, N. S.

    2017-04-01

    The transition from the radio to the millimeter and submillimeter ranges is very promising for studies of galactic nuclei, as well as detailed studies of processes related to supermassive black holes, wormholes, and possible manifestations of multi-element Universe (Multiverse) models. This is shown by observations with the largest interferometer available, the RadioAstron observatory, which will be used for the scientific program of the Millimetron observatory. Observations have also shown the promise of this range for studies of the formation and evolution of planetary systems and searches for manifestations of intelligent life. This is caused by the requirements to use a large amount of condensed matter and energy in large-scale technological activities. This range can also be used efficiently in the organisation of optimal channels for the transmission of information.

  13. Automated multi-dimensional purification of tagged proteins.

    PubMed

    Sigrell, Jill A; Eklund, Pär; Galin, Markus; Hedkvist, Lotta; Liljedahl, Pia; Johansson, Christine Markeland; Pless, Thomas; Torstenson, Karin

    2003-01-01

    The capacity for high throughput purification (HTP) is essential in fields such as structural genomics where large numbers of protein samples are routinely characterized in, for example, studies of structural determination, functionality and drug development. Proteins required for such analysis must be pure and homogenous and available in relatively large amounts. AKTA 3D system is a powerful automated protein purification system, which minimizes preparation, run-time and repetitive manual tasks. It has the capacity to purify up to 6 different His6- or GST-tagged proteins per day and can produce 1-50 mg protein per run at >90% purity. The success of automated protein purification increases with careful experimental planning. Protocol, columns and buffers need to be chosen with the final application area for the purified protein in mind.

  14. Mining Critical Metals and Elements from Seawater: Opportunities and Challenges.

    PubMed

    Diallo, Mamadou S; Kotte, Madhusudhana Rao; Cho, Manki

    2015-08-18

    The availability and sustainable supply of technology metals and valuable elements is critical to the global economy. There is a growing realization that the development and deployment of the clean energy technologies and sustainable products and manufacturing industries of the 21st century will require large amounts of critical metals and valuable elements including rare-earth elements (REEs), platinum group metals (PGMs), lithium, copper, cobalt, silver, and gold. Advances in industrial ecology, water purification, and resource recovery have established that seawater is an important and largely untapped source of technology metals and valuable elements. This feature article discusses the opportunities and challenges of mining critical metals and elements from seawater. We highlight recent advances and provide an outlook of the future of metal mining and resource recovery from seawater.

  15. Tool for Rapid Analysis of Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.

    2013-01-01

    Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.

  16. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    NASA Astrophysics Data System (ADS)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires a great deal of human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have a large number of advantages in the aspect of image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly identify objects of interest such as buildings. Whether in terms of efficiency or accuracy, the deep learning method is superior. This paper describes research on the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.
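
    A toy fully convolutional network of the kind alluded to above can be sketched in a few lines; this is illustrative only and is not the network used in the paper.

      # Toy sketch: a tiny fully convolutional network in PyTorch mapping an
      # image tile to a per-pixel building probability mask.
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 1, kernel_size=1),            # per-pixel building logit
      )

      tile = torch.rand(1, 3, 256, 256)               # one RGB remote sensing tile
      mask = torch.sigmoid(model(tile))               # building probability per pixel
      print(mask.shape)                               # torch.Size([1, 1, 256, 256])
      # Training would minimize nn.BCEWithLogitsLoss() against labelled building
      # footprints over a large sample set, as the abstract emphasizes.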

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
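
    The curvature test can be sketched as follows (illustrative, not the authors' code): check the directional curvature of the Lagrangian Hessian along the step and convexify when it is negative.

      # Directional curvature test along the primal step d; add a regularization
      # (convexification) term when negative curvature is detected.
      import numpy as np

      def needs_convexification(W, d, kappa=1e-8):
          """W: Hessian of the Lagrangian, d: primal search step."""
          return d @ (W @ d) < kappa * (d @ d)

      W = np.array([[1.0, 0.0], [0.0, -2.0]])     # indefinite Hessian
      d = np.array([0.1, 1.0])
      if needs_convexification(W, d):
          W = W + 3.0 * np.eye(2)                 # shift eigenvalues upward
      print(needs_convexification(W, d))          # False after convexification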

  18. Critical review: Uncharted waters? The future of the electricity-water nexus.

    PubMed

    Sanders, Kelly T

    2015-01-06

    Electricity generation often requires large amounts of water, most notably for cooling thermoelectric power generators and moving hydroelectric turbines. This so-called "electricity-water nexus" has received increasing attention in recent years by governments, nongovernmental organizations, industry, and academics, especially in light of increasing water stress in many regions around the world. Although many analyses have attempted to project the future water requirements of electricity generation, projections vary considerably due to differences in temporal and spatial boundaries, modeling frameworks, and scenario definitions. This manuscript is intended to provide a critical review of recent publications that address the future water requirements of electricity production and define the factors that will moderate the water requirements of the electric grid moving forward to inform future research. The five variables identified include changes in (1) fuel consumption patterns, (2) cooling technology preferences, (3) environmental regulations, (4) ambient climate conditions, and (5) electric grid characteristics. These five factors are analyzed to provide guidance for future research related to the electricity-water nexus.

  19. Minimizing the area required for time constants in integrated circuits

    NASA Technical Reports Server (NTRS)

    Lyons, J. C.

    1972-01-01

    When a medium- or large-scale integrated circuit is designed, efforts are usually made to avoid the use of resistor-capacitor time constant generators. The capacitor needed for this circuit usually takes up more surface area on the chip than several resistors and transistors. When the use of this network is unavoidable, the designer usually makes an effort to see that the choice of resistor and capacitor combinations is such that a minimum amount of surface area is consumed. The optimum ratio of resistance to capacitance that will result in this minimum area is equal to the ratio of resistance to capacitance which may be obtained from a unit of surface area for the particular process being used. The minimum area required is a function of the square root of the reciprocal of the products of the resistance and capacitance per unit area. This minimum occurs when the area required by the resistor is equal to the area required by the capacitor.
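
    The stated result can be checked numerically with illustrative process values. Let rho be the resistance and kappa the capacitance obtainable per unit area; minimizing area = R/rho + C/kappa subject to R*C = tau gives R/C = rho/kappa and a minimum area of 2*sqrt(tau/(rho*kappa)).

      # Numerical check of the closed-form optimum with illustrative values.
      import numpy as np

      rho, kappa, tau = 1e4, 1e-10, 1e-4      # ohm/area, F/area, seconds (illustrative)

      R = np.logspace(2, 8, 2000)             # sweep resistance values
      C = tau / R
      area = R / rho + C / kappa

      i = np.argmin(area)
      print(R[i] / C[i], rho / kappa)                   # both ~1e14
      print(area[i], 2 * np.sqrt(tau / (rho * kappa)))  # both ~20 (area units)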

  20. Using Unplanned Fires to Help Suppressing Future Large Fires in Mediterranean Forests

    PubMed Central

    Regos, Adrián; Aquilué, Núria; Retana, Javier; De Cáceres, Miquel; Brotons, Lluís

    2014-01-01

    Despite the huge resources invested in fire suppression, the impact of wildfires has considerably increased across the Mediterranean region since the second half of the 20th century. Modulating fire suppression efforts in mild weather conditions is an appealing but hotly-debated strategy to use unplanned fires and associated fuel reduction to create opportunities for suppression of large fires in future adverse weather conditions. Using a spatially-explicit fire–succession model developed for Catalonia (Spain), we assessed this opportunistic policy by using two fire suppression strategies that reproduce how firefighters in extreme weather conditions exploit previous fire scars as firefighting opportunities. We designed scenarios by combining different levels of fire suppression efficiency and climatic severity for a 50-year period (2000–2050). An opportunistic fire suppression policy induced large-scale changes in fire regimes and decreased the area burnt under extreme climate conditions, but only accounted for up to 18–22% of the area to be burnt in reference scenarios. The area suppressed in adverse years tended to increase in scenarios with increasing amounts of area burnt during years dominated by mild weather. Climate change had counterintuitive effects on opportunistic fire suppression strategies. Climate warming increased the incidence of large fires under uncontrolled conditions but also indirectly increased opportunities for enhanced fire suppression. Therefore, to shift fire suppression opportunities from adverse to mild years, we would require a disproportionately large amount of area burnt in mild years. We conclude that the strategic planning of fire suppression resources has the potential to become an important cost-effective fuel-reduction strategy at large spatial scale. We do however suggest that this strategy should probably be accompanied by other fuel-reduction treatments applied at broad scales if large-scale changes in fire regimes are to be achieved, especially in the wider context of climate change. PMID:24727853

  1. Using unplanned fires to help suppressing future large fires in Mediterranean forests.

    PubMed

    Regos, Adrián; Aquilué, Núria; Retana, Javier; De Cáceres, Miquel; Brotons, Lluís

    2014-01-01

    Despite the huge resources invested in fire suppression, the impact of wildfires has considerably increased across the Mediterranean region since the second half of the 20th century. Modulating fire suppression efforts in mild weather conditions is an appealing but hotly-debated strategy to use unplanned fires and associated fuel reduction to create opportunities for suppression of large fires in future adverse weather conditions. Using a spatially-explicit fire-succession model developed for Catalonia (Spain), we assessed this opportunistic policy by using two fire suppression strategies that reproduce how firefighters in extreme weather conditions exploit previous fire scars as firefighting opportunities. We designed scenarios by combining different levels of fire suppression efficiency and climatic severity for a 50-year period (2000-2050). An opportunistic fire suppression policy induced large-scale changes in fire regimes and decreased the area burnt under extreme climate conditions, but only accounted for up to 18-22% of the area to be burnt in reference scenarios. The area suppressed in adverse years tended to increase in scenarios with increasing amounts of area burnt during years dominated by mild weather. Climate change had counterintuitive effects on opportunistic fire suppression strategies. Climate warming increased the incidence of large fires under uncontrolled conditions but also indirectly increased opportunities for enhanced fire suppression. Therefore, to shift fire suppression opportunities from adverse to mild years, we would require a disproportionately large amount of area burnt in mild years. We conclude that the strategic planning of fire suppression resources has the potential to become an important cost-effective fuel-reduction strategy at large spatial scale. We do however suggest that this strategy should probably be accompanied by other fuel-reduction treatments applied at broad scales if large-scale changes in fire regimes are to be achieved, especially in the wider context of climate change.

  2. Ice ages and the thermal equilibrium of the earth, II

    USGS Publications Warehouse

    Adam, D.P.

    1975-01-01

    The energy required to sustain midlatitude continental glaciations comes from solar radiation absorbed by the oceans. It is made available through changes in relative amounts of energy lost from the sea surface as net outgoing infrared radiation, sensible heat loss, and latent heat loss. Ice sheets form in response to the initial occurrence of a large perennial snowfield in the subarctic. When such a snowfield forms, it undergoes a drastic reduction in absorbed solar energy because of its high albedo. When the absorbed solar energy cannot supply local infrared radiation losses, the snowfield cools, thus increasing the energy gradient between itself and external, warmer areas that can act as energy sources. Cooling of the snowfield progresses until the energy gradients between the snowfield and external heat sources are sufficient to bring in enough (latent plus sensible) energy to balance the energy budget over the snowfield. Much of the energy is imported as latent heat. The snow that falls and nourishes the ice sheet is a by-product of the process used to satisfy the energy balance requirements of the snowfield. The oceans are the primary energy source for the ice sheet because only the ocean can supply large amounts of latent heat. At first, some of the energy extracted by the ice sheet from the ocean is stored heat, so the ocean cools. As it cools, less energy is lost as net outgoing infrared radiation, and the energy thus saved is then available to augment evaporation. The ratio between sensible and latent heat lost by the ocean is the Bowen ratio; it depends in part on the sea surface temperature. As the sea surface temperature falls during a glaciation, the Bowen ratio increases, until most of the available energy leaves the oceans as sensible, rather than latent heat. The ice sheet starves, and an interglacial period begins. The oscillations between stadial and interstadial intervals within a glaciation are caused by the effects of varying amounts of glacial meltwater entering the oceans as a surface layer that acts to reduce the amount of energy available for glacial nourishment. This causes the ice sheet to melt back, which continues the supply of meltwater until the ice sheet diminishes to a size consistent with the reduced rate of nourishment. The meltwater supply then decreases, the rate of nourishment increases, and a new stadial begins. © 1975.

  3. Gene Expression Browser: Large-Scale and Cross-Experiment Microarray Data Management, Search & Visualization

    USDA-ARS?s Scientific Manuscript database

    The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...

  4. Trend and current practices of palm oil mill effluent polishing: Application of advanced oxidation processes and their future perspectives.

    PubMed

    Bello, Mustapha Mohammed; Abdul Raman, Abdul Aziz

    2017-08-01

    Palm oil processing is a multi-stage operation which generates a large amount of effluent. On average, palm oil mill effluent (POME) may contain up to 51,000 mg/L COD, 25,000 mg/L BOD, 40,000 mg/L TS and 6000 mg/L oil and grease. Due to its potential to cause environmental pollution, palm oil mills are required to treat the effluent prior to discharge. Biological treatments using open ponding systems are widely used for POME treatment. Although these processes are capable of reducing the pollutant concentrations, they require long hydraulic retention time and large space, with the effluent frequently failing to satisfy the discharge regulation. Due to more stringent environmental regulations, research interest has recently shifted to the development of polishing technologies for the biologically-treated POME. Various technologies such as advanced oxidation processes, membrane technology, adsorption and coagulation have been investigated. Among these, advanced oxidation processes have shown potential as polishing technologies for POME. This paper offers an overview of the POME polishing technologies, with particular emphasis on advanced oxidation processes and their prospects for large scale applications. Although there are some challenges in large scale applications of these technologies, this review offers some perspectives that could help in overcoming these challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Consumption with Large Sip Sizes Increases Food Intake and Leads to Underestimation of the Amount Consumed

    PubMed Central

    Bolhuis, Dieuwerke P.; Lakemond, Catriona M. M.; de Wijk, Rene A.; Luning, Pieternel A.; de Graaf, Cees

    2013-01-01

    Background: A number of studies have shown that bite and sip sizes influence the amount of food intake. Consuming with small sips instead of large sips means relatively more sips for the same amount of food to be consumed; people may believe that intake is higher, which leads to faster satiation. This effect may be disturbed when people are distracted. Objective: The objective of the study is to assess the effects of sip size in a focused state and a distracted state on ad libitum intake and on the estimated amount consumed. Design: In this 3×2 cross-over design, 53 healthy subjects consumed ad libitum soup with small sips (5 g, 60 g/min), large sips (15 g, 60 g/min), and free sips (where sip size was determined by subjects themselves), in both a distracted and focused state. Sips were administered via a pump. There were no visual cues toward consumption. Subjects then estimated how much they had consumed by filling soup in soup bowls. Results: Intake in the small-sip condition was ∼30% lower than in both the large-sip and free-sip conditions (P<0.001). In addition, subjects underestimated how much they had consumed in the large-sip and free-sip conditions (P<0.03). Distraction led to a general increase in food intake (P = 0.003), independent of sip size. Distraction did not influence sip size or estimations. Conclusions: Consumption with large sips led to higher food intake, as expected. Large sips, whether fixed or chosen by subjects themselves, led to underestimations of the amount consumed. This may be a risk factor for over-consumption. Reducing sip or bite sizes may successfully lower food intake, even in a distracted state. PMID:23372657

  6. Application of fluorescence spectroscopy for on-line bioprocess monitoring and control

    NASA Astrophysics Data System (ADS)

    Boehl, Daniela; Solle, D.; Toussaint, Hans J.; Menge, M.; Renemann, G.; Lindemann, Carsten; Hitzmann, Bernd; Scheper, Thomas-Helmut

    2001-02-01

    Modern bioprocess control requires fast data acquisition and in-time evaluation of bioprocess variables. On-line fluorescence spectroscopy for data acquisition and the use of chemometric methods accomplish these requirements. The presented investigations were performed with fluorescence spectrometers with wide ranges of excitation and emission wavelengths. By detection of several biogenic fluorophores (amino acids, coenzymes and vitamins), a large amount of information about the state of the bioprocess is obtained. For the evaluation of the process variables, partial least squares regression is used. This technique was applied to several bioprocesses: the production of ergotamine by Claviceps purpurea, the production of t-PA (tissue plasminogen activator) by animal cells and brewing processes. The main point of monitoring the brewing processes was to determine the process variables cell count and extract concentration.
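
    A minimal sketch of the chemometric step on synthetic spectra (not the study's data): partial least squares regression mapping flattened 2-D fluorescence spectra to a process variable such as biomass concentration.

      # PLS regression from flattened fluorescence spectra to a process variable.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(2)
      n_samples, n_channels = 60, 300          # excitation/emission pairs, flattened
      latent = rng.random(n_samples)           # "true" biomass trajectory (synthetic)
      X = np.outer(latent, rng.random(n_channels)) + 0.01 * rng.normal(size=(n_samples, n_channels))
      y = 12.0 * latent                        # g/L cell dry weight (synthetic)

      pls = PLSRegression(n_components=3).fit(X, y)
      print(float(pls.score(X, y)))            # R^2 close to 1 on this toy example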

  7. Thermal insulating coating for spacecrafts

    NASA Technical Reports Server (NTRS)

    Kaul, Raj K. (Inventor)

    2005-01-01

    To protect spacecraft and their contents from excessive heat, thermal protection systems are essential. For such thermal protection, metal coatings, ceramic materials, ablative materials, and various matrix materials have all been tried, but none have been found entirely satisfactory. The basis for this thermal protection system is the fact that the heat required to melt a substance is 80 to 100 times larger than the heat required to raise its temperature one degree. This led to the use herein of solid-liquid phase change materials. Unlike conventional heat storage materials, when phase change materials reach the temperature at which they change phase they absorb large amounts of heat without getting hotter. By this invention, then, a coating composition is provided for application to substrates subjected to temperatures above 100 °F. The coating composition includes a phase change material.
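
    The "80 to 100 times" figure can be checked with familiar numbers for water (the coating itself uses other phase change materials):

      # Latent heat of fusion vs. sensible heat per degree, using water as an example.
      heat_of_fusion = 334e3       # J/kg for water ice
      specific_heat = 4186.0       # J/(kg*K) for liquid water

      print(heat_of_fusion / specific_heat)   # ~80: melting absorbs as much heat
                                              # as an ~80 K temperature rise would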

  8. Thermal Insulating Coating for Spacecrafts

    NASA Technical Reports Server (NTRS)

    Kaul, Raj K. (Inventor)

    2005-01-01

    To protect spacecraft and their contents from excessive heat, thermal protection systems are essential. For such thermal protection, metal coatings, ceramic materials, ablative materials, and various matrix materials have all been tried, but none have been found entirely satisfactory. The basis for this thermal protection system is the fact that the heat required to melt a substance is 80 to 100 times larger than the heat required to raise its temperature one degree. This led to the use herein of solid-liquid phase change materials. Unlike conventional heat storage materials, when phase change materials reach the temperature at which they change phase they absorb large amounts of heat without getting hotter. By this invention, then, a coating composition is provided for application to substrates subjected to temperatures above 100 °F. The coating composition includes a phase change material.

  9. Pulsed hybrid field emitter

    DOEpatents

    Sampayan, Stephen E.

    1998-01-01

    A hybrid emitter exploits the electric field created by a rapidly depoled ferroelectric material. Combining the emission properties of a planar thin film diamond emitter with a ferroelectric alleviates the present technological problems associated with both types of emitters and provides a robust, extremely long life, high current density cathode of the type required by emerging microwave power generation, accelerator technology and display applications. This new hybrid emitter is easy to fabricate and not susceptible to the same failures which plague microstructure field emitter technology. Local electrode geometries and electric field are determined independently from those for optimum transport and brightness preservation. Due to the large amount of surface charge created on the ferroelectric, the emitted electrons have significant energy, thus eliminating the requirement for specialized phosphors in emissive flat-panel displays.

  10. Pulsed hybrid field emitter

    DOEpatents

    Sampayan, S.E.

    1998-03-03

    A hybrid emitter exploits the electric field created by a rapidly depoled ferroelectric material. Combining the emission properties of a planar thin film diamond emitter with a ferroelectric alleviates the present technological problems associated with both types of emitters and provides a robust, extremely long life, high current density cathode of the type required by emerging microwave power generation, accelerator technology and display applications. This new hybrid emitter is easy to fabricate and not susceptible to the same failures which plague microstructure field emitter technology. Local electrode geometries and electric field are determined independently from those for optimum transport and brightness preservation. Due to the large amount of surface charge created on the ferroelectric, the emitted electrons have significant energy, thus eliminating the requirement for specialized phosphors in emissive flat-panel displays. 11 figs.

  11. 78 FR 55339 - Regulatory Capital Rules: Regulatory Capital, Implementation of Basel III, Capital Adequacy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-10

    ...The Federal Deposit Insurance Corporation (FDIC) is adopting an interim final rule that revises its risk-based and leverage capital requirements for FDIC-supervised institutions. This interim final rule is substantially identical to a joint final rule issued by the Office of the Comptroller of the Currency (OCC) and the Board of Governors of the Federal Reserve System (Federal Reserve) (together, with the FDIC, the agencies). The interim final rule consolidates three separate notices of proposed rulemaking that the agencies jointly published in the Federal Register on August 30, 2012, with selected changes. The interim final rule implements a revised definition of regulatory capital, a new common equity tier 1 minimum capital requirement, a higher minimum tier 1 capital requirement, and, for FDIC-supervised institutions subject to the advanced approaches risk-based capital rules, a supplementary leverage ratio that incorporates a broader set of exposures in the denominator. The interim final rule incorporates these new requirements into the FDIC's prompt corrective action (PCA) framework. In addition, the interim final rule establishes limits on FDIC-supervised institutions' capital distributions and certain discretionary bonus payments if the FDIC-supervised institution does not hold a specified amount of common equity tier 1 capital in addition to the amount necessary to meet its minimum risk-based capital requirements. The interim final rule amends the methodologies for determining risk-weighted assets for all FDIC-supervised institutions. The interim final rule also adopts changes to the FDIC's regulatory capital requirements that meet the requirements of section 171 and section 939A of the Dodd-Frank Wall Street Reform and Consumer Protection Act. The interim final rule also codifies the FDIC's regulatory capital rules, which have previously resided in various appendices to their respective regulations, into a harmonized integrated regulatory framework. In addition, the FDIC is amending the market risk capital rule (market risk rule) to apply to state savings associations. The FDIC is issuing these revisions to its capital regulations as an interim final rule. The FDIC invites comments on the interaction of this rule with other proposed leverage ratio requirements applicable to large, systemically important banking organizations. This interim final rule otherwise contains regulatory text that is identical to the common rule text adopted as a final rule by the Federal Reserve and the OCC. This interim final rule enables the FDIC to proceed on a unified, expedited basis with the other federal banking agencies pending consideration of other issues. Specifically, the FDIC intends to evaluate this interim final rule in the context of the proposed well- capitalized and buffer levels of the supplementary leverage ratio applicable to large, systemically important banking organizations, as described in a separate Notice of Proposed Rulemaking (NPR) published in the Federal Register August 20, 2013. The FDIC is seeking commenters' views on the interaction of this interim final rule with the proposed rule regarding the supplementary leverage ratio for large, systemically important banking organizations.

  12. Model for fluorescence quenching in light harvesting complex II in different aggregation states.

    PubMed

    Andreeva, Atanaska; Abarova, Silvia; Stoitchkova, Katerina; Busheva, Mira

    2009-02-01

    Low-temperature (77 K) steady-state fluorescence emission spectroscopy and dynamic light scattering were applied to the main chlorophyll a/b protein light harvesting complex of photosystem II (LHC II) in different aggregation states to elucidate the mechanism of fluorescence quenching within LHC II oligomers. Evidence is presented that LHC II oligomers are heterogeneous and consist of large and small particles with different fluorescence yield. At intermediate detergent concentrations the mean size of the small particles is similar to that of trimers, while the size of large particles is comparable to that of aggregated trimers without added detergent. It is suggested that in small particles and trimers the emitter is monomeric chlorophyll, whereas in large aggregates there is also another emitter, which is a poorly fluorescing chlorophyll associate. A model, describing populations of antenna chlorophyll molecules in small and large aggregates in their ground and first singlet excited states, is considered. The model enables us to obtain the ratio of the singlet excited-state lifetimes in small and large particles, the relative amount of chlorophyll molecules in large particles, and the amount of quenchers as a function of the degree of aggregation. These dependencies reveal that the quenching of the chl a fluorescence upon aggregation is due to the formation of large aggregates and the increasing of the amount of chlorophyll molecules forming these aggregates. As a consequence, the amount of quenchers, located in large aggregates, is increased, and their singlet excited-state lifetimes steeply decrease.

  13. Imaging near surface mineral targets with ambient seismic noise

    NASA Astrophysics Data System (ADS)

    Dales, P.; Audet, P.; Olivier, G.

    2017-12-01

    To keep up with global metal and mineral demand, new ore-deposits have to be discovered on a regular basis. This task is becoming increasingly difficult, since easily accessible deposits have been exhausted to a large degree. The typical procedure for mineral exploration begins with geophysical surveys followed by a drilling program to investigate potential targets. Since the retrieved drill core samples are one-dimensional observations, the many holes needed to interpolate and interpret potential deposits can lead to very high costs. To reduce the amount of drilling, active seismic imaging is sometimes used as an intermediary; however, the active sources (e.g. large vibrating trucks or explosive shots) are expensive and unsuitable for operation in remote or environmentally sensitive areas. In recent years, passive seismic imaging using ambient noise has emerged as a novel, low-cost and environmentally friendly approach for exploring the sub-surface. This technique dispenses with active seismic sources and instead uses ambient seismic noise such as ocean waves, traffic or minor earthquakes. Unfortunately, at this point, passive surveys are not capable of reaching the required resolution to image the vast majority of the ore-bodies that are being explored. In this presentation, we will show the results of an experiment where ambient seismic noise recorded on 60 seismic stations was used to image a near-mine target. The target consists of a known ore-body that has been partially exhausted by mining efforts roughly 100 years ago. The experiment examined whether ambient seismic noise interferometry can be used to image the intact and exhausted ore deposit. A drilling campaign was also conducted near the target which offers the opportunity to compare the two methods. If the accuracy and resolution of passive seismic imaging can be improved to that of active surveys (and beyond), this method could become an inexpensive intermediary step in the exploration process and result in a large decrease in the amount of drilling required to investigate and identify high-grade ore deposits.
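
    The core of ambient noise interferometry can be sketched with synthetic data: cross-correlating long noise records from two stations recovers the inter-station travel time. This is illustrative only; the station geometry and numbers are placeholders.

      # Cross-correlate two synthetic noise records; the correlation peaks at the
      # inter-station propagation delay.
      import numpy as np
      from scipy.signal import correlate

      fs, minutes = 100, 10
      n = fs * 60 * minutes
      rng = np.random.default_rng(3)
      noise = rng.normal(size=n)

      delay = int(0.8 * fs)                       # 0.8 s propagation between stations
      station_a = noise
      station_b = np.roll(noise, delay) + 0.5 * rng.normal(size=n)

      xcorr = correlate(station_b, station_a, mode="full", method="fft")
      lag = np.argmax(xcorr) - (n - 1)
      print("recovered travel time:", lag / fs, "s")   # ~0.8 s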

  14. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and cognitive value of visualizations of resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity as we know what to expect from the results. Furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies in Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
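
    Regular sampling of an unstructured grid can be sketched as resampling a scattered field onto a coarse regular lattice by nearest-neighbour lookup; this is an illustration of the general idea, not the paper's pipeline.

      # Resample a scattered field onto a coarse regular grid (nearest neighbour).
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(4)
      points = rng.random((200_000, 3))            # unstructured node coordinates
      values = np.sin(10 * points[:, 0])           # a field defined on the nodes

      n = 32                                       # 32^3 regular sample grid
      axes = np.linspace(0, 1, n)
      grid = np.stack(np.meshgrid(axes, axes, axes, indexing="ij"), axis=-1).reshape(-1, 3)

      _, nearest = cKDTree(points).query(grid)     # index of nearest unstructured node
      sampled = values[nearest].reshape(n, n, n)   # regular-grid representation
      print(sampled.shape, f"{points.nbytes / sampled.nbytes:.0f}x smaller")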

  15. Videometric Applications in Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu

    1997-01-01

    Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.

  16. A decentralized training algorithm for Echo State Networks in distributed big data applications.

    PubMed

    Scardapane, Simone; Wang, Dianhui; Panella, Massimo

    2016-06-01

    The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.
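
    For orientation, only the linear readout of an Echo State Network is trained; in the centralized case this amounts to a ridge regression on the collected reservoir states, and the decentralized algorithm solves that same least-squares problem with ADMM across agents. The sketch below shows only the centralized baseline; reservoir size, scaling and regularization values are illustrative assumptions.

```python
import numpy as np

def train_esn_readout(inputs, targets, n_reservoir=200, spectral_radius=0.9,
                      ridge=1e-6, seed=0):
    """Drive a random reservoir with `inputs` and fit a ridge-regression readout.

    inputs : (T, n_in), targets : (T, n_out). Returns (W_in, W, W_out).
    """
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # echo-state scaling

    # collect reservoir states
    x = np.zeros(n_reservoir)
    states = np.empty((inputs.shape[0], n_reservoir))
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x

    # ridge-regression readout: W_out = (S^T S + ridge I)^-1 S^T Y
    A = states.T @ states + ridge * np.eye(n_reservoir)
    W_out = np.linalg.solve(A, states.T @ targets)
    return W_in, W, W_out
```

    In the decentralized setting described in the abstract, each agent would form its local S^T S and S^T Y from its own data, and ADMM would be used to reach consensus on W_out through neighbor-to-neighbor exchanges only, without sharing the training patterns themselves.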

  17. A hypertext system that learns from user feedback

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1994-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation). Besides providing an hypertext interface for browsing large documents, the CID system automatically acquires and reuses the context in which previous searches were appropriate. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. Thus, the user continually augments and refines the intelligence of the retrieval system. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time. We successfully tested the CID system with users of the Space Station Freedom requirements documents. We are currently extending CID to other application domains (Space Shuttle operations documents, airplane maintenance manuals, and on-line training). We are also exploring the potential commercialization of this technique.

  18. Proteinortho: detection of (co-)orthologs in large-scale analysis.

    PubMed

    Lechner, Marcus; Findeiss, Sven; Steiner, Lydia; Marz, Manja; Stadler, Peter F; Prohaska, Sonja J

    2011-04-28

Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases. The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes. Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.
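
    The reciprocal best alignment (best-hit) heuristic that Proteinortho extends can be stated in a few lines: protein a in genome A and protein b in genome B are paired when each is the other's best-scoring hit. The sketch below assumes precomputed hit tables in an illustrative format; it is not Proteinortho's actual data structure.

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Return orthologous pairs from two directed hit tables.

    hits_ab : dict mapping protein in genome A -> list of (protein_in_B, bitscore)
    hits_ba : dict mapping protein in genome B -> list of (protein_in_A, bitscore)
    """
    best_ab = {a: max(hl, key=lambda h: h[1])[0] for a, hl in hits_ab.items() if hl}
    best_ba = {b: max(hl, key=lambda h: h[1])[0] for b, hl in hits_ba.items() if hl}
    # keep a pair only if the best hits agree in both directions
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]
```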

  19. Time-Shifted Boundary Conditions Used for Navier-Stokes Aeroelastic Solver

    NASA Technical Reports Server (NTRS)

    Srivastava, Rakesh

    1999-01-01

Under the Advanced Subsonic Technology (AST) Program, an aeroelastic analysis code (TURBO-AE) based on Navier-Stokes equations is currently under development at NASA Lewis Research Center's Machine Dynamics Branch. For a blade row, aeroelastic instability can occur in any of the possible interblade phase angles (IBPAs). Analyzing small IBPAs is very computationally expensive because a large number of blade passages must be simulated. To reduce the computational cost of these analyses, we used time shifted, or phase-lagged, boundary conditions in the TURBO-AE code. These conditions can be used to reduce the computational domain to a single blade passage by requiring the boundary conditions across the passage to be lagged depending on the IBPA being analyzed. The time-shifted boundary conditions currently implemented are based on the direct-store method. This method requires large amounts of data to be stored over a period of the oscillation cycle. On CRAY computers this is not a major problem because solid-state devices can be used for fast input and output to read and write the data onto a disk instead of storing it in core memory.
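
    A minimal sketch of the direct-store bookkeeping described above: boundary data are recorded over one oscillation period and replayed with a time shift corresponding to the interblade phase angle. Array shapes and the sign convention of the lag are assumptions for illustration, not taken from TURBO-AE.

```python
import numpy as np

class DirectStoreBoundary:
    """Store one oscillation period of boundary data and replay it with a time lag."""

    def __init__(self, steps_per_period, boundary_shape, ibpa_deg):
        self.n = steps_per_period
        # lag in time steps corresponding to the interblade phase angle
        self.lag = int(round(ibpa_deg / 360.0 * steps_per_period))
        # boundary_shape is a tuple, e.g. (n_points, n_flow_variables)
        self.history = np.zeros((steps_per_period,) + boundary_shape)

    def store(self, step, boundary_values):
        """Record boundary data for the current time step (overwrites the last period)."""
        self.history[step % self.n] = boundary_values

    def lagged(self, step):
        """Boundary data for the opposite periodic face, shifted by the phase lag."""
        return self.history[(step - self.lag) % self.n]
```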

  20. Streptococcus group B typing: comparison of counter-immunoelectrophoresis with the precipitin method.

    PubMed

    Kubín, V; Jelínková, J; Franêk, J

    1977-07-01

The method of counter-immunoelectrophoresis (CIE) was tested for its applicability to group B streptococcus typing. The results obtained were compared with typing by the ring precipitin test. Identical antigens and identical hyperimmune typing serum batches had been used in both methods. A large majority of 75 freshly isolated strains were typed identically by both methods. Five strains with a weak antigenic outfit were untypable by the ring precipitin test but were typed by CIE owing to the higher sensitivity of the CIE method. Two strains were typable by the precipitin test but not by CIE; an explanation for this phenomenon is lacking. The CIE method in group B typing is specific, rapid, highly sensitive and relatively simple. It requires strict maintenance of standard conditions. The method is economical with respect to manipulation and material and requires only small amounts of diagnostic antisera. Potent antisera may be used diluted. Moreover, sera for CIE typing need not be absorbed to remove group B antibodies. The CIE method is practicable for group B streptococcus typing, especially in laboratories carrying out routine large-scale type identification.

  1. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
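
    Assuming the AP-Cluster step refers to affinity propagation clustering of the received-signal-strength vectors, a minimal sketch of extracting representative fingerprints could look as follows; the feature layout and the missing-AP placeholder value are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def representative_fingerprints(fingerprints):
    """Cluster crowdsourced RSSI fingerprints and return one exemplar per cluster.

    fingerprints : (N, n_aps) array of received signal strengths (dBm), with a
    fixed AP ordering and e.g. -100 used as a placeholder for unheard APs.
    """
    ap = AffinityPropagation(random_state=0).fit(fingerprints)
    exemplars = fingerprints[ap.cluster_centers_indices_]
    return exemplars, ap.labels_
```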

  2. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.

  3. Agronomic and environmental consequences of using liquid mineral concentrates on arable farms.

    PubMed

    Schils, René L M; Postma, Romke; van Rotterdam, Debby; Zwart, Kor B

    2015-12-01

    In regions with intensive livestock systems, the processing of manure into liquid mineral concentrates is seen as an option to increase the nutrient use efficiency of manures. The agricultural sector anticipates that these products may in future be regarded as regular mineral fertilisers. We assessed the agronomic suitability and impact on greenhouse gas (GHG) and ammonia emissions of using liquid mineral concentrates on arable farms. The phosphate requirements on arable farms were largely met by raw pig slurry, given its large regional availability. After the initial nutrient input by means of pig slurry, the nitrogen/phosphate ratio of the remaining nutrient crop requirements determined the additional amount of liquid mineral concentrates that can be used. For sandy soils, liquid mineral concentrates could supply 50% of the nitrogen requirement, whereas for clay soils the concentrates did not meet the required nitrogen/phosphate ratio. The total GHG emissions per kg of plant available nitrogen ranged from -65 to 33 kg CO2 -equivalents. It increased in the order digestates < mineral fertiliser < raw slurries. Liquid mineral concentrates had limited added value for arable farms. For an increased suitability it is necessary that liquid mineral concentrates do not contain phosphate and that the nitrogen availability is increased. In the manure-processing chain, anaerobic digestion had a dominant and beneficial effect on GHG emissions. © 2015 Society of Chemical Industry.

  4. Process-in-Network: A Comprehensive Network Processing Approach

    PubMed Central

    Urzaiz, Gabriel; Villa, David; Villanueva, Felix; Lopez, Juan Carlos

    2012-01-01

    A solid and versatile communications platform is very important in modern Ambient Intelligence (AmI) applications, which usually require the transmission of large amounts of multimedia information over a highly heterogeneous network. This article focuses on the concept of Process-in-Network (PIN), which is defined as the possibility that the network processes information as it is being transmitted, and introduces a more comprehensive approach than current network processing technologies. PIN can take advantage of waiting times in queues of routers, idle processing capacity in intermediate nodes, and the information that passes through the network. PMID:22969390

  5. A Numerical Model for Wind-Wave Prediction in Deep Water.

    DTIC Science & Technology

    1983-01-01

amounts of gage data are available. Additionally, if all steps are modeled correctly, factors such as direction and angular spreading, which are not... spherical orthogonal system if large oceanic areas are to be modeled. The wave model requires a rectangular grid and wind input at each of the... [remainder of the scanned abstract is unreadable OCR of FORTRAN source code]

  6. Research on key technologies of data processing in internet of things

    NASA Astrophysics Data System (ADS)

    Zhu, Yangqing; Liang, Peiying

    2017-08-01

Data in the Internet of Things (IoT) are polymorphic and heterogeneous, arrive in large amounts, and must be processed in real time. Traditional structured, static batch processing methods do not meet these requirements. This paper studies a middleware that can integrate heterogeneous IoT data by converting different data formats into a unified format, and designs an IoT data processing model based on the Storm stream computing architecture, integrating existing Internet security technology to build a security system for IoT data processing. This provides a reference for the efficient transmission and processing of IoT data.

  7. Co-firing of paper mill sludge and coal in an industrial circulating fluidized bed boiler.

    PubMed

    Tsai, Meng-Yuan; Wu, Keng-Tung; Huang, Chin-Cheng; Lee, Hom-Ti

    2002-01-01

Co-firing of coal and paper mill sludge was conducted in a 103 MWth circulating fluidized bed boiler to investigate the effect of the sludge feeding rate on emissions of SOx, NOx, and CO. The preliminary results show that emissions of SOx and NOx decrease with increasing sludge feeding rate, but CO shows the reverse tendency due to the decrease in combustion temperature caused by the large amount of moisture in the sludge. All emissions met the local environmental requirements. The combustion ashes could be recycled as feed materials in the cement manufacturing process.

  8. [The relationship between the placebo effect and spontaneous improvement in research on antidepressants. Are placebos powerless?].

    PubMed

    Hougaard, Esben

    2005-08-08

    Clinical trials of antidepressant medications have generally found large changes in groups given a placebo, which may be due to either spontaneous remission or a true placebo effect. This paper reviews the evidence for a true placebo effect in the treatment of unipolar depressed outpatients. Although there is no evidence from experimental studies, a rather substantial amount of circumstantial evidence indicates a true placebo effect. This article raises the question of whether it is meaningful to require experimental evidence for a loose and unspecified concept involving varying components such as placebo.

  9. Big Data and Ambulatory Care

    PubMed Central

    Thorpe, Jane Hyatt; Gray, Elizabeth Alexandra

    2015-01-01

    Big data is heralded as having the potential to revolutionize health care by making large amounts of data available to support care delivery, population health, and patient engagement. Critics argue that big data's transformative potential is inhibited by privacy requirements that restrict health information exchange. However, there are a variety of permissible activities involving use and disclosure of patient information that support care delivery and management. This article presents an overview of the legal framework governing health information, dispels misconceptions about privacy regulations, and highlights how ambulatory care providers in particular can maximize the utility of big data to improve care. PMID:25401945

  10. An Evidence-Based Practical Approach to Pediatric Otolaryngology in the Developing World.

    PubMed

    Belcher, Ryan H; Molter, David W; Goudy, Steven L

    2018-06-01

Despite humanitarian otolaryngology groups traveling in record numbers to resource-limited areas to treat pediatric otolaryngology disease processes and train local providers, there remains a large burden of unmet needs. Little information has been published from the developing world from an otolaryngology standpoint. As would be expected, the little information that does exist involves some of the most common pediatric otolaryngology diseases and surgical burdens, including childhood hearing loss, otitis media, adenotonsillectomies, airway obstructions requiring tracheostomies, foreign body aspirations, and craniomaxillofacial surgeries, including cleft lip and palate. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Guild structure of a riparian avifauna relative to seasonal cattle grazing

    USGS Publications Warehouse

    Knopf, F.L.; Sedgwick, J.A.; Cannon, R. W.

    1988-01-01

    Knopf et al. found that summer cattle grazing has an adverse effect on the presence of certain willow-dependent songbirds. Pastures that have historical summer grazing no longer have the Willow flycatcher, Lincoln's sparrow and the White-crowned sparrow present. Yet in these same areas, birds like the American Robin, Brown-headed cowbird and the Red-winged blackbird have increased in density. One possible answer for the decrease in some songbirds is the fact that the main focus of the Arapaho National Wildlife Refuge is on waterfowl habitat, which requires large amounts of open space (opposite of desirable songbird habitat).

  12. Automatic building identification under bomb damage conditions

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II

    2009-05-01

    Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly use available band width (BW) precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.

  13. Introduction to Remote Sensing Image Registration

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2017-01-01

For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the type of accuracy that is required. This paper will first present a general overview of remote sensing image registration and then go over a few specific methods and their applications.

  14. Potential converter for laser-power beaming

    NASA Technical Reports Server (NTRS)

    Walker, Gilbert H.; Williams, Michael D.; Schuster, Gregory L.; Iles, Peter A.

    1991-01-01

Future space missions, such as those associated with the Space Exploration Initiative (SEI), will require large amounts of power for operation of bases, rovers, and orbit transfer vehicles. One method for supplying this power is to beam power from a space-based or Earth-based laser power station to a receiver where laser photons can be converted to electricity. Previous research has described such laser power stations orbiting the Moon and beaming power to a receiver on the surface of the Moon by using arrays of diode lasers. Photovoltaic converters that can be efficiently used with these diode lasers are described.

  15. The Kepler End-to-End Model: Creating High-Fidelity Simulations to Test Kepler Ground Processing

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Peters, Dan J.; Tenenbaum, Peter P.; Klaus, Todd C.; Gunter, Jay P.; Cote, Miles T.; Caldwell, Douglas A.

    2010-01-01

    The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing 100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler End-to-End Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include the rapid production of large amounts of simulated data, extensibility and maintainability.

  16. Species profiles: Life histories and environmental requirements of coastal fishes and invertebrates (Mid-Atlantic)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, S.G.; Van Den Avyle, M.J.

    1989-08-01

Species profiles are literature summaries of the life history, distribution and environmental requirements of coastal fishes and invertebrates. Profiles are prepared to assist with environmental impact assessment. The Atlantic menhaden (Brevoortia tyrannus) is an important commercial fish along the Atlantic coast. In the South Atlantic Region, Atlantic menhaden spawn during winter in continental shelf waters. Adults then move inshore and northward in spring; some move into estuaries as far as the brackish-freshwater boundary. Atlantic menhaden larvae in the South Atlantic Region enter estuaries after 1 to 3 months at sea. Young fish move into the shallow regions of estuaries and seem to prefer vegetated marsh habitats. Atlantic menhaden are size-selective plankton feeders as larvae, and filter feeders as juveniles and adults. Due to their large population size, individual growth rates, and seasonal movements, Atlantic menhaden annually consume and redistribute large amounts of energy and materials. They are also important prey for large game fishes such as bluefish (Pomatomus saltatrix), striped bass (Morone saxatilis), and bluefin tuna (Thunnus thynnus). The Atlantic menhaden is associated with estuarine and nearshore systems during all phases of its life cycle. Young menhaden require these food-rich habitats to survive and grow. Destruction of estuarine wetlands has decreased nursery habitat available to Atlantic menhaden and other estuarine-dependent species. 115 refs., 2 figs., 2 tabs.

  17. SAMIS- STANDARD ASSEMBLY-LINE MANUFACTURING INDUSTRY SIMULATION

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Standard Assembly-Line Manufacturing Industry Simulation (SAMIS) program was originally developed to model a hypothetical U. S. industry which manufactures silicon solar modules for use in electricity generation. The SAMIS program has now been generalized to the extent that it should be useful for simulating many different production-line manufacturing industries and companies. The most important capability of SAMIS is its ability to "simulate" an industry based on a model developed by the user with the aid of the SAMIS program. The results of the simulation are a set of financial reports which detail the requirements, including quantities and cost, of the companies and processes which comprise the industry. SAMIS provides a fair, consistent, and reliable means of comparing manufacturing processes being developed by numerous independent efforts. It can also be used to assess the industry-wide impact of changes in financial parameters, such as cost of resources and services, inflation rates, interest rates, tax policies, and required return on equity. Because of the large amount of data needed to describe an industry, a major portion of SAMIS is dedicated to data entry and maintenance. This activity in SAMIS is referred to as model management. Model management requires a significant amount of interaction through a system of "prompts" which make it possible for persons not familiar with computers, or the SAMIS program, to provide all of the data necessary to perform a simulation. SAMIS is written in TURBO PASCAL (version 2.0 required for compilation) and requires 10 meg of hard disk space, an 8087 coprocessor, and an IBM color graphics monitor. Executables and source code are provided. SAMIS was originally developed in 1978; the IBM PC version was developed in 1985. Release 6.1 was made available in 1986, and includes the PC-IPEG program.

  18. Physical feedbacks on stratus cloud amount resolve the Faint Young Sun Paradox

    NASA Astrophysics Data System (ADS)

    Goldblatt, C.; McCusker, K. E.; McDonald, V.

    2017-12-01

Geological evidence suggests that Earth was mostly warm and not glaciated during the Archean, despite Earth receiving only around 80% of the present day amount of sunlight. 1-D models require higher abundances of greenhouse gases than geochemical proxies permit, whereas some 3-D models permit lower greenhouse gas inventories, but for reasons which are somewhat opaque. Here, we show that physically motivated changes to low cloud (stratus) amount likely played a large role in resolving the FYSP. The amount of stratus cloud is strongly linked to lower tropospheric stability [Slingo 1987; Woods and Bretherton 2006], with a stronger inversion at the planetary boundary layer trapping moisture and giving a higher stratus cloud fraction. By hypothesis, an Archean situation where the surface is heated less by sunlight and the atmosphere is heated more by absorption of thermal radiation with a stronger greenhouse should feature a weaker inversion and a less stable lower troposphere. Hence, with a weaker sun but stronger greenhouse, we expect fewer stratus clouds. To test this hypothesis, we run a set of carefully controlled General Circulation Model experiments using the Community Atmosphere Model. We change only the solar constant and CO2 mixing ratio, increasing CO2 and decreasing the solar constant so that the global mean surface temperature remains the same. We do not change anything else, so as to focus directly on a single hypothesis and to keep the model as near to known conditions as possible. We find that at 80% of the modern solar constant: (1) only 30,000 ppmv CO2 is required to maintain modern surface temperatures, versus the expectation of 80,000 ppmv from radiative forcing calculations. (2) The dominant change is to low cloud fraction, decreasing from 34% to 25%, with an associated reduction in short-wave cloud forcing of 20 W/m². This can be set in the context of a 50 W/m² radiative deficit due to the weaker sun, so the cloud feedback contributes two-fifths of the required warming. (3) There is a reduced meridional temperature gradient such that the poles are 4 to 8 K warmer than present, which further contributes to the avoidance of glaciation.
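
    The quoted energy-budget figures can be checked with back-of-the-envelope arithmetic; the present-day solar constant and planetary albedo used below are standard textbook values, not numbers taken from the study.

```python
S0 = 1361.0        # present-day solar constant, W/m^2 (assumed)
albedo = 0.30      # present-day planetary albedo (assumed unchanged)

absorbed_now = (S0 / 4) * (1 - albedo)           # ~238 W/m^2 absorbed today
deficit = 0.20 * absorbed_now                    # 20% weaker sun -> ~48 W/m^2
cloud_forcing_change = 20.0                      # quoted reduction in SW cloud forcing, W/m^2

print(round(deficit, 1))                         # ~47.6, consistent with the quoted "50 W/m^2"
print(round(cloud_forcing_change / deficit, 2))  # ~0.42, i.e. about two-fifths of the warming
```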

  19. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring of data. Programs such as GIS programs are commonly used to facilitate preparation of the model input data and analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. The modification to MODFLOW and the Streamflow-Routing package was minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.

  20. Toward accelerating landslide mapping with interactive machine learning techniques

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne

    2013-04-01

Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, by contrast, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%. The assessment also included an experimental evaluation of the uncertainties of manual mappings from multiple experts and demonstrated strong relationships between the uncertainty of the experts and the machine learning model.
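
    For reference, the generic pool-based active-learning loop that the region-based query heuristic builds on looks like the sketch below; the classifier and the least-confidence uncertainty measure are illustrative stand-ins, not the specific choices made in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling(X_pool, y_pool_oracle, seed_idx, n_queries=50, batch=5):
    """Pool-based active learning with least-confidence sampling.

    X_pool : (N, d) features; y_pool_oracle : (N,) labels revealed only on query;
    seed_idx : indices of the small initial training set.
    """
    seed_set = set(seed_idx)
    labeled = list(seed_idx)
    unlabeled = [i for i in range(len(X_pool)) if i not in seed_set]
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(n_queries):
        clf.fit(X_pool[labeled], y_pool_oracle[labeled])
        proba = clf.predict_proba(X_pool[unlabeled])
        uncertainty = 1.0 - proba.max(axis=1)       # least confidence
        pick = np.argsort(uncertainty)[-batch:]     # most uncertain pool positions
        for p in sorted(pick, reverse=True):
            labeled.append(unlabeled.pop(p))        # the "user" labels these samples
    return clf.fit(X_pool[labeled], y_pool_oracle[labeled])
```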

  1. The global Cretaceous-Tertiary fire: Biomass or fossil carbon

    NASA Technical Reports Server (NTRS)

    Gilmour, Iain; Guenther, Frank

    1988-01-01

    The global soot layer at the K-T boundary indicates a major fire triggered by meteorite impact. However, it is not clear whether the principal fuel was biomass or fossil carbon. Forests are favored by delta value of C-13, which is close to the average for trees, but the total amount of elemental C is approximately 10 percent of the present living carbon, and thus requires very efficient conversion to soot. The PAH was analyzed at Woodside Creek, in the hope of finding a diagnostic molecular marker. A promising candidate is 1-methyl-7-isopropyl phenanthrene (retene,), which is probably derived by low temperature degradation of abietic acid. Unlike other PAH that form by pyrosynthesis at higher temperatures, retene has retained the characteristic side chains of its parent molecule. A total of 11 PAH compounds were identified in the boundary clay. Retene is present in substantial abundance. The identification was confirmed by analysis of a retene standard. Retene is characteristic of the combustion of resinous higher plants. Its formation depends on both temperature and oxygen access, and is apparently highest in oxygen-poor fires. Such fires would also produce soot more efficiently which may explain the high soot abundance. The relatively high level of coronene is not typical of a wood combustion source, however, though it can be produced during high temperature pyrolysis of methane, and presumably other H, C-containing materials. This would require large, hot, low O2 zones, which may occur only in very large fires. The presence of retene indicates that biomass was a significant fuel source for the soot at the Cretaceous-Tertiary boundary. The total amount of elemental C produced requires a greater than 3 percent soot yield, which is higher than typically observed for wildfires. However, retene and presumably coronene imply limited access of O2 and hence high soot yield.

  2. 10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...

  3. 10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...

  4. 10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 2 2013-01-01 2013-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...

  5. 10 CFR 140.13b - Amount of liability insurance required for uranium enrichment facilities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 2 2014-01-01 2014-01-01 false Amount of liability insurance required for uranium... required for uranium enrichment facilities. Each holder of a license issued under Parts 40 or 70 of this chapter for a uranium enrichment facility that involves the use of source material or special nuclear...

  6. Two methods for parameter estimation using multiple-trait models and beef cattle field data.

    PubMed

    Bertrand, J K; Kriese, L A

    1990-08-01

    Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.

  7. Artificial maturation of an immature sulfur- and organic matter-rich limestone from the Ghareb Formation, Jordan

    USGS Publications Warehouse

    Koopmans, M.P.; Rijpstra, W.I.C.; De Leeuw, J. W.; Lewan, M.D.; Damste, J.S.S.

    1998-01-01

An immature (Ro=0.39%), S-rich (S(org)/C = 0.07), organic matter-rich (19.6 wt. % TOC) limestone from the Ghareb Formation (Upper Cretaceous) in Jordan was artificially matured by hydrous pyrolysis (200, 220, ..., 300°C; 72 h) to study the effect of progressive diagenesis and early catagenesis on the amounts and distributions of hydrocarbons, organic sulfur compounds and S-rich geomacromolecules. The use of internal standards allowed the determination of absolute amounts. With increasing thermal maturation, large amounts of alkanes and alkylthiophenes with predominantly linear carbon skeletons are generated from the kerogen. The alkylthiophene isomer distributions do not change significantly with increasing thermal maturation, indicating the applicability of alkylthiophenes as biomarkers at relatively high levels of thermal maturity. For a given carbon skeleton, the saturated hydrocarbon, alkylthiophenes and alkylbenzo[b]thiophenes are stable forms at relatively high temperatures, whereas the alkylsulfides are not stable. The large amount of alkylthiophenes produced relative to the alkanes may be explained by the large number of monosulfide links per carbon skeleton. These results are in good agreement with those obtained previously for an artificial maturation series of an immature S-rich sample from the Gessoso-solfifera Formation.

  8. Metagenomics of rumen bacteriophage from thirteen lactating dairy cattle

    PubMed Central

    2013-01-01

    Background The bovine rumen hosts a diverse and complex community of Eukarya, Bacteria, Archea and viruses (including bacteriophage). The rumen viral population (the rumen virome) has received little attention compared to the rumen microbial population (the rumen microbiome). We used massively parallel sequencing of virus like particles to investigate the diversity of the rumen virome in thirteen lactating Australian Holstein dairy cattle all housed in the same location, 12 of which were sampled on the same day. Results Fourteen putative viral sequence fragments over 30 Kbp in length were assembled and annotated. Many of the putative genes in the assembled contigs showed no homology to previously annotated genes, highlighting the large amount of work still required to fully annotate the functions encoded in viral genomes. The abundance of the contig sequences varied widely between animals, even though the cattle were of the same age, stage of lactation and fed the same diets. Additionally the twelve animals which were co-habited shared a number of their dominant viral contigs. We compared the functional characteristics of our bovine viromes with that of other viromes, as well as rumen microbiomes. At the functional level, we found strong similarities between all of the viral samples, which were highly distinct from the rumen microbiome samples. Conclusions Our findings suggest a large amount of between animal variation in the bovine rumen virome and that co-habiting animals may have more similar viromes than non co-habited animals. We report the deepest sequencing to date of the rumen virome. This work highlights the enormous amount of novelty and variation present in the rumen virome. PMID:24180266

  9. Optimized energy harvesting materials and generator design

    NASA Astrophysics Data System (ADS)

    Graf, Christian; Hitzbleck, Julia; Feller, Torsten; Clauberg, Karin; Wagner, Joachim; Krause, Jens; Maas, Jürgen

    2013-04-01

Electroactive polymers are soft capacitors made of thin elastic and electrically insulating films coated with compliant electrodes, offering a large amount of deformation. They can either be used as actuators by applying an electric charge, or they can be used as energy converters based on the electrostatic principle. These unique properties enable the industrial development of highly efficient and environmentally sustainable energy converters, which opens up the possibility to further exploit large renewable and inexhaustible energy sources like wind and water that are otherwise widely unused. Compared to other electroactive polymer materials, polyurethanes, whose formulations have been systematically modified and optimized for energy harvesting applications, have certain advantages over silicones and acrylates. The inherently higher dipole content results in a significantly increased permittivity, and the dielectric breakdown strength is higher, too, whereby the overall specific energy, a measure of the energy gain, is better by at least a factor of ten, i.e. more than ten times as much energy can be gained from the same amount of material. In order to reduce conduction losses on the electrode during charging and discharging, a highly conductive bidirectional stretchable electrode has been developed. Other important material parameters like stiffness and bulk resistivity have been optimized to fit the requirements. To realize high-power energy harvesting systems, substantial amounts of electroactive polymer material are necessary, as well as a smart mechanical and electrical design of the generator. Here we report on different measures to evaluate and improve electroactive polymer materials for energy harvesting, e.g. by reducing the defect occurrence and improving the electrode behavior.
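
    The claimed order-of-magnitude gain is consistent with the electrostatic energy density of a dielectric film, u = 0.5·ε0·εr·E², evaluated at the breakdown field: roughly tripling both the permittivity and the usable field gives about a tenfold gain. The numbers below are placeholders chosen only to illustrate that scaling, not measured values from the paper.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def energy_density(eps_r, e_field_v_per_m):
    """Maximum electrostatic energy density of a dielectric film, J/m^3."""
    return 0.5 * EPS0 * eps_r * e_field_v_per_m ** 2

# illustrative (assumed) material values, not data from the paper
silicone = energy_density(eps_r=3.0, e_field_v_per_m=80e6)
polyurethane = energy_density(eps_r=9.0, e_field_v_per_m=140e6)
print(polyurethane / silicone)   # ~9.2x, i.e. roughly the quoted "factor of ten"
```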

  10. The role of skeletal muscle contractile duration throughout the whole day: reducing sedentary time and promoting universal physical activity in all people

    PubMed Central

    2017-01-01

    Abstract A shared goal of many researchers has been to discover how to improve health and prevent disease, through safely replacing a large amount of daily sedentary time with physical activity in everyone, regardless of age and current health status. This involves contrasting how different muscle contractile activity patterns regulate the underlying molecular and physiological responses impacting health‐related processes. It also requires an equal attention to behavioural feasibility studies in extremely unfit and sedentary people. A sound scientific principle is that the body is constantly sensing and responding to changes in skeletal muscle metabolism induced by contractile activity. Because of that, the rapid time course of health‐related responses to physical inactivity/activity patterns are caused in large part directly because of the variable amounts of muscle inactivity/activity throughout the day. However, traditional modes and doses of exercise fall far short of replacing most of the sedentary time in the modern lifestyle, because both the weekly frequency and the weekly duration of exercise time are an order of magnitude less than those for people sitting inactive. This can explain why high amounts of sedentary time produce distinct metabolic and cardiovascular responses through inactivity physiology that are not sufficiently prevented by low doses of exercise. For these reasons, we hypothesize that maintaining a high metabolic rate over the majority of the day, through safe and sustainable types of muscular activity, will be the optimal way to create a healthy active lifestyle over the whole lifespan. PMID:28657123

  11. A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.

    PubMed

    Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V

    2016-07-01

    In the age of information superhighway, big data play a significant role in information processing, extractions, retrieving and management. In computational biology, the continuous challenge is to manage the biological data. Data mining techniques are sometimes imperfect for new space and time requirements. Thus, it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that enfolds the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalance data reduction using k-nearest neighbor (K-NN) classification approach has been introduced. The pivotal objective of this work is to illustrate real training data sets with reduced amount of elements or instances. These reduced amounts of data sets will ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset that consists of 90 million pairs has been used. The proposed model reduces the imbalance data sets from large-scale data sets without loss of its accuracy. The obtained results depict that MapReduce based K-NN classifier provided accurate results for big data of DNA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
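
    A minimal sketch of the generic map/reduce formulation of K-NN classification that such a framework builds on: each mapper returns the k nearest candidates from its own data partition, and the reducer merges them into the global k nearest before voting. It is written with plain Python functions rather than an actual Hadoop or Spark runtime, and it is not the paper's specific imbalance-reduction pipeline; all names are illustrative.

```python
import heapq
from collections import Counter

def knn_map(partition, query, k):
    """Mapper: k nearest candidates (distance, label) within one data partition."""
    dists = [(sum((a - b) ** 2 for a, b in zip(x, query)), label)
             for x, label in partition]
    return heapq.nsmallest(k, dists)

def knn_reduce(candidate_lists, k):
    """Reducer: merge per-partition candidates and vote among the global k nearest."""
    merged = heapq.nsmallest(k, (c for lst in candidate_lists for c in lst))
    return Counter(label for _, label in merged).most_common(1)[0][0]

def classify(partitions, query, k=5):
    """partitions: iterable of [(feature_vector, label), ...] chunks."""
    return knn_reduce([knn_map(p, query, k) for p in partitions], k)
```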

  12. Simultaneous determination of the quantity and isotopic signature of dissolved organic matter from soil water using high-performance liquid chromatography/isotope ratio mass spectrometry.

    PubMed

    Scheibe, Andrea; Krantz, Lars; Gleixner, Gerd

    2012-01-30

We assessed the accuracy and utility of a modified high-performance liquid chromatography/isotope ratio mass spectrometry (HPLC/IRMS) system for measuring the amount and stable carbon isotope signature of dissolved organic matter (DOM) <1 µm. Using a range of standard compounds as well as soil solutions sampled in the field, we compared the results of the HPLC/IRMS analysis with those from other methods for determining carbon and (13)C content. The conversion efficiency of the in-line wet oxidation of the HPLC/IRMS averaged 99.3% for a range of standard compounds. The agreement between HPLC/IRMS and other methods in the amount and isotopic signature of both standard compounds and soil water samples was excellent. For DOM concentrations below 10 mg C L(-1) (250 ng C total), pre-concentration or large-volume injections are recommended in order to prevent background interferences. We were able to detect large differences in the (13)C signatures of soil solution DOM sampled at 10 cm depth in plots with either C3 or C4 vegetation and in two different parent materials. These measurements also revealed changes in the (13)C signature that demonstrate rapid loss of plant-derived C with depth. Overall, the modified HPLC/IRMS system has the advantages of rapid sample preparation, small required sample volume and high sample throughput, while showing performance comparable with other methods for measuring the amount and isotopic signature of DOM. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Exoplanet phase curves at large phase angles. Diagnostics for extended hazy atmospheres

    NASA Astrophysics Data System (ADS)

    García Muñoz, A.; Cabrera, J.

    2018-01-01

At optical wavelengths, Titan's brightness at large Sun-Titan-observer phase angles significantly exceeds its dayside brightness. The brightening that occurs near back-illumination is due to moderately large haze particles in the moon's extended atmosphere that forward-scatter the incident sunlight. Motivated by this phenomenon, here we investigate the forward scattering from currently known exoplanets, its diagnostic possibilities, the observational requirements to resolve it, and potential implications. An analytical expression is derived for the amount of starlight forward-scattered by an exponential atmosphere that takes into account the finite angular size of the star. We use this expression to tentatively estimate how prevalent this phenomenon may be. Based on numerical calculations that consider exoplanet visibility, we identify numerous planets with predicted out-of-transit forward-scattering signals of up to tens of parts per million, provided that aerosols of ≳1 μm size form over an extended vertical region near the optical radius level. We propose that the interpretation of available optical phase curves should be revised to constrain the strength of this phenomenon, which might provide insight into aerosol scale heights and particle sizes. For the relatively general atmospheres considered here, forward scattering reduces the transmission-only transit depth by typically less than the equivalent of a scale height. For short-period exoplanets, the finite angular size of the star severely affects the amount of radiation scattered towards the observer at mid-transit.

  14. Research on photodiode detector-based spatial transient light detection and processing system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Yang; Zhao, Hui; Nan, Meng

    2016-10-01

In order to realize real-time signal identification and processing of spatial transient light, the features and the energy of the captured target light signal are first described and quantitatively calculated. Considering that a transient light signal occurs randomly, has a short duration, and has an evident beginning and ending, a photodiode-detector-based spatial transient light detection and processing system is proposed and designed in this paper. The system has a large field of view and is used to realize non-imaging energy detection of a random, transient and weak point target under the complex background of the space environment. Weak-signal extraction under a strong background is difficult. In this paper, considering that the background signal changes slowly while the target signal changes quickly, a filter is adopted to subtract the signal's background. Variable-speed sampling is realized by sampling data points with a gradually increasing interval. This solves two difficulties: real-time processing of a large amount of data, and the power consumption required to store it. Test results with a self-made simulated signal demonstrate the effectiveness of the design scheme. The practical system operated reliably, and detection and processing of the target signal under a strong sunlight background was realized. The results indicate that the system can detect the target signal's characteristic waveform in real time and monitor the system's working parameters. The prototype design could be used in a variety of engineering applications.
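
    A minimal sketch of the two ideas described: background subtraction with a slow running average so that the fast transient stands out, and storage thinning with a gradually increasing sampling interval. Window lengths, thresholds and the growth factor are illustrative assumptions.

```python
import numpy as np

def detect_transient(signal, window=200, k_sigma=6.0):
    """Subtract a slowly varying running-average background and flag fast transients."""
    kernel = np.ones(window) / window
    background = np.convolve(signal, kernel, mode="same")    # slowly varying component
    residual = signal - background                            # fast component
    threshold = k_sigma * residual.std()
    return residual, np.flatnonzero(residual > threshold)     # indices of candidate events

def variable_speed_indices(n_samples, initial_step=1, growth=1.2):
    """Store samples with a gradually increasing interval to bound the data volume."""
    idx, i, step = [], 0, float(initial_step)
    while i < n_samples:
        idx.append(i)
        i += max(1, int(round(step)))
        step *= growth
    return idx
```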

  15. Rhizosphere Environment and Labile Phosphorus Release from Organic Waste-Amended Soils.

    NASA Astrophysics Data System (ADS)

    Dao, Thanh H.

    2015-04-01

Crop residues and biofertilizers are primary sources of nutrients for organic crop production. However, soils treated with large amounts of nutrient-enriched manure have elevated phosphorus (P) levels in regions of intensive animal agriculture. Surpluses occurred in these amended soils, resulting in large pools of exchangeable inorganic P (Pi) and enzyme-labile organic P (Po) averaging 30.9 and 68.2 mg kg-1, respectively. Organic acids produced during crop residue decomposition can promote the complexation of counter-ions and decouple and release unbound Pi from metal and alkali metal phosphates. Animal manure and cover crop residues also contain large amounts of soluble organic matter, and likely generate similar ligands. However, there is a high degree of heterogeneity in the spatial distribution of P in such amended fields, arising from variance in substrate physical forms (ranging from slurries to dried solids) and composition, and from diverse application methods and equipment. Distinct clusters of Pi and Po were observed, where accumulation of the latter forms was associated with high soil microbial biomass C and reduced phosphomonoesterase activity. Accurate estimates of plant requirements and of the lability of soil P pools, and real-time plant and soil P sensing systems, are critical considerations for optimally managing manure-derived nutrients in crop production systems. An in situ X-ray fluorescence-based approach to sensing canopy and soil XRFS-P was developed to improve the yield-soil P relationship for optimal nutrient recommendations, in addition to allowing in-the-field verification of foliar P status.

  16. Analysis of Possibility of Yeast Production Increase at Maintained Carbon Dioxide Emission Level

    NASA Astrophysics Data System (ADS)

    Włodarczyk, Barbara; Włodarczyk, Paweł P.

    2016-12-01

    The main polluting parameters of technological wastewater from the yeast industry (dregs from decantation and thickening of the wort) are nitrogen, potassium, and COD. Such wastewater is mostly disposed of on agricultural fields. Unfortunately, these fields can accept only a limited amount of waste, and the parameter that limits the amount of wastewater they can receive is nitrogen. When production capacity is large, the sewage is often pretreated at an evaporator station. However, because of the fairly high running costs of the evaporator station, this solution is currently applied only to a small fraction of the waste (just enough to meet legal requirements). Replacing natural gas with biomass supplied to the evaporator station from the agricultural fields would both maintain the carbon dioxide emission level and enable production growth. Moreover, biomass grown on agricultural fields fertilized with wastewater from yeast production in turn allows a greater volume of wastewater to be utilized. Theoretically, the possible increase in yeast production while maintaining the carbon dioxide emission level can reach 70%. The solution presented in this paper therefore combines intensification of yeast production with maintenance of the carbon dioxide emission level.

  17. Estimating glucose requirements of an activated immune system in growing pigs.

    PubMed

    Kvidera, S K; Horst, E A; Mayorga, E J; Sanz-Fernandez, M V; Abuajamieh, M; Baumgard, L H

    2017-11-01

    Activated immune cells become obligate glucose utilizers, and a large i.v. lipopolysaccharide (LPS) dose causes insulin resistance and severe hypoglycemia. Therefore, the study objectives were to quantify the amount of glucose needed to maintain euglycemia following an endotoxin challenge as a proxy for leukocyte glucose requirements. Fifteen fasted crossbred gilts (30.3 ± 1.7 kg) were bilaterally jugular catheterized and assigned to 1 of 2 i.v. bolus treatments: control (CON; 10 mL sterile saline; n = 7) or LPS challenge + euglycemic clamp (LPS-Eu; 055:B5; 5 μg/kg BW; 50% dextrose infusion to maintain euglycemia; n = 8). Following administration, blood glucose was determined every 10 min and dextrose infusion rates were adjusted in LPS-Eu pigs to maintain euglycemia for 8 h. Pigs were fasted for 8 h prior to the bolus and remained fasted throughout the challenge. Rectal temperature was increased in LPS-Eu pigs relative to CON pigs (39.8 vs. 38.8°C; P < 0.01). Relative to baseline, CON pigs had 20% decreased blood glucose from 300 to 480 min postbolus (P = 0.01), whereas circulating glucose in LPS-Eu pigs did not differ (P = 0.96) from prebolus levels. A total of 116 ± 8 g of infused glucose was required to maintain euglycemia in LPS-Eu pigs. Relative to CON pigs, overall plasma insulin, blood urea nitrogen, β-hydroxybutyrate, lactate, and LPS-binding protein were increased in LPS-Eu pigs (295, 108, 29, 133, and 13%, respectively; P ≤ 0.04), whereas NEFA was decreased (66%; P < 0.01). Neutrophils in LPS-Eu pigs were decreased 84% at 120 min postbolus and returned to CON levels by 480 min (P < 0.01). Overall, lymphocytes, monocytes, eosinophils, and basophils were decreased in LPS-Eu pigs relative to CON pigs (75, 87, 70, and 50%, respectively; P ≤ 0.05). These alterations in metabolism and the large amount of glucose needed to maintain euglycemia indicate nutrient repartitioning away from growth and toward the immune system. Glucose is an important fuel for the immune system, and data from this study established that the glucose requirements of an intensely and acutely activated immune system in growing pigs are approximately 1.1 g/kg BW/h.

  18. Can Advances in Science and Technology Prevent Global Warming? A Critical Review of Limitations and Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huesemann, Michael H.

    The most stringent emission scenarios published by the Intergovernmental Panel on Climate Change (IPCC) would result in the stabilization of atmospheric carbon dioxide (CO2) at concentrations of approximately 550 ppm, which would produce a global temperature increase of at least 2°C by 2100. Given the large uncertainties regarding the potential risks associated with this degree of global warming, it would be more prudent to stabilize atmospheric CO2 concentrations at or below current levels, which, in turn, would require a greater than 20-fold reduction (i.e., ~95%) in per capita carbon emissions in industrialized nations within the next 50 to 100 years. Using the Kaya equation as a conceptual framework, this paper examines whether CO2 mitigation approaches such as energy efficiency improvements, carbon sequestration, and the development of carbon-free energy sources would be sufficient to bring about the required reduction in per capita carbon emissions without creating unforeseen negative impacts elsewhere. In terms of energy efficiency, large improvements (~5-fold) are in principle possible given aggressive investments in R&D and if market imperfections such as corporate subsidies are removed. However, energy efficiency improvements per se will not result in a reduction in carbon emissions if, as predicted by the IPCC, the size of the global economy has expanded 12-26 fold by 2100. Terrestrial carbon sequestration via reforestation and improved agricultural soil management has many environmental advantages but has only limited CO2 mitigation potential because the global terrestrial carbon sink (ca. 200 Gt C) is small relative to the size of fossil fuel deposits (~4000 Gt C). By contrast, very large amounts of CO2 can potentially be removed from the atmosphere via sequestration in geologic formations and oceans, but carbon storage is not permanent and is likely to create many unpredictable environmental consequences. Renewable solar energy can in theory provide large amounts of carbon-free power. However, biomass and hydroelectric energy can only be marginally expanded, and large-scale solar energy installations (i.e., wind, photovoltaics, and direct thermal) are likely to have significant negative environmental impacts. Expansion of nuclear energy is highly unlikely due to concerns over reactor safety, radioactive waste management, weapons proliferation, and cost. In view of the serious limitations and liabilities of many proposed CO2 mitigation approaches, there appear to remain only a few no-regrets options, such as drastic energy efficiency improvements, extensive terrestrial carbon sequestration, and cautious expansion of renewable energy generation. These promising CO2 mitigation technologies have the potential to bring about the required 20-fold reduction in per capita carbon emissions only if population and economic growth are halted without delay. Thus, addressing the problem of global warming requires not only technological research and development but also a reexamination of core values that mistakenly equate material consumption and economic growth to happiness and well-being.
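
    For readers unfamiliar with the conceptual framework mentioned above, the Kaya identity (given here as the standard decomposition, not reproduced from this abstract) factors total CO2 emissions into population, affluence, energy-intensity, and carbon-intensity terms:

    ```latex
    F = P \times \frac{G}{P} \times \frac{E}{G} \times \frac{F}{E}
    ```

    where F is global CO2 emissions, P is population, G is world GDP, and E is global primary energy consumption. Emissions fall only if reductions in energy intensity (E/G) and carbon intensity (F/E) outpace growth in population and per capita GDP, which is the arithmetic behind the abstract's point that efficiency gains can be offset by a 12-26-fold larger economy.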

  19. 13 CFR 120.847 - Requirements for the Loan Loss Reserve Fund (LLRF).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... by all parties in a timely fashion, and that all required deposits are made. (b) PCLP CDC Exposure... establish and maintain an LLRF equal to one percent of the original principal amount (the face amount) of...

  20. 13 CFR 120.847 - Requirements for the Loan Loss Reserve Fund (LLRF).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... by all parties in a timely fashion, and that all required deposits are made. (b) PCLP CDC Exposure... establish and maintain an LLRF equal to one percent of the original principal amount (the face amount) of...

  1. Cover crops in vegetable production systems

    USDA-ARS?s Scientific Manuscript database

    Current vegetable production systems require an intensive amount of work and inputs, and if not properly managed could have detrimental effects on soil and the environment. Practices such as intensive tillage, increased herbicide use, ...

  2. 33 CFR 135.203 - Amount required.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...

  3. 33 CFR 135.203 - Amount required.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...

  4. 33 CFR 135.203 - Amount required.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...

  5. 33 CFR 135.203 - Amount required.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...

  6. 33 CFR 135.203 - Amount required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Amount required. 135.203 Section 135.203 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE POLLUTION FINANCIAL RESPONSIBILITY AND COMPENSATION OFFSHORE OIL POLLUTION COMPENSATION FUND...

  7. CFD modeling of thermoelectric generators in automotive EGR-coolers

    NASA Astrophysics Data System (ADS)

    Högblom, Olle; Andersson, Ronnie

    2012-06-01

    A large amount of the waste heat in the exhaust gases from diesel engines is removed in the exhaust gas recirculation (EGR) cooler. Introducing a thermoelectric generator (TEG) in an EGR cooler requires a completely new design of the heat exchanger. To accomplish that, a model of the TEG-EGR system is required. In this work, a transient 3D CFD model for simulating gas flow, heat transfer, and power generation has been developed. The model allows critical design parameters in the TEG-EGR to be identified and design requirements for the system to be specified. Besides predicting the Seebeck, Peltier, Thomson, and Joule effects, the simulations also give detailed insight into the temperature gradients in the gas phase and inside the thermoelectric (TE) elements. The model is a valuable tool for identifying bottlenecks, improving the design, and selecting optimal TE materials and operating conditions. The results show that the greatest heat transfer resistance is located in the gas phase, and it is critical to reduce it in order to achieve a large temperature difference across the thermoelectric elements without compromising the maximum allowable pressure drop in the system. Further results from an investigation of thermoelectric performance during a vehicle test cycle are also presented.
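
    As a complement to the full CFD treatment described above, the basic electrical output of a TEG can be sketched with a lumped steady-state model. This is illustrative only; the Seebeck coefficient, resistances, and temperatures below are assumed placeholder values for a small module, not results from the paper.

    ```python
    def teg_power(seebeck_v_per_k, t_hot, t_cold, r_internal, r_load):
        """Lumped estimate of the power delivered to a load by a TEG:
        open-circuit voltage from the Seebeck effect, then a simple series
        circuit. Thomson/Peltier feedback on the temperature field (which
        the CFD model captures) is ignored here."""
        v_oc = seebeck_v_per_k * (t_hot - t_cold)   # Seebeck open-circuit voltage
        current = v_oc / (r_internal + r_load)      # series circuit
        return current ** 2 * r_load                # power dissipated in the load

    # Placeholder values; a matched load (r_load == r_internal) maximizes power
    p = teg_power(seebeck_v_per_k=0.05, t_hot=520.0, t_cold=370.0,
                  r_internal=2.0, r_load=2.0)
    print(f"Estimated module output: {p:.2f} W")
    ```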

  8. Conical Seat Shut-Off Valve

    NASA Technical Reports Server (NTRS)

    Farner, Bruce

    2013-01-01

    A moveable valve for controlling the flow of a pressurized working fluid was designed. The valve consists of a hollow, moveable floating piston pressed against a stationary solid seat, and it can use the working fluid itself to seal the valve. This novel open/closed valve can use metal-to-metal seats without any seat sliding action, and therefore without the associated damaging effects. In existing standard high-pressure ball valves, the seats tend to become damaged during rotation of the ball, and the forces acting on the ball and stem create large amounts of friction; the combination of these effects can lead to system failure. In an attempt to reduce these damaging effects and seat failures, soft seats in the ball valve have been eliminated; however, the sliding action of the ball across the highly loaded seat still tends to scratch the seat, causing failure. Ball valves also require large actuators to operate: positioning metal-to-metal seats requires more loading, which tends to increase the size of the required actuator and can lead to failures in other areas such as the stem and bearing mechanisms, increasing cost and maintenance. This non-sliding-seat valve allows metal-to-metal seats without the damaging effects that can lead to failure, and it enables large seating forces without damaging the valve. In addition, even in large, high-pressure applications, the design does not require large conventional valve actuators, and the valve stem itself is eliminated; actuation is achieved with a small, simple solenoid valve. The design also eliminates the need for many of the seals used in existing ball valve and globe valve designs, which are themselves a common cause of failure. This, coupled with the elimination of the valve stem and conventional valve actuator, improves valve reliability and seat life. Other mechanical liftoff seats have been designed, but they have resulted only in increased cost and other reliability issues. In this design, the valve is opened by simply removing the working-fluid pressure that presses the piston against the seat; no external force is required. By eliminating variables associated with existing ball and globe configurations that can damage a valve, this design reduces downtime in rocket engine test schedules and maintenance costs.

  9. Information Fusion of Conflicting Input Data.

    PubMed

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-10-29

    Sensors, as well as actuators and external sources such as databases, serve as data sources for condition monitoring of industrial applications or for the acquisition of characteristic parameters such as production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in it. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges for the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results that are in accordance with the real world. An analysis of the scientific state of the art shows that current solutions fulfil these requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation), which employs the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is demonstrated by its evaluation in a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared with state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.
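
    The conflict-reducing idea can be illustrated generically. The sketch below is not the MACRO/μBalTLCS algorithm described in the article; the agreement measure, weighting scheme, and numeric values are simplified assumptions. It simply down-weights sources that disagree strongly with the consensus of the others before averaging.

    ```python
    import numpy as np

    def conflict_reducing_fusion(values, confidences, scale=1.0):
        """Fuse scalar sensor readings while reducing the influence of
        conflicting sources. Each source's weight is its stated confidence
        multiplied by an agreement factor that decays with its distance from
        the confidence-weighted consensus of the *other* sources."""
        values = np.asarray(values, dtype=float)
        confidences = np.asarray(confidences, dtype=float)
        weights = np.empty_like(values)
        for i in range(values.size):
            mask = np.arange(values.size) != i
            others = np.average(values[mask], weights=confidences[mask])
            agreement = np.exp(-abs(values[i] - others) / scale)  # 1 = full agreement
            weights[i] = confidences[i] * agreement
        return float(np.average(values, weights=weights)), weights

    # Three agreeing sensors and one conflicting outlier
    fused, w = conflict_reducing_fusion([10.1, 9.9, 10.2, 14.0],
                                        [0.9, 0.8, 0.9, 0.9])
    print(f"fused value: {fused:.2f}, weights: {np.round(w, 3)}")
    ```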

  10. Integrated process development-a robust, rapid method for inclusion body harvesting and processing at the microscale level.

    PubMed

    Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid

    2017-10-21

    Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, because laboratory-scale methods require large amounts of material and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing with regard to impurity depletion and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.

  11. Information Fusion of Conflicting Input Data

    PubMed Central

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-01-01

    Sensors, as well as actuators and external sources such as databases, serve as data sources for condition monitoring of industrial applications or for the acquisition of characteristic parameters such as production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in it. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges for the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results that are in accordance with the real world. An analysis of the scientific state of the art shows that current solutions fulfil these requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation), which employs the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is demonstrated by its evaluation in a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared with state-of-the-art fusion mechanisms. The utilised data is published and freely accessible. PMID:27801874

  12. Carbon dioxide efficiency of terrestrial enhanced weathering.

    PubMed

    Moosdorf, Nils; Renforth, Phil; Hartmann, Jens

    2014-05-06

    Terrestrial enhanced weathering, the spreading of ultramafic silicate rock flour to enhance natural weathering rates, has been suggested as part of a strategy to reduce global atmospheric CO2 levels. We budget potential CO2 sequestration against associated CO2 emissions to assess the net CO2 removal of terrestrial enhanced weathering. We combine global spatial data sets of potential source rocks, transport networks, and application areas with associated CO2 emissions in optimistic and pessimistic scenarios. The results show that the choice of source rock and the material comminution technique dominate the CO2 efficiency of enhanced weathering. CO2 emissions from transport amount on average to 0.5-3% of the potentially sequestered CO2, and the emissions from material mining and application are negligible. After accounting for all emissions, 0.5-1.0 t of CO2 can be sequestered on average per tonne of rock, translating into an energy cost of 1.6 to 9.9 GJ per tonne of CO2 sequestered by enhanced weathering. However, controlling or substantially reducing atmospheric CO2 concentrations with enhanced weathering would require very large amounts of rock. Before enhanced weathering could be applied on large scales, more research is needed to assess weathering rates, potential side effects, social acceptability, and mechanisms of governance.
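
    The budgeting logic above is simple enough to sketch. The numbers below are illustrative placeholders chosen to fall within the ranges quoted in the abstract; they are not the study's spatially resolved results.

    ```python
    def net_co2_per_tonne_rock(gross_t_co2_per_t_rock,
                               comminution_t_co2_per_t_rock,
                               transport_fraction,
                               mining_application_t_co2_per_t_rock=0.0):
        """Net CO2 removal per tonne of spread rock: gross weathering potential
        minus emissions from grinding, transport (expressed as a fraction of
        the gross potential), and the near-negligible mining/application term."""
        transport = transport_fraction * gross_t_co2_per_t_rock
        return (gross_t_co2_per_t_rock
                - comminution_t_co2_per_t_rock
                - transport
                - mining_application_t_co2_per_t_rock)

    # Illustrative optimistic vs. pessimistic cases (placeholder values)
    optimistic = net_co2_per_tonne_rock(1.05, 0.05, 0.005)
    pessimistic = net_co2_per_tonne_rock(0.80, 0.25, 0.03)
    print(f"net sequestration: {optimistic:.2f} / {pessimistic:.2f} t CO2 per t rock")
    ```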

  13. Dust Removal Technology for a Mars In Situ Resource Utilization System

    NASA Technical Reports Server (NTRS)

    Calle, C. I.; Johansen, M. R.; Williams, B. S.; Hogue, M. D.; Mackey, P. J.; Clements, J. S.

    2011-01-01

    Several In Situ Resource Utilization (ISRU) systems being considered to enable future manned exploration of Mars require capture of Martian atmospheric gas to extract oxygen and other commodities. However, the Martian atmosphere contains relatively large amounts of dust, which must be removed in the collection systems of the ISRU chambers. The amount of atmospheric dust varies greatly with the presence of daily dust devils and the less frequent but much more powerful global dust storms. A common and mature dust removal technology for terrestrial systems is the electrostatic precipitator. With this technology, the dust particles to be captured are given an electrostatic charge by means of a corona discharge; the charged particles are then driven to a region of high electric field, which forces them onto a collector for capture. Several difficulties appear when this technology is adapted to the Martian atmospheric environment. At the low atmospheric pressure of Mars, electrical breakdown occurs at much lower voltages than on Earth and a corona discharge is difficult to sustain. In this paper, we report on our efforts to obtain a steady corona/glow discharge in a simulated Martian atmosphere of carbon dioxide at 9 millibars of pressure. We also present results on the design of a dust capture system under these atmospheric conditions.
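
    The difficulty of sustaining a corona at 9 mbar is usually discussed in terms of Paschen's law, given here for context only (the constants A and B and the secondary-emission coefficient γse are gas- and electrode-dependent and are not taken from this paper). It expresses the breakdown voltage as a function of the pressure-gap product pd:

    ```latex
    V_B = \frac{B\,p d}{\ln(A\,p d) - \ln\!\left[\ln\!\left(1 + \tfrac{1}{\gamma_{se}}\right)\right]}
    ```

    At Martian pressures, the pd product for practical electrode gaps sits far closer to the Paschen minimum than it does at Earth-ambient pressure, so breakdown occurs at much lower voltages and the operating window between corona onset and arcing narrows, consistent with the behavior reported in the abstract.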

  14. A tool for optimization of the production and user analysis on the Grid, C. Grigoras for the ALICE Collaboration

    NASA Astrophysics Data System (ADS)

    Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores

    2011-12-01

    With the LHC and ALICE entering full operation and production modes, the amount of simulation, RAW data processing, and end-user analysis computational tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data, and methods of analyzing the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). The LPM automatically submits jobs to the Grid based on triggers and conditions, for example after the completion of a physics run. It follows the evolution of the jobs and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing the job status, LPM provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future it will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.

  15. Evaluation of new superficially porous particles with carbon core and nanodiamond-polymer shell for proteins characterization.

    PubMed

    Bobály, Balázs; Guillarme, Davy; Fekete, Szabolcs

    2015-02-01

    A new superficially porous material possessing a carbon core, a nanodiamond-polymer shell, and a pore size of 180 Å was evaluated for the analysis of large proteins. Because the stationary phase on this new support contains a certain amount of protonated amino groups within the shell structure, the resulting retention mechanism is most probably a mix of reversed phase and anion exchange. However, under the applied conditions (0.1-0.5% TFA in the mobile phase), the main retention mechanism for proteins appeared to be hydrophobic interaction with the C18 alkyl chains on this carbon-based material. In this study, we demonstrated that there was no need to increase mobile phase temperature, as the peak capacity was not modified considerably between 30 and 80°C for the model proteins; thus, the risk of thermal on-column degradation or denaturation of large proteins is not relevant. Another important difference compared with silica-based materials is that this carbon-based column requires a larger amount of TFA, between 0.2 and 0.5%. Finally, it is important to mention that the selectivity between closely related proteins (oxidized, native, and reduced forms of Interferon α-2A variants) could be changed mostly through mobile phase temperature. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. View the label before you view the movie: A field experiment into the impact of Portion size and Guideline Daily Amounts labelling on soft drinks in cinemas

    PubMed Central

    2011-01-01

    Background: Large soft drink sizes increase consumption and thereby contribute to obesity. Portion size labelling may help consumers to select more appropriate food portions. This study aimed to assess the effectiveness of portion size and caloric Guideline Daily Amounts (GDA) labelling on consumers' portion size choices and consumption of regular soft drinks. Methods: A field experiment took place on two subsequent evenings in a Dutch cinema. Participants (n = 101) were asked to select one of five different portion sizes of a soft drink. Consumers were provided with either portion size and caloric GDA labelling (experimental condition) or with millilitre information (control condition). Results: Labelling neither stimulated participants to choose small portion sizes (OR = .75, p = .61, CI: .25 - 2.25), nor dissuaded participants from choosing large portion sizes (OR = .51, p = .36, CI: .12 - 2.15). Conclusions: Portion size and caloric GDA labelling were found to have no effect on soft drink intake. Further research among a larger group of participants, combined with pricing strategies, is required. The results of this study are relevant for the current public health debate on food labelling. PMID:21645373

  17. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

    PubMed

    Divett, T; Vennell, R; Stevens, C

    2013-02-28

    At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within it will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow through, and power capture of, large arrays in tidally reversing flow can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.

  18. Sample presentation, sources of error and future perspectives on the application of vibrational spectroscopy in the wine industry.

    PubMed

    Cozzolino, Daniel

    2015-03-30

    Vibrational spectroscopy encompasses a number of techniques and methods, including ultraviolet, visible, Fourier transform infrared or mid infrared, near infrared, and Raman spectroscopy. The use of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated and to interpret the spectra in a meaningful way in order to develop a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications are discussed as examples of these developments. © 2014 Society of Chemical Industry.
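
    As a generic illustration of the multivariate step described above (this sketch is not a method from the paper; the array shapes and component count are arbitrary assumptions), principal component analysis is a common first move for compressing hundreds of correlated absorbance variables into a few scores before building a calibration model.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Simulated "spectra": 50 wine samples x 600 wavelength variables
    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(50, 600)).cumsum(axis=1)  # smooth, correlated curves

    pca = PCA(n_components=5)                 # keep a handful of latent factors
    scores = pca.fit_transform(spectra)       # 50 x 5 score matrix for later modelling
    print(scores.shape, pca.explained_variance_ratio_.round(3))
    ```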

  19. Chemical, biological, radiological, and nuclear decontamination: Recent trends and future perspective

    PubMed Central

    Kumar, Vinod; Goel, Rajeev; Chawla, Raman; Silambarasan, M.; Sharma, Rakesh Kumar

    2010-01-01

    Chemical, biological, radiological, and nuclear (CBRN) decontamination is the removal of CBRN material from equipment or humans. The objectives of decontamination are to reduce the radiation burden, salvage equipment and materials, remove loose CBRN contaminants, and fix the remainder in place in preparation for protective storage or permanent disposal work activities. Decontamination may be carried out using chemical, electrochemical, and mechanical means. Like materials, humans may also be contaminated with CBRN agents. Changes in cellular function can occur at lower radiation doses and exposures to chemicals; at high doses, cell death may take place. Therefore, decontaminating humans during an emergency while generating the bare minimum of waste is an enormous task requiring the dedication of a large number of personnel and a large amount of time. The general principles of CBRN decontamination are discussed in this review, with emphasis on radiodecontamination. PMID:21829318

  20. A Programming Environment Evaluation Methodology for Object-Oriented Systems. Ph.D Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Moreau, Dennis R.

    1987-01-01

    The object-oriented design strategy as both a problem decomposition and system development paradigm has made impressive inroads into the various areas of the computing sciences. Substantial development productivity improvements have been demonstrated in areas ranging from artificial intelligence to user interface design. However, there has been very little progress in the formal characterization of these productivity improvements and in the identification of the underlying cognitive mechanisms. The development and validation of models and metrics of this sort require large amounts of systematically-gathered structural and productivity data. There has, however, been a notable lack of systematically-gathered information on these development environments. A large part of this problem is attributable to the lack of a systematic programming environment evaluation methodology that is appropriate to the evaluation of object-oriented systems.
